00:00:00.001 Started by upstream project "autotest-per-patch" build number 127179
00:00:00.001 originally caused by:
00:00:00.002 Started by upstream project "jbp-per-patch" build number 24320
00:00:00.002 originally caused by:
00:00:00.002 Started by user sys_sgci
00:00:00.082 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.083 The recommended git tool is: git
00:00:00.083 using credential 00000000-0000-0000-0000-000000000002
00:00:00.085 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.111 Fetching changes from the remote Git repository
00:00:00.113 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.149 Using shallow fetch with depth 1
00:00:00.149 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.149 > git --version # timeout=10
00:00:00.179 > git --version # 'git version 2.39.2'
00:00:00.179 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.211 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.211 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/changes/32/24332/3 # timeout=5
00:00:04.681 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:04.691 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:04.702 Checking out Revision 42e00731b22fe9a8063e4b475dece9d4b345521a (FETCH_HEAD)
00:00:04.702 > git config core.sparsecheckout # timeout=10
00:00:04.712 > git read-tree -mu HEAD # timeout=10
00:00:04.732 > git checkout -f 42e00731b22fe9a8063e4b475dece9d4b345521a # timeout=5
00:00:04.754 Commit message: "jjb/autotest: add SPDK_TEST_RAID flag for docker-autotest jobs"
00:00:04.754 > git rev-list --no-walk 8a3af85d3e939d61c9d7d5b7d8ed38da3ea5ca0b # timeout=10
00:00:04.862 [Pipeline] Start of Pipeline
00:00:04.873 [Pipeline] library
00:00:04.875 Loading library shm_lib@master
00:00:04.875 Library shm_lib@master is cached. Copying from home.
00:00:04.893 [Pipeline] node
00:00:04.901 Running on CYP9 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:04.902 [Pipeline] {
00:00:04.911 [Pipeline] catchError
00:00:04.912 [Pipeline] {
00:00:04.923 [Pipeline] wrap
00:00:04.931 [Pipeline] {
00:00:04.937 [Pipeline] stage
00:00:04.939 [Pipeline] { (Prologue)
00:00:05.109 [Pipeline] sh
00:00:05.396 + logger -p user.info -t JENKINS-CI
00:00:05.416 [Pipeline] echo
00:00:05.417 Node: CYP9
00:00:05.430 [Pipeline] sh
00:00:05.732 [Pipeline] setCustomBuildProperty
00:00:05.744 [Pipeline] echo
00:00:05.745 Cleanup processes
00:00:05.750 [Pipeline] sh
00:00:06.037 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:06.037 4103918 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:06.049 [Pipeline] sh
00:00:06.332 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:06.332 ++ grep -v 'sudo pgrep'
00:00:06.332 ++ awk '{print $1}'
00:00:06.332 + sudo kill -9
00:00:06.332 + true
00:00:06.345 [Pipeline] cleanWs
00:00:06.354 [WS-CLEANUP] Deleting project workspace...
00:00:06.354 [WS-CLEANUP] Deferred wipeout is used...
00:00:06.360 [WS-CLEANUP] done
00:00:06.363 [Pipeline] setCustomBuildProperty
00:00:06.372 [Pipeline] sh
00:00:06.652 + sudo git config --global --replace-all safe.directory '*'
00:00:06.744 [Pipeline] httpRequest
00:00:06.773 [Pipeline] echo
00:00:06.774 Sorcerer 10.211.164.101 is alive
00:00:06.780 [Pipeline] httpRequest
00:00:06.784 HttpMethod: GET
00:00:06.784 URL: http://10.211.164.101/packages/jbp_42e00731b22fe9a8063e4b475dece9d4b345521a.tar.gz
00:00:06.784 Sending request to url: http://10.211.164.101/packages/jbp_42e00731b22fe9a8063e4b475dece9d4b345521a.tar.gz
00:00:06.809 Response Code: HTTP/1.1 200 OK
00:00:06.809 Success: Status code 200 is in the accepted range: 200,404
00:00:06.809 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_42e00731b22fe9a8063e4b475dece9d4b345521a.tar.gz
00:00:11.410 [Pipeline] sh
00:00:11.695 + tar --no-same-owner -xf jbp_42e00731b22fe9a8063e4b475dece9d4b345521a.tar.gz
00:00:11.710 [Pipeline] httpRequest
00:00:11.725 [Pipeline] echo
00:00:11.726 Sorcerer 10.211.164.101 is alive
00:00:11.732 [Pipeline] httpRequest
00:00:11.737 HttpMethod: GET
00:00:11.737 URL: http://10.211.164.101/packages/spdk_70425709083377aa0c23e3a0918902ddf3d34357.tar.gz
00:00:11.738 Sending request to url: http://10.211.164.101/packages/spdk_70425709083377aa0c23e3a0918902ddf3d34357.tar.gz
00:00:11.757 Response Code: HTTP/1.1 200 OK
00:00:11.757 Success: Status code 200 is in the accepted range: 200,404
00:00:11.758 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_70425709083377aa0c23e3a0918902ddf3d34357.tar.gz
00:01:05.918 [Pipeline] sh
00:01:06.207 + tar --no-same-owner -xf spdk_70425709083377aa0c23e3a0918902ddf3d34357.tar.gz
00:01:09.520 [Pipeline] sh
00:01:09.806 + git -C spdk log --oneline -n5
00:01:09.806 704257090 lib/reduce: fix the incorrect calculation method for the number of io_unit required for metadata.
00:01:09.806 fc2398dfa raid: clear base bdev configure_cb after executing
00:01:09.806 5558f3f50 raid: complete bdev_raid_create after sb is written
00:01:09.806 d005e023b raid: fix empty slot not updated in sb after resize
00:01:09.806 f41dbc235 nvme: always specify CC_CSS_NVM when CAP_CSS_IOCS is not set
00:01:09.818 [Pipeline] }
00:01:09.834 [Pipeline] // stage
00:01:09.843 [Pipeline] stage
00:01:09.845 [Pipeline] { (Prepare)
00:01:09.862 [Pipeline] writeFile
00:01:09.877 [Pipeline] sh
00:01:10.162 + logger -p user.info -t JENKINS-CI
00:01:10.174 [Pipeline] sh
00:01:10.458 + logger -p user.info -t JENKINS-CI
00:01:10.471 [Pipeline] sh
00:01:10.757 + cat autorun-spdk.conf
00:01:10.757 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:10.757 SPDK_TEST_NVMF=1
00:01:10.757 SPDK_TEST_NVME_CLI=1
00:01:10.757 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:10.757 SPDK_TEST_NVMF_NICS=e810
00:01:10.757 SPDK_TEST_VFIOUSER=1
00:01:10.757 SPDK_RUN_UBSAN=1
00:01:10.757 NET_TYPE=phy
00:01:10.765 RUN_NIGHTLY=0
00:01:10.769 [Pipeline] readFile
00:01:10.792 [Pipeline] withEnv
00:01:10.794 [Pipeline] {
00:01:10.807 [Pipeline] sh
00:01:11.092 + set -ex
00:01:11.092 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:01:11.093 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:11.093 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:11.093 ++ SPDK_TEST_NVMF=1
00:01:11.093 ++ SPDK_TEST_NVME_CLI=1
00:01:11.093 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:11.093 ++ SPDK_TEST_NVMF_NICS=e810
00:01:11.093 ++ SPDK_TEST_VFIOUSER=1
00:01:11.093 ++ SPDK_RUN_UBSAN=1
00:01:11.093 ++ NET_TYPE=phy
00:01:11.093 ++ RUN_NIGHTLY=0
00:01:11.093 + case $SPDK_TEST_NVMF_NICS in
00:01:11.093 + DRIVERS=ice
00:01:11.093 + [[ tcp == \r\d\m\a ]]
00:01:11.093 + [[ -n ice ]]
00:01:11.093 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:01:11.093 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:01:11.093 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:01:11.093 rmmod: ERROR: Module irdma is not currently loaded
00:01:11.093 rmmod: ERROR: Module i40iw is not currently loaded
00:01:11.093 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:01:11.093 + true
00:01:11.093 + for D in $DRIVERS
00:01:11.093 + sudo modprobe ice
00:01:11.093 + exit 0
00:01:11.102 [Pipeline] }
00:01:11.121 [Pipeline] // withEnv
00:01:11.126 [Pipeline] }
00:01:11.142 [Pipeline] // stage
00:01:11.150 [Pipeline] catchError
00:01:11.151 [Pipeline] {
00:01:11.163 [Pipeline] timeout
00:01:11.163 Timeout set to expire in 50 min
00:01:11.164 [Pipeline] {
00:01:11.173 [Pipeline] stage
00:01:11.174 [Pipeline] { (Tests)
00:01:11.185 [Pipeline] sh
00:01:11.470 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:11.470 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:11.470 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:11.470 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:01:11.470 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:11.470 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:11.470 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:01:11.470 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:11.470 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:11.470 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:11.470 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:01:11.470 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:11.470 + source /etc/os-release
00:01:11.470 ++ NAME='Fedora Linux'
00:01:11.470 ++ VERSION='38 (Cloud Edition)'
00:01:11.470 ++ ID=fedora
00:01:11.470 ++ VERSION_ID=38
00:01:11.470 ++ VERSION_CODENAME=
00:01:11.470 ++ PLATFORM_ID=platform:f38
00:01:11.470 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:01:11.470 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:11.470 ++ LOGO=fedora-logo-icon
00:01:11.470 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:01:11.470 ++ HOME_URL=https://fedoraproject.org/
00:01:11.470 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:01:11.470 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:11.470 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:11.470 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:11.470 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:01:11.470 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:11.470 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:01:11.470 ++ SUPPORT_END=2024-05-14
00:01:11.470 ++ VARIANT='Cloud Edition'
00:01:11.470 ++ VARIANT_ID=cloud
00:01:11.470 + uname -a
00:01:11.470 Linux spdk-cyp-09 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:01:11.470 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:01:14.772 Hugepages
00:01:14.772 node hugesize free / total
00:01:14.772 node0 1048576kB 0 / 0
00:01:14.772 node0 2048kB 0 / 0
00:01:14.772 node1 1048576kB 0 / 0
00:01:14.772 node1 2048kB 0 / 0
00:01:14.772
00:01:14.772 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:14.772 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - -
00:01:14.772 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - -
00:01:14.772 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - -
00:01:14.772 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - -
00:01:14.772 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - -
00:01:14.772 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - -
00:01:14.772 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - -
00:01:14.772 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - -
00:01:14.772 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1
00:01:14.772 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - -
00:01:14.772 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - -
00:01:14.772 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - -
00:01:14.772 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - -
00:01:14.772 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - -
00:01:14.772 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - -
00:01:14.772 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - -
00:01:14.772 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - -
00:01:14.772 + rm -f /tmp/spdk-ld-path
00:01:14.772 + source autorun-spdk.conf
00:01:14.772 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:14.772 ++ SPDK_TEST_NVMF=1
00:01:14.772 ++ SPDK_TEST_NVME_CLI=1
00:01:14.772 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:14.772 ++ SPDK_TEST_NVMF_NICS=e810
00:01:14.772 ++ SPDK_TEST_VFIOUSER=1
00:01:14.772 ++ SPDK_RUN_UBSAN=1
00:01:14.772 ++ NET_TYPE=phy
00:01:14.772 ++ RUN_NIGHTLY=0
00:01:14.772 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:14.772 + [[ -n '' ]]
00:01:14.772 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:14.772 + for M in /var/spdk/build-*-manifest.txt
00:01:14.772 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:14.772 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:14.772 + for M in /var/spdk/build-*-manifest.txt
00:01:14.772 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:14.772 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:14.772 ++ uname
00:01:14.772 + [[ Linux == \L\i\n\u\x ]]
00:01:14.772 + sudo dmesg -T
00:01:14.772 + sudo dmesg --clear
00:01:14.772 + dmesg_pid=4104996
00:01:14.772 + [[ Fedora Linux == FreeBSD ]]
00:01:14.772 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:14.772 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:14.772 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:14.772 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:01:14.772 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:01:14.772 + [[ -x /usr/src/fio-static/fio ]]
00:01:14.772 + export FIO_BIN=/usr/src/fio-static/fio
00:01:14.772 + FIO_BIN=/usr/src/fio-static/fio
00:01:14.772 + sudo dmesg -Tw
00:01:14.772 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:14.772 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:14.772 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:14.772 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:14.772 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:14.772 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:14.772 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:14.772 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:14.772 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:14.772 Test configuration:
00:01:14.772 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:14.772 SPDK_TEST_NVMF=1
00:01:14.772 SPDK_TEST_NVME_CLI=1
00:01:14.772 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:14.772 SPDK_TEST_NVMF_NICS=e810
00:01:14.772 SPDK_TEST_VFIOUSER=1
00:01:14.772 SPDK_RUN_UBSAN=1
00:01:14.772 NET_TYPE=phy
00:01:14.772 RUN_NIGHTLY=0
14:57:06 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
14:57:06 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
14:57:06 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:01:14.772 14:57:06 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:01:14.772 14:57:06 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:14.772 14:57:06 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:14.773 14:57:06 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:14.773 14:57:06 -- paths/export.sh@5 -- $ export PATH
00:01:14.773 14:57:06 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:14.773 14:57:06 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:01:14.773 14:57:06 -- common/autobuild_common.sh@447 -- $ date +%s
00:01:14.773 14:57:06 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721912226.XXXXXX
00:01:14.773 14:57:06 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721912226.ZHOwto
00:01:14.773 14:57:06 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]]
00:01:14.773 14:57:06 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']'
00:01:14.773 14:57:06 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:01:14.773 14:57:06 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:01:14.773 14:57:06 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:01:14.773 14:57:06 -- common/autobuild_common.sh@463 -- $ get_config_params
00:01:14.773 14:57:06 -- common/autotest_common.sh@398 -- $ xtrace_disable
00:01:14.773 14:57:06 -- common/autotest_common.sh@10 -- $ set +x
00:01:14.773 14:57:06 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:01:14.773 14:57:06 -- common/autobuild_common.sh@465 -- $ start_monitor_resources
00:01:14.773 14:57:06 -- pm/common@17 -- $ local monitor
00:01:14.773 14:57:06 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:14.773 14:57:06 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:14.773 14:57:06 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:14.773 14:57:06 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:14.773 14:57:06 -- pm/common@21 -- $ date +%s
00:01:14.773 14:57:06 -- pm/common@25 -- $ sleep 1
00:01:14.773 14:57:06 -- pm/common@21 -- $ date +%s
00:01:14.773 14:57:06 -- pm/common@21 -- $ date +%s
00:01:14.773 14:57:06 -- pm/common@21 -- $ date +%s
00:01:14.773 14:57:06 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721912226
00:01:14.773 14:57:06 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721912226
00:01:14.773 14:57:06 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721912226
00:01:14.773 14:57:06 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721912226
00:01:14.773 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721912226_collect-vmstat.pm.log
00:01:14.773 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721912226_collect-cpu-load.pm.log
00:01:14.773 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721912226_collect-cpu-temp.pm.log
00:01:14.773 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721912226_collect-bmc-pm.bmc.pm.log
00:01:15.714 14:57:07 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT
00:01:15.714 14:57:07 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:01:15.714 14:57:07 -- spdk/autobuild.sh@12 -- $ umask 022
00:01:15.714 14:57:07 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:15.714 14:57:07 -- spdk/autobuild.sh@16 -- $ date -u
00:01:15.714 Thu Jul 25 12:57:07 PM UTC 2024
00:01:15.714 14:57:07 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:15.714 v24.09-pre-321-g704257090
00:01:15.714 14:57:07 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:01:15.714 14:57:07 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:15.714 14:57:07 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:01:15.714 14:57:07 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:01:15.714 14:57:07 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:01:15.714 14:57:07 -- common/autotest_common.sh@10 -- $ set +x
00:01:15.715 ************************************
00:01:15.715 START TEST ubsan
00:01:15.715 ************************************
00:01:15.715 14:57:07 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan'
00:01:15.715 using ubsan
00:01:15.715
00:01:15.715 real 0m0.000s
00:01:15.715 user 0m0.000s
00:01:15.715 sys 0m0.000s
00:01:15.715 14:57:07 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable
00:01:15.715 14:57:07 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:15.715 ************************************
00:01:15.715 END TEST ubsan
00:01:15.715 ************************************
00:01:15.715 14:57:07 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:01:15.715 14:57:07 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:01:15.715 14:57:07 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:01:15.715 14:57:07 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:01:15.715 14:57:07 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:01:15.715 14:57:07 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:01:15.715 14:57:07 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:01:15.715 14:57:07 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:01:15.715 14:57:07 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
00:01:15.976 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:01:15.976 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:01:16.237 Using 'verbs' RDMA provider
00:01:32.110 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:01:44.344 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:01:44.344 Creating mk/config.mk...done.
00:01:44.344 Creating mk/cc.flags.mk...done.
00:01:44.344 Type 'make' to build.
00:01:44.344 14:57:35 -- spdk/autobuild.sh@69 -- $ run_test make make -j144
00:01:44.344 14:57:35 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:01:44.344 14:57:35 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:01:44.344 14:57:35 -- common/autotest_common.sh@10 -- $ set +x
00:01:44.344 ************************************
00:01:44.344 START TEST make
00:01:44.344 ************************************
00:01:44.344 14:57:35 make -- common/autotest_common.sh@1125 -- $ make -j144
00:01:44.344 make[1]: Nothing to be done for 'all'.
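The `run_test ubsan echo 'using ubsan'` and `run_test make make -j144` entries above wrap each test command in START/END banners plus a `time` measurement. A minimal sketch of that wrapper pattern in bash — the function body below is an illustration of the banner/timing idea only, not SPDK's actual `autotest_common.sh` implementation:

```shell
#!/usr/bin/env bash
# Hypothetical re-creation of the "START TEST ... / END TEST ..." banner
# pattern seen in the log. Banner text mimics the output above; the body
# is an assumption, not SPDK's real run_test.
run_test() {
  local name=$1
  shift
  echo "************************************"
  echo "START TEST $name"
  echo "************************************"
  time "$@"            # run the wrapped command; timing goes to stderr
  local rc=$?          # exit status of the wrapped command
  echo "************************************"
  echo "END TEST $name"
  echo "************************************"
  return "$rc"
}

run_test ubsan echo 'using ubsan'
```

The `real`/`user`/`sys` lines in the log come from `time` writing to stderr, which is why they appear interleaved with the banners.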
00:01:45.726 The Meson build system
00:01:45.726 Version: 1.3.1
00:01:45.726 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:01:45.726 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:45.726 Build type: native build
00:01:45.726 Project name: libvfio-user
00:01:45.726 Project version: 0.0.1
00:01:45.727 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:01:45.727 C linker for the host machine: cc ld.bfd 2.39-16
00:01:45.727 Host machine cpu family: x86_64
00:01:45.727 Host machine cpu: x86_64
00:01:45.727 Run-time dependency threads found: YES
00:01:45.727 Library dl found: YES
00:01:45.727 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:01:45.727 Run-time dependency json-c found: YES 0.17
00:01:45.727 Run-time dependency cmocka found: YES 1.1.7
00:01:45.727 Program pytest-3 found: NO
00:01:45.727 Program flake8 found: NO
00:01:45.727 Program misspell-fixer found: NO
00:01:45.727 Program restructuredtext-lint found: NO
00:01:45.727 Program valgrind found: YES (/usr/bin/valgrind)
00:01:45.727 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:45.727 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:45.727 Compiler for C supports arguments -Wwrite-strings: YES
00:01:45.727 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:01:45.727 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:01:45.727 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:01:45.727 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:01:45.727 Build targets in project: 8
00:01:45.727 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:01:45.727 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:01:45.727
00:01:45.727 libvfio-user 0.0.1
00:01:45.727
00:01:45.727 User defined options
00:01:45.727 buildtype : debug
00:01:45.727 default_library: shared
00:01:45.727 libdir : /usr/local/lib
00:01:45.727
00:01:45.727 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:01:45.727 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:01:45.985 [1/37] Compiling C object samples/lspci.p/lspci.c.o
00:01:45.985 [2/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:01:45.985 [3/37] Compiling C object samples/null.p/null.c.o
00:01:45.985 [4/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:01:45.985 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:01:45.985 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:01:45.985 [7/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:01:45.985 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:01:45.985 [9/37] Compiling C object test/unit_tests.p/mocks.c.o
00:01:45.985 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:01:45.985 [11/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:01:45.985 [12/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:01:45.985 [13/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:01:45.985 [14/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:01:45.985 [15/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:01:45.985 [16/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:01:45.985 [17/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:01:45.985 [18/37] Compiling C object samples/server.p/server.c.o
00:01:45.985 [19/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:01:45.985 [20/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:01:45.985 [21/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:01:45.985 [22/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:01:45.985 [23/37] Compiling C object samples/client.p/client.c.o
00:01:45.985 [24/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:01:45.985 [25/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:01:45.985 [26/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:01:45.985 [27/37] Linking target samples/client
00:01:45.985 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:01:45.985 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:01:45.985 [30/37] Linking target lib/libvfio-user.so.0.0.1
00:01:45.985 [31/37] Linking target test/unit_tests
00:01:46.244 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:01:46.244 [33/37] Linking target samples/shadow_ioeventfd_server
00:01:46.244 [34/37] Linking target samples/lspci
00:01:46.244 [35/37] Linking target samples/gpio-pci-idio-16
00:01:46.244 [36/37] Linking target samples/server
00:01:46.244 [37/37] Linking target samples/null
00:01:46.244 INFO: autodetecting backend as ninja
00:01:46.244 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:46.244 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:46.505 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:01:46.505 ninja: no work to do.
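The `DESTDIR=... meson install` step above uses the standard staged-install convention: every file that would normally go under the install prefix is written under `$DESTDIR$prefix` instead, so the build tree never touches the live system. A small self-contained sketch of that convention — the staging directory and file name are throwaway examples, not the Jenkins workspace layout:

```shell
# Staged install: files destined for $prefix land under $DESTDIR$prefix.
# Same idea as the DESTDIR=... meson install line above; the paths and the
# placeholder file here are illustrative only.
stage=$(mktemp -d)       # stand-in for .../spdk/build/libvfio-user
prefix=/usr/local/lib    # matches the "libdir : /usr/local/lib" option above

mkdir -p "$stage$prefix"
printf 'placeholder\n' > "$stage$prefix/libvfio-user.so.0.0.1"

ls "$stage$prefix"       # inspect the staged tree, not the real /usr/local/lib
```

This is why the build can install "into" `/usr/local/lib` without root: the real target is a directory inside the SPDK build tree.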
00:01:53.139 The Meson build system
00:01:53.139 Version: 1.3.1
00:01:53.139 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:01:53.139 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:01:53.139 Build type: native build
00:01:53.139 Program cat found: YES (/usr/bin/cat)
00:01:53.139 Project name: DPDK
00:01:53.139 Project version: 24.03.0
00:01:53.139 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:01:53.139 C linker for the host machine: cc ld.bfd 2.39-16
00:01:53.139 Host machine cpu family: x86_64
00:01:53.139 Host machine cpu: x86_64
00:01:53.139 Message: ## Building in Developer Mode ##
00:01:53.139 Program pkg-config found: YES (/usr/bin/pkg-config)
00:01:53.139 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:01:53.139 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:01:53.139 Program python3 found: YES (/usr/bin/python3)
00:01:53.139 Program cat found: YES (/usr/bin/cat)
00:01:53.139 Compiler for C supports arguments -march=native: YES
00:01:53.139 Checking for size of "void *" : 8
00:01:53.139 Checking for size of "void *" : 8 (cached)
00:01:53.139 Compiler for C supports link arguments -Wl,--undefined-version: NO
00:01:53.139 Library m found: YES
00:01:53.139 Library numa found: YES
00:01:53.139 Has header "numaif.h" : YES
00:01:53.139 Library fdt found: NO
00:01:53.139 Library execinfo found: NO
00:01:53.139 Has header "execinfo.h" : YES
00:01:53.139 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:01:53.139 Run-time dependency libarchive found: NO (tried pkgconfig)
00:01:53.139 Run-time dependency libbsd found: NO (tried pkgconfig)
00:01:53.139 Run-time dependency jansson found: NO (tried pkgconfig)
00:01:53.139 Run-time dependency openssl found: YES 3.0.9
00:01:53.139 Run-time dependency libpcap found: YES 1.10.4
00:01:53.139 Has header "pcap.h" with dependency libpcap: YES
00:01:53.139 Compiler for C supports arguments -Wcast-qual: YES
00:01:53.139 Compiler for C supports arguments -Wdeprecated: YES
00:01:53.139 Compiler for C supports arguments -Wformat: YES
00:01:53.139 Compiler for C supports arguments -Wformat-nonliteral: NO
00:01:53.139 Compiler for C supports arguments -Wformat-security: NO
00:01:53.139 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:53.139 Compiler for C supports arguments -Wmissing-prototypes: YES
00:01:53.139 Compiler for C supports arguments -Wnested-externs: YES
00:01:53.139 Compiler for C supports arguments -Wold-style-definition: YES
00:01:53.139 Compiler for C supports arguments -Wpointer-arith: YES
00:01:53.139 Compiler for C supports arguments -Wsign-compare: YES
00:01:53.139 Compiler for C supports arguments -Wstrict-prototypes: YES
00:01:53.139 Compiler for C supports arguments -Wundef: YES
00:01:53.139 Compiler for C supports arguments -Wwrite-strings: YES
00:01:53.139 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:01:53.139 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:01:53.139 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:53.139 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:01:53.139 Program objdump found: YES (/usr/bin/objdump)
00:01:53.139 Compiler for C supports arguments -mavx512f: YES
00:01:53.139 Checking if "AVX512 checking" compiles: YES
00:01:53.139 Fetching value of define "__SSE4_2__" : 1
00:01:53.139 Fetching value of define "__AES__" : 1
00:01:53.139 Fetching value of define "__AVX__" : 1
00:01:53.139 Fetching value of define "__AVX2__" : 1
00:01:53.139 Fetching value of define "__AVX512BW__" : 1
00:01:53.139 Fetching value of define "__AVX512CD__" : 1
00:01:53.139 Fetching value of define "__AVX512DQ__" : 1
00:01:53.139 Fetching value of define "__AVX512F__" : 1
00:01:53.139 Fetching value of define "__AVX512VL__" : 1
00:01:53.139 Fetching value of define "__PCLMUL__" : 1
00:01:53.139 Fetching value of define "__RDRND__" : 1
00:01:53.139 Fetching value of define "__RDSEED__" : 1
00:01:53.139 Fetching value of define "__VPCLMULQDQ__" : 1
00:01:53.139 Fetching value of define "__znver1__" : (undefined)
00:01:53.139 Fetching value of define "__znver2__" : (undefined)
00:01:53.139 Fetching value of define "__znver3__" : (undefined)
00:01:53.139 Fetching value of define "__znver4__" : (undefined)
00:01:53.139 Compiler for C supports arguments -Wno-format-truncation: YES
00:01:53.139 Message: lib/log: Defining dependency "log"
00:01:53.139 Message: lib/kvargs: Defining dependency "kvargs"
00:01:53.139 Message: lib/telemetry: Defining dependency "telemetry"
00:01:53.139 Checking for function "getentropy" : NO
00:01:53.139 Message: lib/eal: Defining dependency "eal"
00:01:53.139 Message: lib/ring: Defining dependency "ring"
00:01:53.139 Message: lib/rcu: Defining dependency "rcu"
00:01:53.139 Message: lib/mempool: Defining dependency "mempool"
00:01:53.139 Message: lib/mbuf: Defining dependency "mbuf"
00:01:53.139 Fetching value of define "__PCLMUL__" : 1 (cached)
00:01:53.139 Fetching value of define "__AVX512F__" : 1 (cached)
00:01:53.139 Fetching value of define "__AVX512BW__" : 1 (cached)
00:01:53.139 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:01:53.139 Fetching value of define "__AVX512VL__" : 1 (cached)
00:01:53.139 Fetching value of define "__VPCLMULQDQ__" : 1 (cached)
00:01:53.139 Compiler for C supports arguments -mpclmul: YES
00:01:53.139 Compiler for C supports arguments -maes: YES
00:01:53.139 Compiler for C supports arguments -mavx512f: YES (cached)
00:01:53.139 Compiler for C supports arguments -mavx512bw: YES
00:01:53.139 Compiler for C supports arguments -mavx512dq: YES
00:01:53.139 Compiler for C supports arguments -mavx512vl: YES
00:01:53.139 Compiler for C supports arguments -mvpclmulqdq: YES
00:01:53.139 Compiler for C supports arguments -mavx2: YES
00:01:53.139 Compiler for C supports arguments -mavx: YES
00:01:53.139 Message: lib/net: Defining dependency "net"
00:01:53.139 Message: lib/meter: Defining dependency "meter"
00:01:53.139 Message: lib/ethdev: Defining dependency "ethdev"
00:01:53.139 Message: lib/pci: Defining dependency "pci"
00:01:53.139 Message: lib/cmdline: Defining dependency "cmdline"
00:01:53.139 Message: lib/hash: Defining dependency "hash"
00:01:53.139 Message: lib/timer: Defining dependency "timer"
00:01:53.139 Message: lib/compressdev: Defining dependency "compressdev"
00:01:53.139 Message: lib/cryptodev: Defining dependency "cryptodev"
00:01:53.139 Message: lib/dmadev: Defining dependency "dmadev"
00:01:53.139 Compiler for C supports arguments -Wno-cast-qual: YES
00:01:53.139 Message: lib/power: Defining dependency "power"
00:01:53.139 Message: lib/reorder: Defining dependency "reorder"
00:01:53.139 Message: lib/security: Defining dependency "security"
00:01:53.139 Has header "linux/userfaultfd.h" : YES
00:01:53.139 Has header "linux/vduse.h" : YES
00:01:53.139 Message: lib/vhost: Defining dependency "vhost"
00:01:53.139 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:01:53.139 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:01:53.139 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:01:53.139 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:01:53.140 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:01:53.140 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:01:53.140 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:01:53.140 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:01:53.140 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:01:53.140 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:01:53.140
Program doxygen found: YES (/usr/bin/doxygen) 00:01:53.140 Configuring doxy-api-html.conf using configuration 00:01:53.140 Configuring doxy-api-man.conf using configuration 00:01:53.140 Program mandb found: YES (/usr/bin/mandb) 00:01:53.140 Program sphinx-build found: NO 00:01:53.140 Configuring rte_build_config.h using configuration 00:01:53.140 Message: 00:01:53.140 ================= 00:01:53.140 Applications Enabled 00:01:53.140 ================= 00:01:53.140 00:01:53.140 apps: 00:01:53.140 00:01:53.140 00:01:53.140 Message: 00:01:53.140 ================= 00:01:53.140 Libraries Enabled 00:01:53.140 ================= 00:01:53.140 00:01:53.140 libs: 00:01:53.140 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:53.140 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:53.140 cryptodev, dmadev, power, reorder, security, vhost, 00:01:53.140 00:01:53.140 Message: 00:01:53.140 =============== 00:01:53.140 Drivers Enabled 00:01:53.140 =============== 00:01:53.140 00:01:53.140 common: 00:01:53.140 00:01:53.140 bus: 00:01:53.140 pci, vdev, 00:01:53.140 mempool: 00:01:53.140 ring, 00:01:53.140 dma: 00:01:53.140 00:01:53.140 net: 00:01:53.140 00:01:53.140 crypto: 00:01:53.140 00:01:53.140 compress: 00:01:53.140 00:01:53.140 vdpa: 00:01:53.140 00:01:53.140 00:01:53.140 Message: 00:01:53.140 ================= 00:01:53.140 Content Skipped 00:01:53.140 ================= 00:01:53.140 00:01:53.140 apps: 00:01:53.140 dumpcap: explicitly disabled via build config 00:01:53.140 graph: explicitly disabled via build config 00:01:53.140 pdump: explicitly disabled via build config 00:01:53.140 proc-info: explicitly disabled via build config 00:01:53.140 test-acl: explicitly disabled via build config 00:01:53.140 test-bbdev: explicitly disabled via build config 00:01:53.140 test-cmdline: explicitly disabled via build config 00:01:53.140 test-compress-perf: explicitly disabled via build config 00:01:53.140 test-crypto-perf: explicitly disabled via build config 
00:01:53.140 test-dma-perf: explicitly disabled via build config 00:01:53.140 test-eventdev: explicitly disabled via build config 00:01:53.140 test-fib: explicitly disabled via build config 00:01:53.140 test-flow-perf: explicitly disabled via build config 00:01:53.140 test-gpudev: explicitly disabled via build config 00:01:53.140 test-mldev: explicitly disabled via build config 00:01:53.140 test-pipeline: explicitly disabled via build config 00:01:53.140 test-pmd: explicitly disabled via build config 00:01:53.140 test-regex: explicitly disabled via build config 00:01:53.140 test-sad: explicitly disabled via build config 00:01:53.140 test-security-perf: explicitly disabled via build config 00:01:53.140 00:01:53.140 libs: 00:01:53.140 argparse: explicitly disabled via build config 00:01:53.140 metrics: explicitly disabled via build config 00:01:53.140 acl: explicitly disabled via build config 00:01:53.140 bbdev: explicitly disabled via build config 00:01:53.140 bitratestats: explicitly disabled via build config 00:01:53.140 bpf: explicitly disabled via build config 00:01:53.140 cfgfile: explicitly disabled via build config 00:01:53.140 distributor: explicitly disabled via build config 00:01:53.140 efd: explicitly disabled via build config 00:01:53.140 eventdev: explicitly disabled via build config 00:01:53.140 dispatcher: explicitly disabled via build config 00:01:53.140 gpudev: explicitly disabled via build config 00:01:53.140 gro: explicitly disabled via build config 00:01:53.140 gso: explicitly disabled via build config 00:01:53.140 ip_frag: explicitly disabled via build config 00:01:53.140 jobstats: explicitly disabled via build config 00:01:53.140 latencystats: explicitly disabled via build config 00:01:53.140 lpm: explicitly disabled via build config 00:01:53.140 member: explicitly disabled via build config 00:01:53.140 pcapng: explicitly disabled via build config 00:01:53.140 rawdev: explicitly disabled via build config 00:01:53.140 regexdev: explicitly 
disabled via build config 00:01:53.140 mldev: explicitly disabled via build config 00:01:53.140 rib: explicitly disabled via build config 00:01:53.140 sched: explicitly disabled via build config 00:01:53.140 stack: explicitly disabled via build config 00:01:53.140 ipsec: explicitly disabled via build config 00:01:53.140 pdcp: explicitly disabled via build config 00:01:53.140 fib: explicitly disabled via build config 00:01:53.140 port: explicitly disabled via build config 00:01:53.140 pdump: explicitly disabled via build config 00:01:53.140 table: explicitly disabled via build config 00:01:53.140 pipeline: explicitly disabled via build config 00:01:53.140 graph: explicitly disabled via build config 00:01:53.140 node: explicitly disabled via build config 00:01:53.140 00:01:53.140 drivers: 00:01:53.140 common/cpt: not in enabled drivers build config 00:01:53.140 common/dpaax: not in enabled drivers build config 00:01:53.140 common/iavf: not in enabled drivers build config 00:01:53.140 common/idpf: not in enabled drivers build config 00:01:53.140 common/ionic: not in enabled drivers build config 00:01:53.140 common/mvep: not in enabled drivers build config 00:01:53.140 common/octeontx: not in enabled drivers build config 00:01:53.140 bus/auxiliary: not in enabled drivers build config 00:01:53.140 bus/cdx: not in enabled drivers build config 00:01:53.140 bus/dpaa: not in enabled drivers build config 00:01:53.140 bus/fslmc: not in enabled drivers build config 00:01:53.140 bus/ifpga: not in enabled drivers build config 00:01:53.140 bus/platform: not in enabled drivers build config 00:01:53.140 bus/uacce: not in enabled drivers build config 00:01:53.140 bus/vmbus: not in enabled drivers build config 00:01:53.140 common/cnxk: not in enabled drivers build config 00:01:53.140 common/mlx5: not in enabled drivers build config 00:01:53.140 common/nfp: not in enabled drivers build config 00:01:53.140 common/nitrox: not in enabled drivers build config 00:01:53.140 common/qat: not 
in enabled drivers build config 00:01:53.140 common/sfc_efx: not in enabled drivers build config 00:01:53.140 mempool/bucket: not in enabled drivers build config 00:01:53.140 mempool/cnxk: not in enabled drivers build config 00:01:53.140 mempool/dpaa: not in enabled drivers build config 00:01:53.140 mempool/dpaa2: not in enabled drivers build config 00:01:53.140 mempool/octeontx: not in enabled drivers build config 00:01:53.140 mempool/stack: not in enabled drivers build config 00:01:53.140 dma/cnxk: not in enabled drivers build config 00:01:53.140 dma/dpaa: not in enabled drivers build config 00:01:53.140 dma/dpaa2: not in enabled drivers build config 00:01:53.140 dma/hisilicon: not in enabled drivers build config 00:01:53.140 dma/idxd: not in enabled drivers build config 00:01:53.140 dma/ioat: not in enabled drivers build config 00:01:53.140 dma/skeleton: not in enabled drivers build config 00:01:53.140 net/af_packet: not in enabled drivers build config 00:01:53.140 net/af_xdp: not in enabled drivers build config 00:01:53.140 net/ark: not in enabled drivers build config 00:01:53.140 net/atlantic: not in enabled drivers build config 00:01:53.140 net/avp: not in enabled drivers build config 00:01:53.140 net/axgbe: not in enabled drivers build config 00:01:53.140 net/bnx2x: not in enabled drivers build config 00:01:53.140 net/bnxt: not in enabled drivers build config 00:01:53.140 net/bonding: not in enabled drivers build config 00:01:53.140 net/cnxk: not in enabled drivers build config 00:01:53.140 net/cpfl: not in enabled drivers build config 00:01:53.140 net/cxgbe: not in enabled drivers build config 00:01:53.140 net/dpaa: not in enabled drivers build config 00:01:53.140 net/dpaa2: not in enabled drivers build config 00:01:53.140 net/e1000: not in enabled drivers build config 00:01:53.140 net/ena: not in enabled drivers build config 00:01:53.140 net/enetc: not in enabled drivers build config 00:01:53.140 net/enetfec: not in enabled drivers build config 
00:01:53.140 net/enic: not in enabled drivers build config 00:01:53.140 net/failsafe: not in enabled drivers build config 00:01:53.140 net/fm10k: not in enabled drivers build config 00:01:53.140 net/gve: not in enabled drivers build config 00:01:53.140 net/hinic: not in enabled drivers build config 00:01:53.140 net/hns3: not in enabled drivers build config 00:01:53.140 net/i40e: not in enabled drivers build config 00:01:53.140 net/iavf: not in enabled drivers build config 00:01:53.140 net/ice: not in enabled drivers build config 00:01:53.140 net/idpf: not in enabled drivers build config 00:01:53.140 net/igc: not in enabled drivers build config 00:01:53.140 net/ionic: not in enabled drivers build config 00:01:53.140 net/ipn3ke: not in enabled drivers build config 00:01:53.140 net/ixgbe: not in enabled drivers build config 00:01:53.140 net/mana: not in enabled drivers build config 00:01:53.140 net/memif: not in enabled drivers build config 00:01:53.140 net/mlx4: not in enabled drivers build config 00:01:53.140 net/mlx5: not in enabled drivers build config 00:01:53.140 net/mvneta: not in enabled drivers build config 00:01:53.140 net/mvpp2: not in enabled drivers build config 00:01:53.140 net/netvsc: not in enabled drivers build config 00:01:53.140 net/nfb: not in enabled drivers build config 00:01:53.140 net/nfp: not in enabled drivers build config 00:01:53.140 net/ngbe: not in enabled drivers build config 00:01:53.140 net/null: not in enabled drivers build config 00:01:53.140 net/octeontx: not in enabled drivers build config 00:01:53.140 net/octeon_ep: not in enabled drivers build config 00:01:53.140 net/pcap: not in enabled drivers build config 00:01:53.140 net/pfe: not in enabled drivers build config 00:01:53.140 net/qede: not in enabled drivers build config 00:01:53.140 net/ring: not in enabled drivers build config 00:01:53.140 net/sfc: not in enabled drivers build config 00:01:53.140 net/softnic: not in enabled drivers build config 00:01:53.141 net/tap: not in 
enabled drivers build config 00:01:53.141 net/thunderx: not in enabled drivers build config 00:01:53.141 net/txgbe: not in enabled drivers build config 00:01:53.141 net/vdev_netvsc: not in enabled drivers build config 00:01:53.141 net/vhost: not in enabled drivers build config 00:01:53.141 net/virtio: not in enabled drivers build config 00:01:53.141 net/vmxnet3: not in enabled drivers build config 00:01:53.141 raw/*: missing internal dependency, "rawdev" 00:01:53.141 crypto/armv8: not in enabled drivers build config 00:01:53.141 crypto/bcmfs: not in enabled drivers build config 00:01:53.141 crypto/caam_jr: not in enabled drivers build config 00:01:53.141 crypto/ccp: not in enabled drivers build config 00:01:53.141 crypto/cnxk: not in enabled drivers build config 00:01:53.141 crypto/dpaa_sec: not in enabled drivers build config 00:01:53.141 crypto/dpaa2_sec: not in enabled drivers build config 00:01:53.141 crypto/ipsec_mb: not in enabled drivers build config 00:01:53.141 crypto/mlx5: not in enabled drivers build config 00:01:53.141 crypto/mvsam: not in enabled drivers build config 00:01:53.141 crypto/nitrox: not in enabled drivers build config 00:01:53.141 crypto/null: not in enabled drivers build config 00:01:53.141 crypto/octeontx: not in enabled drivers build config 00:01:53.141 crypto/openssl: not in enabled drivers build config 00:01:53.141 crypto/scheduler: not in enabled drivers build config 00:01:53.141 crypto/uadk: not in enabled drivers build config 00:01:53.141 crypto/virtio: not in enabled drivers build config 00:01:53.141 compress/isal: not in enabled drivers build config 00:01:53.141 compress/mlx5: not in enabled drivers build config 00:01:53.141 compress/nitrox: not in enabled drivers build config 00:01:53.141 compress/octeontx: not in enabled drivers build config 00:01:53.141 compress/zlib: not in enabled drivers build config 00:01:53.141 regex/*: missing internal dependency, "regexdev" 00:01:53.141 ml/*: missing internal dependency, "mldev" 
00:01:53.141 vdpa/ifc: not in enabled drivers build config 00:01:53.141 vdpa/mlx5: not in enabled drivers build config 00:01:53.141 vdpa/nfp: not in enabled drivers build config 00:01:53.141 vdpa/sfc: not in enabled drivers build config 00:01:53.141 event/*: missing internal dependency, "eventdev" 00:01:53.141 baseband/*: missing internal dependency, "bbdev" 00:01:53.141 gpu/*: missing internal dependency, "gpudev" 00:01:53.141 00:01:53.141 00:01:53.141 Build targets in project: 84 00:01:53.141 00:01:53.141 DPDK 24.03.0 00:01:53.141 00:01:53.141 User defined options 00:01:53.141 buildtype : debug 00:01:53.141 default_library : shared 00:01:53.141 libdir : lib 00:01:53.141 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:53.141 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:53.141 c_link_args : 00:01:53.141 cpu_instruction_set: native 00:01:53.141 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:01:53.141 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:01:53.141 enable_docs : false 00:01:53.141 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:53.141 enable_kmods : false 00:01:53.141 max_lcores : 128 00:01:53.141 tests : false 00:01:53.141 00:01:53.141 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:53.141 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:01:53.408 [1/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:53.408 [2/267] Compiling C object 
lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:53.408 [3/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:53.408 [4/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:53.408 [5/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:53.408 [6/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:53.408 [7/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:53.408 [8/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:53.408 [9/267] Linking static target lib/librte_kvargs.a 00:01:53.408 [10/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:53.408 [11/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:53.408 [12/267] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:53.408 [13/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:53.408 [14/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:53.408 [15/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:53.408 [16/267] Linking static target lib/librte_log.a 00:01:53.408 [17/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:53.408 [18/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:53.408 [19/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:53.408 [20/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:53.408 [21/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:53.408 [22/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:53.408 [23/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:53.672 [24/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:53.672 [25/267] Compiling C object 
lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:53.672 [26/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:53.672 [27/267] Linking static target lib/librte_pci.a 00:01:53.672 [28/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:53.672 [29/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:53.672 [30/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:53.672 [31/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:53.672 [32/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:53.672 [33/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:53.672 [34/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:53.672 [35/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:53.672 [36/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:53.672 [37/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:53.672 [38/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:53.672 [39/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:53.931 [40/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:53.931 [41/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:53.931 [42/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.931 [43/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:53.931 [44/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:53.931 [45/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.931 [46/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:53.931 [47/267] Compiling C object 
lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:53.931 [48/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:53.931 [49/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:53.931 [50/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:53.931 [51/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:53.931 [52/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:53.931 [53/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:53.931 [54/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:53.931 [55/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:53.931 [56/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:53.931 [57/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:53.931 [58/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:53.931 [59/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:53.931 [60/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:53.931 [61/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:53.931 [62/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:53.931 [63/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:53.931 [64/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:53.931 [65/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:53.931 [66/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:53.931 [67/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:53.931 [68/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:53.932 [69/267] Compiling 
C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:53.932 [70/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:53.932 [71/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:53.932 [72/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:53.932 [73/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:53.932 [74/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:53.932 [75/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:53.932 [76/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:53.932 [77/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:53.932 [78/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:53.932 [79/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:53.932 [80/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:53.932 [81/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:53.932 [82/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:53.932 [83/267] Linking static target lib/librte_telemetry.a 00:01:53.932 [84/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:53.932 [85/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:53.932 [86/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:53.932 [87/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:53.932 [88/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:53.932 [89/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:53.932 [90/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:53.932 [91/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 
00:01:53.932 [92/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:53.932 [93/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:53.932 [94/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:53.932 [95/267] Linking static target lib/librte_meter.a 00:01:53.932 [96/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:53.932 [97/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:53.932 [98/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:53.932 [99/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:53.932 [100/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:53.932 [101/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:53.932 [102/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:53.932 [103/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:53.932 [104/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:53.932 [105/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:53.932 [106/267] Linking static target lib/librte_ring.a 00:01:53.932 [107/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:53.932 [108/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:53.932 [109/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:53.932 [110/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:53.932 [111/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:53.932 [112/267] Linking static target lib/librte_cmdline.a 00:01:53.932 [113/267] Linking static target lib/librte_timer.a 00:01:53.932 [114/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 
00:01:53.932 [115/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:53.932 [116/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:53.932 [117/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:53.932 [118/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:53.932 [119/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:53.932 [120/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:53.932 [121/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:53.932 [122/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:53.932 [123/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:53.932 [124/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:53.932 [125/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:53.932 [126/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:53.932 [127/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:53.932 [128/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:53.932 [129/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:53.932 [130/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:53.932 [131/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:53.932 [132/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:53.932 [133/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:53.932 [134/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:53.932 [135/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:53.932 [136/267] Linking static target lib/librte_dmadev.a 00:01:53.932 [137/267] Linking static target lib/librte_compressdev.a 00:01:53.932 
[138/267] Linking static target lib/librte_mempool.a 00:01:53.932 [139/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:53.932 [140/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:53.932 [141/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:53.932 [142/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:54.192 [143/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:54.192 [144/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:54.192 [145/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:54.192 [146/267] Linking static target lib/librte_rcu.a 00:01:54.192 [147/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:54.192 [148/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.192 [149/267] Linking static target lib/librte_power.a 00:01:54.192 [150/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:54.192 [151/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:54.192 [152/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:54.192 [153/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:54.192 [154/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:54.192 [155/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:54.192 [156/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:54.192 [157/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:54.192 [158/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:54.192 [159/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:54.192 [160/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:54.192 [161/267] Compiling C object 
lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:54.192 [162/267] Linking target lib/librte_log.so.24.1 00:01:54.192 [163/267] Linking static target lib/librte_reorder.a 00:01:54.192 [164/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:54.192 [165/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:54.192 [166/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:54.192 [167/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:54.192 [168/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:54.192 [169/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:54.192 [170/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:54.192 [171/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:54.192 [172/267] Linking static target lib/librte_eal.a 00:01:54.192 [173/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:54.192 [174/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:54.192 [175/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:54.192 [176/267] Linking static target lib/librte_net.a 00:01:54.192 [177/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:54.193 [178/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:54.193 [179/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:54.193 [180/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:54.193 [181/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:54.193 [182/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:54.193 [183/267] Linking static target drivers/librte_bus_vdev.a 00:01:54.193 [184/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:54.193 
[185/267] Linking static target lib/librte_security.a 00:01:54.193 [186/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.193 [187/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:54.193 [188/267] Linking target lib/librte_kvargs.so.24.1 00:01:54.193 [189/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:54.453 [190/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:54.453 [191/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:54.453 [192/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:54.453 [193/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.453 [194/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:54.453 [195/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:54.453 [196/267] Linking static target lib/librte_mbuf.a 00:01:54.453 [197/267] Linking static target drivers/librte_bus_pci.a 00:01:54.453 [198/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:54.453 [199/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:54.453 [200/267] Linking static target lib/librte_hash.a 00:01:54.453 [201/267] Linking static target drivers/librte_mempool_ring.a 00:01:54.453 [202/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:54.453 [203/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.453 [204/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:54.453 [205/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.453 [206/267] Generating lib/timer.sym_chk with a custom 
command (wrapped by meson to capture output) 00:01:54.453 [207/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:54.453 [208/267] Linking target lib/librte_telemetry.so.24.1 00:01:54.453 [209/267] Linking static target lib/librte_cryptodev.a 00:01:54.453 [210/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:54.453 [211/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.453 [212/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.714 [213/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.714 [214/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:54.714 [215/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.714 [216/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.975 [217/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:54.975 [218/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.975 [219/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:54.975 [220/267] Linking static target lib/librte_ethdev.a 00:01:54.975 [221/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.975 [222/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.236 [223/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.236 [224/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.236 [225/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.236 [226/267] Generating 
lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.181 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:56.181 [228/267] Linking static target lib/librte_vhost.a 00:01:56.753 [229/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.669 [230/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.333 [231/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.906 [232/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.167 [233/267] Linking target lib/librte_eal.so.24.1 00:02:06.167 [234/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:06.167 [235/267] Linking target lib/librte_pci.so.24.1 00:02:06.167 [236/267] Linking target lib/librte_meter.so.24.1 00:02:06.167 [237/267] Linking target lib/librte_ring.so.24.1 00:02:06.167 [238/267] Linking target lib/librte_timer.so.24.1 00:02:06.167 [239/267] Linking target lib/librte_dmadev.so.24.1 00:02:06.167 [240/267] Linking target drivers/librte_bus_vdev.so.24.1 00:02:06.428 [241/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:06.428 [242/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:06.428 [243/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:06.428 [244/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:06.428 [245/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:06.428 [246/267] Linking target drivers/librte_bus_pci.so.24.1 00:02:06.428 [247/267] Linking target lib/librte_rcu.so.24.1 00:02:06.428 [248/267] Linking target lib/librte_mempool.so.24.1 00:02:06.689 [249/267] Generating symbol file 
lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:06.689 [250/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:06.689 [251/267] Linking target lib/librte_mbuf.so.24.1 00:02:06.689 [252/267] Linking target drivers/librte_mempool_ring.so.24.1 00:02:06.689 [253/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:06.951 [254/267] Linking target lib/librte_compressdev.so.24.1 00:02:06.951 [255/267] Linking target lib/librte_reorder.so.24.1 00:02:06.951 [256/267] Linking target lib/librte_net.so.24.1 00:02:06.951 [257/267] Linking target lib/librte_cryptodev.so.24.1 00:02:06.951 [258/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:06.951 [259/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:06.951 [260/267] Linking target lib/librte_hash.so.24.1 00:02:06.951 [261/267] Linking target lib/librte_cmdline.so.24.1 00:02:06.951 [262/267] Linking target lib/librte_security.so.24.1 00:02:06.951 [263/267] Linking target lib/librte_ethdev.so.24.1 00:02:07.212 [264/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:07.212 [265/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:07.212 [266/267] Linking target lib/librte_power.so.24.1 00:02:07.212 [267/267] Linking target lib/librte_vhost.so.24.1 00:02:07.212 INFO: autodetecting backend as ninja 00:02:07.212 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 144 00:02:08.598 CC lib/log/log.o 00:02:08.598 CC lib/log/log_flags.o 00:02:08.598 CC lib/log/log_deprecated.o 00:02:08.598 CC lib/ut/ut.o 00:02:08.598 CC lib/ut_mock/mock.o 00:02:08.598 LIB libspdk_log.a 00:02:08.598 LIB libspdk_ut.a 00:02:08.598 LIB libspdk_ut_mock.a 00:02:08.598 SO libspdk_log.so.7.0 00:02:08.598 SO libspdk_ut.so.2.0 00:02:08.598 
SO libspdk_ut_mock.so.6.0 00:02:08.598 SYMLINK libspdk_log.so 00:02:08.598 SYMLINK libspdk_ut.so 00:02:08.598 SYMLINK libspdk_ut_mock.so 00:02:08.859 CC lib/util/base64.o 00:02:08.859 CC lib/util/bit_array.o 00:02:08.859 CC lib/util/cpuset.o 00:02:08.859 CC lib/util/crc16.o 00:02:08.859 CC lib/util/crc32.o 00:02:08.859 CC lib/ioat/ioat.o 00:02:08.859 CC lib/util/crc32c.o 00:02:08.859 CC lib/util/crc32_ieee.o 00:02:08.859 CC lib/util/crc64.o 00:02:08.859 CC lib/util/dif.o 00:02:08.859 CC lib/util/fd.o 00:02:08.859 CC lib/util/hexlify.o 00:02:08.859 CC lib/util/fd_group.o 00:02:08.859 CC lib/util/file.o 00:02:08.859 CC lib/util/iov.o 00:02:08.859 CC lib/util/math.o 00:02:08.859 CC lib/util/net.o 00:02:08.859 CC lib/util/pipe.o 00:02:08.859 CC lib/util/strerror_tls.o 00:02:08.859 CC lib/util/string.o 00:02:09.120 CC lib/util/uuid.o 00:02:09.120 CC lib/util/xor.o 00:02:09.120 CC lib/util/zipf.o 00:02:09.120 CXX lib/trace_parser/trace.o 00:02:09.120 CC lib/dma/dma.o 00:02:09.120 CC lib/vfio_user/host/vfio_user_pci.o 00:02:09.120 CC lib/vfio_user/host/vfio_user.o 00:02:09.120 LIB libspdk_dma.a 00:02:09.382 SO libspdk_dma.so.4.0 00:02:09.382 LIB libspdk_ioat.a 00:02:09.382 SO libspdk_ioat.so.7.0 00:02:09.382 SYMLINK libspdk_dma.so 00:02:09.382 SYMLINK libspdk_ioat.so 00:02:09.382 LIB libspdk_vfio_user.a 00:02:09.382 SO libspdk_vfio_user.so.5.0 00:02:09.382 LIB libspdk_util.a 00:02:09.643 SYMLINK libspdk_vfio_user.so 00:02:09.643 SO libspdk_util.so.10.0 00:02:09.643 SYMLINK libspdk_util.so 00:02:09.933 LIB libspdk_trace_parser.a 00:02:09.933 SO libspdk_trace_parser.so.5.0 00:02:09.933 SYMLINK libspdk_trace_parser.so 00:02:09.933 CC lib/env_dpdk/env.o 00:02:09.933 CC lib/env_dpdk/pci.o 00:02:09.933 CC lib/env_dpdk/memory.o 00:02:09.933 CC lib/env_dpdk/init.o 00:02:09.933 CC lib/env_dpdk/threads.o 00:02:09.933 CC lib/env_dpdk/pci_ioat.o 00:02:09.933 CC lib/env_dpdk/pci_virtio.o 00:02:09.933 CC lib/env_dpdk/pci_vmd.o 00:02:09.933 CC lib/env_dpdk/pci_idxd.o 00:02:09.933 CC 
lib/env_dpdk/sigbus_handler.o 00:02:09.933 CC lib/env_dpdk/pci_event.o 00:02:09.933 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:09.933 CC lib/env_dpdk/pci_dpdk.o 00:02:09.933 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:09.933 CC lib/rdma_utils/rdma_utils.o 00:02:09.933 CC lib/conf/conf.o 00:02:09.933 CC lib/vmd/vmd.o 00:02:09.933 CC lib/json/json_parse.o 00:02:10.194 CC lib/vmd/led.o 00:02:10.194 CC lib/json/json_util.o 00:02:10.194 CC lib/rdma_provider/common.o 00:02:10.194 CC lib/json/json_write.o 00:02:10.194 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:10.194 CC lib/idxd/idxd.o 00:02:10.194 CC lib/idxd/idxd_user.o 00:02:10.194 CC lib/idxd/idxd_kernel.o 00:02:10.194 LIB libspdk_rdma_provider.a 00:02:10.455 LIB libspdk_conf.a 00:02:10.455 SO libspdk_rdma_provider.so.6.0 00:02:10.455 LIB libspdk_rdma_utils.a 00:02:10.455 SO libspdk_conf.so.6.0 00:02:10.455 LIB libspdk_json.a 00:02:10.455 SO libspdk_rdma_utils.so.1.0 00:02:10.455 SYMLINK libspdk_rdma_provider.so 00:02:10.455 SO libspdk_json.so.6.0 00:02:10.455 SYMLINK libspdk_conf.so 00:02:10.455 SYMLINK libspdk_rdma_utils.so 00:02:10.455 SYMLINK libspdk_json.so 00:02:10.715 LIB libspdk_idxd.a 00:02:10.716 SO libspdk_idxd.so.12.0 00:02:10.716 LIB libspdk_vmd.a 00:02:10.716 SO libspdk_vmd.so.6.0 00:02:10.716 SYMLINK libspdk_idxd.so 00:02:10.716 SYMLINK libspdk_vmd.so 00:02:10.976 CC lib/jsonrpc/jsonrpc_server.o 00:02:10.976 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:10.976 CC lib/jsonrpc/jsonrpc_client.o 00:02:10.976 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:11.237 LIB libspdk_jsonrpc.a 00:02:11.237 SO libspdk_jsonrpc.so.6.0 00:02:11.237 SYMLINK libspdk_jsonrpc.so 00:02:11.237 LIB libspdk_env_dpdk.a 00:02:11.237 SO libspdk_env_dpdk.so.15.0 00:02:11.497 SYMLINK libspdk_env_dpdk.so 00:02:11.497 CC lib/rpc/rpc.o 00:02:11.758 LIB libspdk_rpc.a 00:02:11.758 SO libspdk_rpc.so.6.0 00:02:12.020 SYMLINK libspdk_rpc.so 00:02:12.281 CC lib/notify/notify.o 00:02:12.281 CC lib/notify/notify_rpc.o 00:02:12.281 CC lib/keyring/keyring.o 
00:02:12.281 CC lib/keyring/keyring_rpc.o 00:02:12.281 CC lib/trace/trace.o 00:02:12.281 CC lib/trace/trace_rpc.o 00:02:12.281 CC lib/trace/trace_flags.o 00:02:12.281 LIB libspdk_notify.a 00:02:12.541 SO libspdk_notify.so.6.0 00:02:12.541 LIB libspdk_keyring.a 00:02:12.541 LIB libspdk_trace.a 00:02:12.541 SYMLINK libspdk_notify.so 00:02:12.541 SO libspdk_keyring.so.1.0 00:02:12.541 SO libspdk_trace.so.10.0 00:02:12.541 SYMLINK libspdk_keyring.so 00:02:12.541 SYMLINK libspdk_trace.so 00:02:13.111 CC lib/sock/sock.o 00:02:13.111 CC lib/sock/sock_rpc.o 00:02:13.111 CC lib/thread/thread.o 00:02:13.111 CC lib/thread/iobuf.o 00:02:13.371 LIB libspdk_sock.a 00:02:13.371 SO libspdk_sock.so.10.0 00:02:13.371 SYMLINK libspdk_sock.so 00:02:13.943 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:13.943 CC lib/nvme/nvme_ctrlr.o 00:02:13.943 CC lib/nvme/nvme_fabric.o 00:02:13.943 CC lib/nvme/nvme_ns_cmd.o 00:02:13.943 CC lib/nvme/nvme_ns.o 00:02:13.943 CC lib/nvme/nvme_pcie_common.o 00:02:13.943 CC lib/nvme/nvme_pcie.o 00:02:13.943 CC lib/nvme/nvme_qpair.o 00:02:13.943 CC lib/nvme/nvme.o 00:02:13.943 CC lib/nvme/nvme_quirks.o 00:02:13.943 CC lib/nvme/nvme_transport.o 00:02:13.943 CC lib/nvme/nvme_discovery.o 00:02:13.943 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:13.943 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:13.943 CC lib/nvme/nvme_tcp.o 00:02:13.943 CC lib/nvme/nvme_opal.o 00:02:13.943 CC lib/nvme/nvme_io_msg.o 00:02:13.943 CC lib/nvme/nvme_poll_group.o 00:02:13.943 CC lib/nvme/nvme_zns.o 00:02:13.943 CC lib/nvme/nvme_stubs.o 00:02:13.943 CC lib/nvme/nvme_vfio_user.o 00:02:13.943 CC lib/nvme/nvme_auth.o 00:02:13.943 CC lib/nvme/nvme_cuse.o 00:02:13.943 CC lib/nvme/nvme_rdma.o 00:02:14.204 LIB libspdk_thread.a 00:02:14.204 SO libspdk_thread.so.10.1 00:02:14.465 SYMLINK libspdk_thread.so 00:02:14.726 CC lib/vfu_tgt/tgt_endpoint.o 00:02:14.726 CC lib/vfu_tgt/tgt_rpc.o 00:02:14.726 CC lib/blob/blobstore.o 00:02:14.726 CC lib/blob/request.o 00:02:14.726 CC lib/blob/zeroes.o 00:02:14.726 CC 
lib/blob/blob_bs_dev.o 00:02:14.726 CC lib/init/json_config.o 00:02:14.726 CC lib/init/subsystem_rpc.o 00:02:14.726 CC lib/init/subsystem.o 00:02:14.726 CC lib/accel/accel.o 00:02:14.726 CC lib/init/rpc.o 00:02:14.726 CC lib/virtio/virtio.o 00:02:14.726 CC lib/accel/accel_rpc.o 00:02:14.726 CC lib/virtio/virtio_vhost_user.o 00:02:14.726 CC lib/virtio/virtio_vfio_user.o 00:02:14.726 CC lib/accel/accel_sw.o 00:02:14.726 CC lib/virtio/virtio_pci.o 00:02:14.988 LIB libspdk_init.a 00:02:14.988 LIB libspdk_vfu_tgt.a 00:02:14.988 SO libspdk_init.so.5.0 00:02:14.988 SO libspdk_vfu_tgt.so.3.0 00:02:14.988 SYMLINK libspdk_init.so 00:02:14.988 LIB libspdk_virtio.a 00:02:14.988 SO libspdk_virtio.so.7.0 00:02:14.988 SYMLINK libspdk_vfu_tgt.so 00:02:15.249 SYMLINK libspdk_virtio.so 00:02:15.249 CC lib/event/app.o 00:02:15.249 CC lib/event/reactor.o 00:02:15.249 CC lib/event/log_rpc.o 00:02:15.249 CC lib/event/app_rpc.o 00:02:15.249 CC lib/event/scheduler_static.o 00:02:15.509 LIB libspdk_accel.a 00:02:15.510 SO libspdk_accel.so.16.0 00:02:15.510 LIB libspdk_nvme.a 00:02:15.770 SYMLINK libspdk_accel.so 00:02:15.770 SO libspdk_nvme.so.13.1 00:02:15.770 LIB libspdk_event.a 00:02:15.770 SO libspdk_event.so.14.0 00:02:15.770 SYMLINK libspdk_event.so 00:02:16.031 CC lib/bdev/bdev.o 00:02:16.031 CC lib/bdev/bdev_rpc.o 00:02:16.031 CC lib/bdev/bdev_zone.o 00:02:16.031 CC lib/bdev/part.o 00:02:16.031 CC lib/bdev/scsi_nvme.o 00:02:16.031 SYMLINK libspdk_nvme.so 00:02:17.418 LIB libspdk_blob.a 00:02:17.418 SO libspdk_blob.so.11.0 00:02:17.418 SYMLINK libspdk_blob.so 00:02:17.678 CC lib/lvol/lvol.o 00:02:17.678 CC lib/blobfs/blobfs.o 00:02:17.678 CC lib/blobfs/tree.o 00:02:18.251 LIB libspdk_bdev.a 00:02:18.251 SO libspdk_bdev.so.16.0 00:02:18.251 SYMLINK libspdk_bdev.so 00:02:18.513 LIB libspdk_blobfs.a 00:02:18.513 SO libspdk_blobfs.so.10.0 00:02:18.513 LIB libspdk_lvol.a 00:02:18.513 SO libspdk_lvol.so.10.0 00:02:18.513 SYMLINK libspdk_blobfs.so 00:02:18.773 SYMLINK libspdk_lvol.so 
00:02:18.773 CC lib/nvmf/ctrlr_discovery.o 00:02:18.773 CC lib/nvmf/ctrlr.o 00:02:18.773 CC lib/ftl/ftl_core.o 00:02:18.773 CC lib/ftl/ftl_init.o 00:02:18.773 CC lib/nvmf/ctrlr_bdev.o 00:02:18.773 CC lib/nbd/nbd.o 00:02:18.773 CC lib/ftl/ftl_layout.o 00:02:18.773 CC lib/nvmf/subsystem.o 00:02:18.773 CC lib/ftl/ftl_sb.o 00:02:18.773 CC lib/ftl/ftl_debug.o 00:02:18.773 CC lib/nvmf/nvmf.o 00:02:18.773 CC lib/nbd/nbd_rpc.o 00:02:18.773 CC lib/nvmf/nvmf_rpc.o 00:02:18.773 CC lib/ftl/ftl_io.o 00:02:18.773 CC lib/nvmf/tcp.o 00:02:18.773 CC lib/ftl/ftl_l2p.o 00:02:18.773 CC lib/nvmf/transport.o 00:02:18.773 CC lib/ftl/ftl_l2p_flat.o 00:02:18.773 CC lib/nvmf/vfio_user.o 00:02:18.773 CC lib/nvmf/stubs.o 00:02:18.773 CC lib/ftl/ftl_nv_cache.o 00:02:18.773 CC lib/scsi/dev.o 00:02:18.773 CC lib/nvmf/mdns_server.o 00:02:18.773 CC lib/ftl/ftl_band.o 00:02:18.773 CC lib/scsi/lun.o 00:02:18.773 CC lib/scsi/port.o 00:02:18.773 CC lib/ftl/ftl_band_ops.o 00:02:18.773 CC lib/nvmf/rdma.o 00:02:18.773 CC lib/ublk/ublk.o 00:02:18.773 CC lib/scsi/scsi.o 00:02:18.773 CC lib/ftl/ftl_writer.o 00:02:18.773 CC lib/ublk/ublk_rpc.o 00:02:18.773 CC lib/nvmf/auth.o 00:02:18.773 CC lib/scsi/scsi_bdev.o 00:02:18.773 CC lib/ftl/ftl_rq.o 00:02:18.773 CC lib/scsi/scsi_pr.o 00:02:18.773 CC lib/ftl/ftl_reloc.o 00:02:18.773 CC lib/scsi/scsi_rpc.o 00:02:18.773 CC lib/ftl/ftl_l2p_cache.o 00:02:18.773 CC lib/scsi/task.o 00:02:18.773 CC lib/ftl/ftl_p2l.o 00:02:18.773 CC lib/ftl/mngt/ftl_mngt.o 00:02:18.773 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:18.773 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:18.773 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:18.773 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:18.773 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:18.773 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:18.773 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:18.773 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:18.773 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:18.773 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:18.773 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:18.773 CC 
lib/ftl/mngt/ftl_mngt_recovery.o 00:02:18.773 CC lib/ftl/utils/ftl_md.o 00:02:18.773 CC lib/ftl/utils/ftl_conf.o 00:02:18.773 CC lib/ftl/utils/ftl_mempool.o 00:02:18.773 CC lib/ftl/utils/ftl_bitmap.o 00:02:18.773 CC lib/ftl/utils/ftl_property.o 00:02:18.773 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:18.773 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:18.773 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:18.773 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:18.773 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:18.773 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:18.773 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:18.773 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:18.773 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:18.773 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:18.773 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:18.773 CC lib/ftl/base/ftl_base_bdev.o 00:02:18.773 CC lib/ftl/base/ftl_base_dev.o 00:02:18.773 CC lib/ftl/ftl_trace.o 00:02:19.344 LIB libspdk_nbd.a 00:02:19.344 SO libspdk_nbd.so.7.0 00:02:19.344 LIB libspdk_scsi.a 00:02:19.344 SYMLINK libspdk_nbd.so 00:02:19.344 SO libspdk_scsi.so.9.0 00:02:19.344 LIB libspdk_ublk.a 00:02:19.344 SYMLINK libspdk_scsi.so 00:02:19.605 SO libspdk_ublk.so.3.0 00:02:19.605 SYMLINK libspdk_ublk.so 00:02:19.605 LIB libspdk_ftl.a 00:02:19.867 CC lib/iscsi/conn.o 00:02:19.867 CC lib/iscsi/init_grp.o 00:02:19.867 CC lib/iscsi/iscsi.o 00:02:19.867 CC lib/iscsi/md5.o 00:02:19.867 CC lib/iscsi/param.o 00:02:19.867 CC lib/iscsi/portal_grp.o 00:02:19.867 CC lib/iscsi/tgt_node.o 00:02:19.867 CC lib/iscsi/iscsi_subsystem.o 00:02:19.867 CC lib/iscsi/iscsi_rpc.o 00:02:19.867 CC lib/iscsi/task.o 00:02:19.867 CC lib/vhost/vhost_rpc.o 00:02:19.867 CC lib/vhost/vhost.o 00:02:19.867 CC lib/vhost/vhost_blk.o 00:02:19.867 CC lib/vhost/vhost_scsi.o 00:02:19.867 CC lib/vhost/rte_vhost_user.o 00:02:19.867 SO libspdk_ftl.so.9.0 00:02:20.439 SYMLINK libspdk_ftl.so 00:02:20.701 LIB libspdk_nvmf.a 00:02:20.701 SO libspdk_nvmf.so.19.0 00:02:20.701 LIB libspdk_vhost.a 00:02:20.962 SO 
libspdk_vhost.so.8.0 00:02:20.962 SYMLINK libspdk_nvmf.so 00:02:20.962 SYMLINK libspdk_vhost.so 00:02:20.962 LIB libspdk_iscsi.a 00:02:20.962 SO libspdk_iscsi.so.8.0 00:02:21.223 SYMLINK libspdk_iscsi.so 00:02:21.831 CC module/env_dpdk/env_dpdk_rpc.o 00:02:21.831 CC module/vfu_device/vfu_virtio.o 00:02:21.831 CC module/vfu_device/vfu_virtio_blk.o 00:02:21.831 CC module/vfu_device/vfu_virtio_scsi.o 00:02:21.831 CC module/vfu_device/vfu_virtio_rpc.o 00:02:21.831 LIB libspdk_env_dpdk_rpc.a 00:02:21.831 CC module/accel/error/accel_error.o 00:02:22.092 CC module/accel/error/accel_error_rpc.o 00:02:22.092 CC module/accel/dsa/accel_dsa.o 00:02:22.092 CC module/accel/dsa/accel_dsa_rpc.o 00:02:22.093 CC module/accel/ioat/accel_ioat.o 00:02:22.093 CC module/accel/ioat/accel_ioat_rpc.o 00:02:22.093 CC module/keyring/linux/keyring_rpc.o 00:02:22.093 CC module/keyring/linux/keyring.o 00:02:22.093 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:22.093 CC module/blob/bdev/blob_bdev.o 00:02:22.093 CC module/accel/iaa/accel_iaa.o 00:02:22.093 CC module/keyring/file/keyring.o 00:02:22.093 CC module/keyring/file/keyring_rpc.o 00:02:22.093 CC module/accel/iaa/accel_iaa_rpc.o 00:02:22.093 CC module/scheduler/gscheduler/gscheduler.o 00:02:22.093 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:22.093 CC module/sock/posix/posix.o 00:02:22.093 SO libspdk_env_dpdk_rpc.so.6.0 00:02:22.093 SYMLINK libspdk_env_dpdk_rpc.so 00:02:22.093 LIB libspdk_keyring_linux.a 00:02:22.093 LIB libspdk_scheduler_gscheduler.a 00:02:22.093 LIB libspdk_keyring_file.a 00:02:22.093 LIB libspdk_scheduler_dpdk_governor.a 00:02:22.093 LIB libspdk_accel_error.a 00:02:22.093 LIB libspdk_accel_ioat.a 00:02:22.093 SO libspdk_keyring_linux.so.1.0 00:02:22.093 SO libspdk_keyring_file.so.1.0 00:02:22.093 LIB libspdk_accel_iaa.a 00:02:22.093 SO libspdk_scheduler_gscheduler.so.4.0 00:02:22.093 LIB libspdk_scheduler_dynamic.a 00:02:22.093 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:22.093 SO 
libspdk_accel_error.so.2.0 00:02:22.093 SO libspdk_accel_ioat.so.6.0 00:02:22.354 LIB libspdk_accel_dsa.a 00:02:22.354 SO libspdk_accel_iaa.so.3.0 00:02:22.354 SO libspdk_scheduler_dynamic.so.4.0 00:02:22.354 LIB libspdk_blob_bdev.a 00:02:22.354 SYMLINK libspdk_keyring_linux.so 00:02:22.354 SYMLINK libspdk_keyring_file.so 00:02:22.354 SYMLINK libspdk_scheduler_gscheduler.so 00:02:22.354 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:22.354 SO libspdk_accel_dsa.so.5.0 00:02:22.354 SYMLINK libspdk_accel_error.so 00:02:22.354 SO libspdk_blob_bdev.so.11.0 00:02:22.354 SYMLINK libspdk_accel_ioat.so 00:02:22.354 SYMLINK libspdk_accel_iaa.so 00:02:22.354 SYMLINK libspdk_scheduler_dynamic.so 00:02:22.354 SYMLINK libspdk_blob_bdev.so 00:02:22.354 SYMLINK libspdk_accel_dsa.so 00:02:22.354 LIB libspdk_vfu_device.a 00:02:22.354 SO libspdk_vfu_device.so.3.0 00:02:22.617 SYMLINK libspdk_vfu_device.so 00:02:22.617 LIB libspdk_sock_posix.a 00:02:22.617 SO libspdk_sock_posix.so.6.0 00:02:22.878 SYMLINK libspdk_sock_posix.so 00:02:22.878 CC module/bdev/null/bdev_null.o 00:02:22.878 CC module/bdev/null/bdev_null_rpc.o 00:02:22.878 CC module/bdev/raid/bdev_raid.o 00:02:22.878 CC module/bdev/raid/bdev_raid_rpc.o 00:02:22.878 CC module/bdev/raid/bdev_raid_sb.o 00:02:22.878 CC module/bdev/raid/raid1.o 00:02:22.878 CC module/bdev/raid/raid0.o 00:02:22.878 CC module/blobfs/bdev/blobfs_bdev.o 00:02:22.878 CC module/bdev/raid/concat.o 00:02:22.878 CC module/bdev/error/vbdev_error.o 00:02:22.878 CC module/bdev/iscsi/bdev_iscsi.o 00:02:22.878 CC module/bdev/delay/vbdev_delay.o 00:02:22.878 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:22.878 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:22.878 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:22.878 CC module/bdev/passthru/vbdev_passthru.o 00:02:22.878 CC module/bdev/lvol/vbdev_lvol.o 00:02:22.878 CC module/bdev/aio/bdev_aio.o 00:02:22.878 CC module/bdev/gpt/gpt.o 00:02:22.878 CC module/bdev/error/vbdev_error_rpc.o 00:02:22.878 CC 
module/bdev/lvol/vbdev_lvol_rpc.o 00:02:22.878 CC module/bdev/gpt/vbdev_gpt.o 00:02:22.878 CC module/bdev/aio/bdev_aio_rpc.o 00:02:22.878 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:22.878 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:22.878 CC module/bdev/ftl/bdev_ftl.o 00:02:22.878 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:22.878 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:22.878 CC module/bdev/malloc/bdev_malloc.o 00:02:22.878 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:22.878 CC module/bdev/nvme/bdev_nvme.o 00:02:22.878 CC module/bdev/split/vbdev_split.o 00:02:22.878 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:22.878 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:22.878 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:22.878 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:22.878 CC module/bdev/split/vbdev_split_rpc.o 00:02:22.878 CC module/bdev/nvme/nvme_rpc.o 00:02:22.878 CC module/bdev/nvme/bdev_mdns_client.o 00:02:22.878 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:22.878 CC module/bdev/nvme/vbdev_opal.o 00:02:22.878 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:23.139 LIB libspdk_blobfs_bdev.a 00:02:23.139 SO libspdk_blobfs_bdev.so.6.0 00:02:23.139 LIB libspdk_bdev_null.a 00:02:23.139 LIB libspdk_bdev_split.a 00:02:23.139 LIB libspdk_bdev_error.a 00:02:23.400 SO libspdk_bdev_null.so.6.0 00:02:23.400 SO libspdk_bdev_split.so.6.0 00:02:23.400 SYMLINK libspdk_blobfs_bdev.so 00:02:23.400 SO libspdk_bdev_error.so.6.0 00:02:23.400 LIB libspdk_bdev_gpt.a 00:02:23.400 LIB libspdk_bdev_passthru.a 00:02:23.400 LIB libspdk_bdev_ftl.a 00:02:23.400 SYMLINK libspdk_bdev_null.so 00:02:23.400 SO libspdk_bdev_gpt.so.6.0 00:02:23.400 SYMLINK libspdk_bdev_split.so 00:02:23.400 LIB libspdk_bdev_aio.a 00:02:23.400 SO libspdk_bdev_passthru.so.6.0 00:02:23.400 SYMLINK libspdk_bdev_error.so 00:02:23.400 LIB libspdk_bdev_zone_block.a 00:02:23.400 SO libspdk_bdev_ftl.so.6.0 00:02:23.400 LIB libspdk_bdev_malloc.a 00:02:23.400 LIB libspdk_bdev_delay.a 00:02:23.400 LIB 
libspdk_bdev_iscsi.a 00:02:23.400 SO libspdk_bdev_aio.so.6.0 00:02:23.400 SO libspdk_bdev_zone_block.so.6.0 00:02:23.400 SO libspdk_bdev_malloc.so.6.0 00:02:23.400 SYMLINK libspdk_bdev_gpt.so 00:02:23.400 SO libspdk_bdev_delay.so.6.0 00:02:23.400 SYMLINK libspdk_bdev_passthru.so 00:02:23.400 SO libspdk_bdev_iscsi.so.6.0 00:02:23.400 SYMLINK libspdk_bdev_ftl.so 00:02:23.400 SYMLINK libspdk_bdev_aio.so 00:02:23.400 SYMLINK libspdk_bdev_zone_block.so 00:02:23.400 LIB libspdk_bdev_lvol.a 00:02:23.400 SYMLINK libspdk_bdev_delay.so 00:02:23.400 LIB libspdk_bdev_virtio.a 00:02:23.400 SYMLINK libspdk_bdev_malloc.so 00:02:23.400 SYMLINK libspdk_bdev_iscsi.so 00:02:23.661 SO libspdk_bdev_lvol.so.6.0 00:02:23.661 SO libspdk_bdev_virtio.so.6.0 00:02:23.661 SYMLINK libspdk_bdev_lvol.so 00:02:23.661 SYMLINK libspdk_bdev_virtio.so 00:02:23.923 LIB libspdk_bdev_raid.a 00:02:23.923 SO libspdk_bdev_raid.so.6.0 00:02:24.183 SYMLINK libspdk_bdev_raid.so 00:02:24.754 LIB libspdk_bdev_nvme.a 00:02:25.014 SO libspdk_bdev_nvme.so.7.0 00:02:25.014 SYMLINK libspdk_bdev_nvme.so 00:02:25.959 CC module/event/subsystems/sock/sock.o 00:02:25.959 CC module/event/subsystems/iobuf/iobuf.o 00:02:25.959 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:25.959 CC module/event/subsystems/vmd/vmd.o 00:02:25.959 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:25.959 CC module/event/subsystems/scheduler/scheduler.o 00:02:25.959 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:25.959 CC module/event/subsystems/keyring/keyring.o 00:02:25.959 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:25.959 LIB libspdk_event_sock.a 00:02:25.959 LIB libspdk_event_keyring.a 00:02:25.959 LIB libspdk_event_iobuf.a 00:02:25.959 LIB libspdk_event_vmd.a 00:02:25.959 LIB libspdk_event_scheduler.a 00:02:25.959 LIB libspdk_event_vfu_tgt.a 00:02:25.959 SO libspdk_event_sock.so.5.0 00:02:25.959 LIB libspdk_event_vhost_blk.a 00:02:25.959 SO libspdk_event_keyring.so.1.0 00:02:25.959 SO libspdk_event_iobuf.so.3.0 
00:02:25.959 SO libspdk_event_scheduler.so.4.0 00:02:25.959 SO libspdk_event_vfu_tgt.so.3.0 00:02:25.959 SO libspdk_event_vmd.so.6.0 00:02:25.959 SO libspdk_event_vhost_blk.so.3.0 00:02:25.959 SYMLINK libspdk_event_sock.so 00:02:25.959 SYMLINK libspdk_event_keyring.so 00:02:25.959 SYMLINK libspdk_event_scheduler.so 00:02:25.959 SYMLINK libspdk_event_vfu_tgt.so 00:02:25.959 SYMLINK libspdk_event_iobuf.so 00:02:25.959 SYMLINK libspdk_event_vhost_blk.so 00:02:25.959 SYMLINK libspdk_event_vmd.so 00:02:26.531 CC module/event/subsystems/accel/accel.o 00:02:26.531 LIB libspdk_event_accel.a 00:02:26.531 SO libspdk_event_accel.so.6.0 00:02:26.791 SYMLINK libspdk_event_accel.so 00:02:27.052 CC module/event/subsystems/bdev/bdev.o 00:02:27.313 LIB libspdk_event_bdev.a 00:02:27.313 SO libspdk_event_bdev.so.6.0 00:02:27.313 SYMLINK libspdk_event_bdev.so 00:02:27.574 CC module/event/subsystems/ublk/ublk.o 00:02:27.574 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:27.574 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:27.574 CC module/event/subsystems/nbd/nbd.o 00:02:27.574 CC module/event/subsystems/scsi/scsi.o 00:02:27.836 LIB libspdk_event_nbd.a 00:02:27.836 LIB libspdk_event_ublk.a 00:02:27.836 LIB libspdk_event_scsi.a 00:02:27.836 SO libspdk_event_nbd.so.6.0 00:02:27.836 SO libspdk_event_ublk.so.3.0 00:02:27.836 SO libspdk_event_scsi.so.6.0 00:02:27.836 LIB libspdk_event_nvmf.a 00:02:27.836 SYMLINK libspdk_event_nbd.so 00:02:27.836 SYMLINK libspdk_event_ublk.so 00:02:27.836 SO libspdk_event_nvmf.so.6.0 00:02:27.836 SYMLINK libspdk_event_scsi.so 00:02:28.097 SYMLINK libspdk_event_nvmf.so 00:02:28.358 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:28.358 CC module/event/subsystems/iscsi/iscsi.o 00:02:28.359 LIB libspdk_event_vhost_scsi.a 00:02:28.359 LIB libspdk_event_iscsi.a 00:02:28.620 SO libspdk_event_vhost_scsi.so.3.0 00:02:28.620 SO libspdk_event_iscsi.so.6.0 00:02:28.620 SYMLINK libspdk_event_vhost_scsi.so 00:02:28.620 SYMLINK libspdk_event_iscsi.so 
00:02:28.880 SO libspdk.so.6.0 00:02:28.880 SYMLINK libspdk.so 00:02:29.142 CC app/trace_record/trace_record.o 00:02:29.142 CXX app/trace/trace.o 00:02:29.142 CC app/spdk_lspci/spdk_lspci.o 00:02:29.142 CC app/spdk_nvme_discover/discovery_aer.o 00:02:29.142 CC app/spdk_top/spdk_top.o 00:02:29.142 TEST_HEADER include/spdk/accel.h 00:02:29.142 TEST_HEADER include/spdk/accel_module.h 00:02:29.142 TEST_HEADER include/spdk/assert.h 00:02:29.142 TEST_HEADER include/spdk/bdev.h 00:02:29.142 CC app/spdk_nvme_perf/perf.o 00:02:29.142 TEST_HEADER include/spdk/barrier.h 00:02:29.142 TEST_HEADER include/spdk/base64.h 00:02:29.142 CC app/spdk_nvme_identify/identify.o 00:02:29.142 TEST_HEADER include/spdk/bdev_module.h 00:02:29.142 TEST_HEADER include/spdk/bdev_zone.h 00:02:29.142 TEST_HEADER include/spdk/bit_pool.h 00:02:29.142 CC test/rpc_client/rpc_client_test.o 00:02:29.142 TEST_HEADER include/spdk/bit_array.h 00:02:29.142 TEST_HEADER include/spdk/blob_bdev.h 00:02:29.142 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:29.142 TEST_HEADER include/spdk/blobfs.h 00:02:29.142 TEST_HEADER include/spdk/blob.h 00:02:29.142 TEST_HEADER include/spdk/conf.h 00:02:29.142 TEST_HEADER include/spdk/config.h 00:02:29.142 TEST_HEADER include/spdk/cpuset.h 00:02:29.142 TEST_HEADER include/spdk/crc16.h 00:02:29.142 TEST_HEADER include/spdk/crc32.h 00:02:29.142 TEST_HEADER include/spdk/crc64.h 00:02:29.142 TEST_HEADER include/spdk/dif.h 00:02:29.142 TEST_HEADER include/spdk/dma.h 00:02:29.142 TEST_HEADER include/spdk/endian.h 00:02:29.142 TEST_HEADER include/spdk/env_dpdk.h 00:02:29.142 TEST_HEADER include/spdk/event.h 00:02:29.142 TEST_HEADER include/spdk/env.h 00:02:29.142 TEST_HEADER include/spdk/fd_group.h 00:02:29.142 TEST_HEADER include/spdk/fd.h 00:02:29.142 TEST_HEADER include/spdk/file.h 00:02:29.142 TEST_HEADER include/spdk/ftl.h 00:02:29.142 TEST_HEADER include/spdk/gpt_spec.h 00:02:29.142 TEST_HEADER include/spdk/hexlify.h 00:02:29.142 TEST_HEADER include/spdk/idxd.h 00:02:29.142 
TEST_HEADER include/spdk/histogram_data.h 00:02:29.142 TEST_HEADER include/spdk/idxd_spec.h 00:02:29.142 CC app/nvmf_tgt/nvmf_main.o 00:02:29.142 TEST_HEADER include/spdk/init.h 00:02:29.142 TEST_HEADER include/spdk/ioat_spec.h 00:02:29.142 TEST_HEADER include/spdk/ioat.h 00:02:29.142 CC app/iscsi_tgt/iscsi_tgt.o 00:02:29.142 TEST_HEADER include/spdk/iscsi_spec.h 00:02:29.142 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:29.142 TEST_HEADER include/spdk/json.h 00:02:29.142 TEST_HEADER include/spdk/jsonrpc.h 00:02:29.142 CC app/spdk_dd/spdk_dd.o 00:02:29.142 TEST_HEADER include/spdk/keyring.h 00:02:29.142 TEST_HEADER include/spdk/keyring_module.h 00:02:29.142 TEST_HEADER include/spdk/likely.h 00:02:29.142 TEST_HEADER include/spdk/log.h 00:02:29.142 TEST_HEADER include/spdk/lvol.h 00:02:29.142 TEST_HEADER include/spdk/memory.h 00:02:29.142 TEST_HEADER include/spdk/mmio.h 00:02:29.142 TEST_HEADER include/spdk/nbd.h 00:02:29.142 TEST_HEADER include/spdk/net.h 00:02:29.142 TEST_HEADER include/spdk/notify.h 00:02:29.142 TEST_HEADER include/spdk/nvme.h 00:02:29.142 CC app/spdk_tgt/spdk_tgt.o 00:02:29.142 TEST_HEADER include/spdk/nvme_intel.h 00:02:29.142 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:29.142 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:29.142 TEST_HEADER include/spdk/nvme_spec.h 00:02:29.142 TEST_HEADER include/spdk/nvme_zns.h 00:02:29.142 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:29.142 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:29.142 TEST_HEADER include/spdk/nvmf.h 00:02:29.142 TEST_HEADER include/spdk/nvmf_transport.h 00:02:29.142 TEST_HEADER include/spdk/nvmf_spec.h 00:02:29.142 TEST_HEADER include/spdk/opal.h 00:02:29.142 TEST_HEADER include/spdk/pci_ids.h 00:02:29.142 TEST_HEADER include/spdk/opal_spec.h 00:02:29.142 TEST_HEADER include/spdk/pipe.h 00:02:29.142 TEST_HEADER include/spdk/queue.h 00:02:29.142 TEST_HEADER include/spdk/reduce.h 00:02:29.142 TEST_HEADER include/spdk/rpc.h 00:02:29.142 TEST_HEADER include/spdk/scheduler.h 00:02:29.142 
TEST_HEADER include/spdk/scsi.h 00:02:29.402 TEST_HEADER include/spdk/scsi_spec.h 00:02:29.402 TEST_HEADER include/spdk/sock.h 00:02:29.402 TEST_HEADER include/spdk/stdinc.h 00:02:29.402 TEST_HEADER include/spdk/string.h 00:02:29.402 TEST_HEADER include/spdk/thread.h 00:02:29.402 TEST_HEADER include/spdk/trace_parser.h 00:02:29.402 TEST_HEADER include/spdk/trace.h 00:02:29.402 TEST_HEADER include/spdk/tree.h 00:02:29.402 TEST_HEADER include/spdk/ublk.h 00:02:29.402 TEST_HEADER include/spdk/util.h 00:02:29.402 TEST_HEADER include/spdk/uuid.h 00:02:29.402 TEST_HEADER include/spdk/version.h 00:02:29.402 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:29.402 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:29.402 TEST_HEADER include/spdk/vhost.h 00:02:29.402 TEST_HEADER include/spdk/vmd.h 00:02:29.402 TEST_HEADER include/spdk/zipf.h 00:02:29.402 TEST_HEADER include/spdk/xor.h 00:02:29.402 CXX test/cpp_headers/accel.o 00:02:29.402 CXX test/cpp_headers/accel_module.o 00:02:29.402 CXX test/cpp_headers/assert.o 00:02:29.402 CXX test/cpp_headers/barrier.o 00:02:29.402 CXX test/cpp_headers/base64.o 00:02:29.402 CXX test/cpp_headers/bdev_module.o 00:02:29.402 CXX test/cpp_headers/bdev.o 00:02:29.402 CXX test/cpp_headers/bdev_zone.o 00:02:29.402 CXX test/cpp_headers/blob_bdev.o 00:02:29.402 CXX test/cpp_headers/bit_array.o 00:02:29.402 CXX test/cpp_headers/bit_pool.o 00:02:29.402 CXX test/cpp_headers/blobfs_bdev.o 00:02:29.402 CXX test/cpp_headers/blobfs.o 00:02:29.402 CXX test/cpp_headers/blob.o 00:02:29.402 CXX test/cpp_headers/conf.o 00:02:29.402 CXX test/cpp_headers/config.o 00:02:29.402 CXX test/cpp_headers/cpuset.o 00:02:29.402 CXX test/cpp_headers/crc16.o 00:02:29.402 CXX test/cpp_headers/crc32.o 00:02:29.402 CXX test/cpp_headers/crc64.o 00:02:29.402 CXX test/cpp_headers/dif.o 00:02:29.402 CXX test/cpp_headers/endian.o 00:02:29.402 CXX test/cpp_headers/dma.o 00:02:29.402 CXX test/cpp_headers/env_dpdk.o 00:02:29.402 CXX test/cpp_headers/env.o 00:02:29.402 CXX 
test/cpp_headers/event.o 00:02:29.402 CXX test/cpp_headers/fd_group.o 00:02:29.402 CXX test/cpp_headers/fd.o 00:02:29.402 CXX test/cpp_headers/file.o 00:02:29.402 CXX test/cpp_headers/ftl.o 00:02:29.402 CXX test/cpp_headers/gpt_spec.o 00:02:29.402 CXX test/cpp_headers/histogram_data.o 00:02:29.402 CXX test/cpp_headers/hexlify.o 00:02:29.402 CXX test/cpp_headers/idxd_spec.o 00:02:29.402 CXX test/cpp_headers/init.o 00:02:29.402 CXX test/cpp_headers/ioat.o 00:02:29.402 CXX test/cpp_headers/idxd.o 00:02:29.402 CXX test/cpp_headers/ioat_spec.o 00:02:29.402 CXX test/cpp_headers/iscsi_spec.o 00:02:29.402 CXX test/cpp_headers/jsonrpc.o 00:02:29.402 CXX test/cpp_headers/json.o 00:02:29.402 CXX test/cpp_headers/keyring.o 00:02:29.403 CXX test/cpp_headers/keyring_module.o 00:02:29.403 CXX test/cpp_headers/likely.o 00:02:29.403 CXX test/cpp_headers/log.o 00:02:29.403 CXX test/cpp_headers/memory.o 00:02:29.403 CXX test/cpp_headers/lvol.o 00:02:29.403 CXX test/cpp_headers/mmio.o 00:02:29.403 CXX test/cpp_headers/nbd.o 00:02:29.403 CXX test/cpp_headers/notify.o 00:02:29.403 CXX test/cpp_headers/net.o 00:02:29.403 CXX test/cpp_headers/nvme_ocssd.o 00:02:29.403 CXX test/cpp_headers/nvme_intel.o 00:02:29.403 CXX test/cpp_headers/nvme.o 00:02:29.403 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:29.403 CXX test/cpp_headers/nvme_spec.o 00:02:29.403 CXX test/cpp_headers/nvmf_cmd.o 00:02:29.403 CXX test/cpp_headers/nvme_zns.o 00:02:29.403 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:29.403 CXX test/cpp_headers/opal.o 00:02:29.403 CXX test/cpp_headers/nvmf_spec.o 00:02:29.403 CXX test/cpp_headers/nvmf.o 00:02:29.403 CXX test/cpp_headers/nvmf_transport.o 00:02:29.403 CXX test/cpp_headers/opal_spec.o 00:02:29.403 CXX test/cpp_headers/pipe.o 00:02:29.403 LINK spdk_lspci 00:02:29.403 CXX test/cpp_headers/pci_ids.o 00:02:29.403 CXX test/cpp_headers/queue.o 00:02:29.403 CXX test/cpp_headers/reduce.o 00:02:29.403 CXX test/cpp_headers/scsi.o 00:02:29.403 CXX test/cpp_headers/rpc.o 00:02:29.403 CXX 
test/cpp_headers/scheduler.o 00:02:29.403 CXX test/cpp_headers/sock.o 00:02:29.403 CXX test/cpp_headers/stdinc.o 00:02:29.403 CXX test/cpp_headers/scsi_spec.o 00:02:29.403 CXX test/cpp_headers/string.o 00:02:29.403 CXX test/cpp_headers/thread.o 00:02:29.403 CXX test/cpp_headers/tree.o 00:02:29.403 CXX test/cpp_headers/ublk.o 00:02:29.403 CC test/env/vtophys/vtophys.o 00:02:29.403 CC test/env/memory/memory_ut.o 00:02:29.403 CXX test/cpp_headers/trace.o 00:02:29.403 CXX test/cpp_headers/trace_parser.o 00:02:29.403 CXX test/cpp_headers/util.o 00:02:29.403 CC test/app/stub/stub.o 00:02:29.403 CXX test/cpp_headers/uuid.o 00:02:29.403 CC test/env/pci/pci_ut.o 00:02:29.403 CC examples/util/zipf/zipf.o 00:02:29.403 CXX test/cpp_headers/version.o 00:02:29.403 CXX test/cpp_headers/vfio_user_pci.o 00:02:29.403 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:29.403 CXX test/cpp_headers/vfio_user_spec.o 00:02:29.403 CXX test/cpp_headers/vmd.o 00:02:29.403 CXX test/cpp_headers/xor.o 00:02:29.403 CXX test/cpp_headers/vhost.o 00:02:29.403 CXX test/cpp_headers/zipf.o 00:02:29.403 CC test/app/histogram_perf/histogram_perf.o 00:02:29.403 CC app/fio/nvme/fio_plugin.o 00:02:29.403 CC test/thread/poller_perf/poller_perf.o 00:02:29.403 CC examples/ioat/verify/verify.o 00:02:29.403 CC test/app/jsoncat/jsoncat.o 00:02:29.403 CC examples/ioat/perf/perf.o 00:02:29.403 LINK spdk_nvme_discover 00:02:29.403 LINK rpc_client_test 00:02:29.403 LINK nvmf_tgt 00:02:29.664 CC app/fio/bdev/fio_plugin.o 00:02:29.664 CC test/dma/test_dma/test_dma.o 00:02:29.664 CC test/app/bdev_svc/bdev_svc.o 00:02:29.664 LINK spdk_trace_record 00:02:29.664 LINK iscsi_tgt 00:02:29.664 LINK interrupt_tgt 00:02:29.664 LINK spdk_tgt 00:02:29.664 CC test/env/mem_callbacks/mem_callbacks.o 00:02:29.664 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:29.664 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:29.923 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:29.923 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:29.923 
LINK spdk_trace 00:02:29.923 LINK histogram_perf 00:02:29.923 LINK stub 00:02:29.923 LINK spdk_dd 00:02:29.923 LINK poller_perf 00:02:29.923 LINK bdev_svc 00:02:29.923 LINK vtophys 00:02:29.923 LINK jsoncat 00:02:29.923 LINK zipf 00:02:29.923 LINK env_dpdk_post_init 00:02:30.183 LINK verify 00:02:30.183 LINK ioat_perf 00:02:30.183 LINK pci_ut 00:02:30.183 LINK spdk_nvme_perf 00:02:30.183 LINK spdk_nvme 00:02:30.183 CC app/vhost/vhost.o 00:02:30.444 LINK nvme_fuzz 00:02:30.444 LINK test_dma 00:02:30.444 LINK spdk_bdev 00:02:30.444 LINK vhost_fuzz 00:02:30.444 LINK vhost 00:02:30.444 LINK spdk_nvme_identify 00:02:30.444 CC test/event/event_perf/event_perf.o 00:02:30.444 LINK spdk_top 00:02:30.444 LINK mem_callbacks 00:02:30.444 CC test/event/reactor_perf/reactor_perf.o 00:02:30.444 CC test/event/reactor/reactor.o 00:02:30.444 CC examples/sock/hello_world/hello_sock.o 00:02:30.444 CC test/event/app_repeat/app_repeat.o 00:02:30.444 CC examples/vmd/lsvmd/lsvmd.o 00:02:30.444 CC examples/idxd/perf/perf.o 00:02:30.444 CC examples/vmd/led/led.o 00:02:30.444 CC test/event/scheduler/scheduler.o 00:02:30.444 CC examples/thread/thread/thread_ex.o 00:02:30.704 LINK event_perf 00:02:30.704 LINK reactor 00:02:30.704 LINK reactor_perf 00:02:30.704 LINK app_repeat 00:02:30.704 LINK lsvmd 00:02:30.704 LINK led 00:02:30.704 LINK hello_sock 00:02:30.704 LINK scheduler 00:02:30.965 LINK idxd_perf 00:02:30.965 LINK thread 00:02:30.965 LINK memory_ut 00:02:30.965 CC test/nvme/reset/reset.o 00:02:30.965 CC test/nvme/sgl/sgl.o 00:02:30.965 CC test/nvme/e2edp/nvme_dp.o 00:02:30.965 CC test/nvme/aer/aer.o 00:02:30.965 CC test/nvme/simple_copy/simple_copy.o 00:02:30.965 CC test/accel/dif/dif.o 00:02:30.965 CC test/nvme/overhead/overhead.o 00:02:30.965 CC test/nvme/fused_ordering/fused_ordering.o 00:02:30.965 CC test/blobfs/mkfs/mkfs.o 00:02:30.965 CC test/nvme/fdp/fdp.o 00:02:30.965 CC test/nvme/compliance/nvme_compliance.o 00:02:30.965 CC test/nvme/err_injection/err_injection.o 00:02:30.965 
CC test/nvme/boot_partition/boot_partition.o 00:02:30.965 CC test/nvme/connect_stress/connect_stress.o 00:02:30.965 CC test/nvme/cuse/cuse.o 00:02:30.965 CC test/nvme/reserve/reserve.o 00:02:30.965 CC test/nvme/startup/startup.o 00:02:30.965 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:30.965 CC test/lvol/esnap/esnap.o 00:02:31.225 LINK startup 00:02:31.225 LINK connect_stress 00:02:31.225 LINK boot_partition 00:02:31.225 LINK simple_copy 00:02:31.225 LINK fused_ordering 00:02:31.225 LINK err_injection 00:02:31.225 LINK doorbell_aers 00:02:31.225 LINK reserve 00:02:31.225 LINK reset 00:02:31.225 LINK mkfs 00:02:31.225 LINK nvme_dp 00:02:31.225 LINK sgl 00:02:31.225 LINK aer 00:02:31.225 LINK overhead 00:02:31.225 LINK nvme_compliance 00:02:31.225 LINK fdp 00:02:31.225 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:31.225 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:31.225 CC examples/nvme/reconnect/reconnect.o 00:02:31.225 CC examples/nvme/hello_world/hello_world.o 00:02:31.225 CC examples/nvme/arbitration/arbitration.o 00:02:31.225 CC examples/nvme/hotplug/hotplug.o 00:02:31.225 CC examples/nvme/abort/abort.o 00:02:31.225 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:31.225 LINK iscsi_fuzz 00:02:31.486 LINK dif 00:02:31.486 CC examples/accel/perf/accel_perf.o 00:02:31.486 CC examples/blob/cli/blobcli.o 00:02:31.486 CC examples/blob/hello_world/hello_blob.o 00:02:31.486 LINK cmb_copy 00:02:31.486 LINK pmr_persistence 00:02:31.486 LINK hello_world 00:02:31.486 LINK hotplug 00:02:31.748 LINK arbitration 00:02:31.748 LINK reconnect 00:02:31.748 LINK abort 00:02:31.748 LINK hello_blob 00:02:31.748 LINK nvme_manage 00:02:31.748 LINK accel_perf 00:02:32.009 CC test/bdev/bdevio/bdevio.o 00:02:32.009 LINK blobcli 00:02:32.009 LINK cuse 00:02:32.271 LINK bdevio 00:02:32.533 CC examples/bdev/hello_world/hello_bdev.o 00:02:32.533 CC examples/bdev/bdevperf/bdevperf.o 00:02:32.795 LINK hello_bdev 00:02:33.056 LINK bdevperf 00:02:34.001 CC 
examples/nvmf/nvmf/nvmf.o 00:02:34.001 LINK nvmf 00:02:35.389 LINK esnap 00:02:35.650 00:02:35.650 real 0m51.845s 00:02:35.650 user 6m32.338s 00:02:35.650 sys 4m17.507s 00:02:35.650 14:58:27 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:35.650 14:58:27 make -- common/autotest_common.sh@10 -- $ set +x 00:02:35.650 ************************************ 00:02:35.650 END TEST make 00:02:35.650 ************************************ 00:02:35.650 14:58:27 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:35.650 14:58:27 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:35.650 14:58:27 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:35.650 14:58:27 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:35.650 14:58:27 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:35.650 14:58:27 -- pm/common@44 -- $ pid=4105031 00:02:35.650 14:58:27 -- pm/common@50 -- $ kill -TERM 4105031 00:02:35.650 14:58:27 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:35.650 14:58:27 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:35.650 14:58:27 -- pm/common@44 -- $ pid=4105032 00:02:35.650 14:58:27 -- pm/common@50 -- $ kill -TERM 4105032 00:02:35.650 14:58:27 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:35.650 14:58:27 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:35.650 14:58:27 -- pm/common@44 -- $ pid=4105034 00:02:35.650 14:58:27 -- pm/common@50 -- $ kill -TERM 4105034 00:02:35.650 14:58:27 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:35.650 14:58:27 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:35.650 14:58:27 -- pm/common@44 -- $ pid=4105057 00:02:35.650 14:58:27 -- 
pm/common@50 -- $ sudo -E kill -TERM 4105057 00:02:35.912 14:58:27 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:35.912 14:58:27 -- nvmf/common.sh@7 -- # uname -s 00:02:35.912 14:58:27 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:35.912 14:58:27 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:35.912 14:58:27 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:35.912 14:58:27 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:35.912 14:58:27 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:35.912 14:58:27 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:35.912 14:58:27 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:35.912 14:58:27 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:35.912 14:58:27 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:35.912 14:58:27 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:35.912 14:58:27 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:02:35.912 14:58:27 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:02:35.912 14:58:27 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:35.912 14:58:27 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:35.912 14:58:27 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:35.912 14:58:27 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:35.912 14:58:27 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:35.912 14:58:27 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:35.912 14:58:27 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:35.912 14:58:27 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:35.912 14:58:27 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:35.912 14:58:27 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:35.912 14:58:27 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:35.912 14:58:27 -- paths/export.sh@5 -- # export PATH 00:02:35.912 14:58:27 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:35.912 14:58:27 -- nvmf/common.sh@47 -- # : 0 00:02:35.912 14:58:27 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:02:35.912 14:58:27 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:02:35.912 14:58:27 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:35.912 14:58:27 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:35.912 14:58:27 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:35.912 14:58:27 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:02:35.912 14:58:27 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:02:35.912 14:58:27 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:02:35.912 14:58:27 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:35.912 14:58:27 -- spdk/autotest.sh@32 -- # 
uname -s 00:02:35.913 14:58:28 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:35.913 14:58:28 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:35.913 14:58:28 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:35.913 14:58:28 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:35.913 14:58:28 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:35.913 14:58:28 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:35.913 14:58:28 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:35.913 14:58:28 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:35.913 14:58:28 -- spdk/autotest.sh@48 -- # udevadm_pid=4168664 00:02:35.913 14:58:28 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:35.913 14:58:28 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:35.913 14:58:28 -- pm/common@17 -- # local monitor 00:02:35.913 14:58:28 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:35.913 14:58:28 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:35.913 14:58:28 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:35.913 14:58:28 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:35.913 14:58:28 -- pm/common@21 -- # date +%s 00:02:35.913 14:58:28 -- pm/common@21 -- # date +%s 00:02:35.913 14:58:28 -- pm/common@25 -- # sleep 1 00:02:35.913 14:58:28 -- pm/common@21 -- # date +%s 00:02:35.913 14:58:28 -- pm/common@21 -- # date +%s 00:02:35.913 14:58:28 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721912308 00:02:35.913 14:58:28 -- pm/common@21 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721912308 00:02:35.913 14:58:28 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721912308 00:02:35.913 14:58:28 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721912308 00:02:35.913 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721912308_collect-vmstat.pm.log 00:02:35.913 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721912308_collect-cpu-load.pm.log 00:02:35.913 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721912308_collect-cpu-temp.pm.log 00:02:35.913 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721912308_collect-bmc-pm.bmc.pm.log 00:02:36.856 14:58:29 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:36.856 14:58:29 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:36.856 14:58:29 -- common/autotest_common.sh@724 -- # xtrace_disable 00:02:36.856 14:58:29 -- common/autotest_common.sh@10 -- # set +x 00:02:36.856 14:58:29 -- spdk/autotest.sh@59 -- # create_test_list 00:02:36.856 14:58:29 -- common/autotest_common.sh@748 -- # xtrace_disable 00:02:36.856 14:58:29 -- common/autotest_common.sh@10 -- # set +x 00:02:37.116 14:58:29 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:37.117 14:58:29 -- spdk/autotest.sh@61 -- # readlink -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:37.117 14:58:29 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:37.117 14:58:29 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:37.117 14:58:29 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:37.117 14:58:29 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:37.117 14:58:29 -- common/autotest_common.sh@1455 -- # uname 00:02:37.117 14:58:29 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:02:37.117 14:58:29 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:37.117 14:58:29 -- common/autotest_common.sh@1475 -- # uname 00:02:37.117 14:58:29 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:02:37.117 14:58:29 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:02:37.117 14:58:29 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:02:37.117 14:58:29 -- spdk/autotest.sh@72 -- # hash lcov 00:02:37.117 14:58:29 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:02:37.117 14:58:29 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:02:37.117 --rc lcov_branch_coverage=1 00:02:37.117 --rc lcov_function_coverage=1 00:02:37.117 --rc genhtml_branch_coverage=1 00:02:37.117 --rc genhtml_function_coverage=1 00:02:37.117 --rc genhtml_legend=1 00:02:37.117 --rc geninfo_all_blocks=1 00:02:37.117 ' 00:02:37.117 14:58:29 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:02:37.117 --rc lcov_branch_coverage=1 00:02:37.117 --rc lcov_function_coverage=1 00:02:37.117 --rc genhtml_branch_coverage=1 00:02:37.117 --rc genhtml_function_coverage=1 00:02:37.117 --rc genhtml_legend=1 00:02:37.117 --rc geninfo_all_blocks=1 00:02:37.117 ' 00:02:37.117 14:58:29 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:02:37.117 --rc lcov_branch_coverage=1 00:02:37.117 --rc lcov_function_coverage=1 00:02:37.117 --rc genhtml_branch_coverage=1 00:02:37.117 --rc 
genhtml_function_coverage=1 00:02:37.117 --rc genhtml_legend=1 00:02:37.117 --rc geninfo_all_blocks=1 00:02:37.117 --no-external' 00:02:37.117 14:58:29 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:02:37.117 --rc lcov_branch_coverage=1 00:02:37.117 --rc lcov_function_coverage=1 00:02:37.117 --rc genhtml_branch_coverage=1 00:02:37.117 --rc genhtml_function_coverage=1 00:02:37.117 --rc genhtml_legend=1 00:02:37.117 --rc geninfo_all_blocks=1 00:02:37.117 --no-external' 00:02:37.117 14:58:29 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:37.117 lcov: LCOV version 1.14 00:02:37.117 14:58:29 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:52.090 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:52.090 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:02.093 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:03:02.093 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:03:02.093 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:03:02.093 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:03:02.093 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no 
functions found 00:03:02.093 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:03:02.093 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:03:02.093 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:03:02.093 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:03:02.093 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:03:02.093 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:03:02.093 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:03:02.093 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:03:02.093 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:03:02.093 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:03:02.093 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:03:02.093 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:03:02.093 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:03:02.354 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:03:02.354 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:03:02.354 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:03:02.354 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:03:02.354 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:03:02.354 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:03:02.354 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:03:02.354 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:03:02.354 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:03:02.354 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:03:02.354 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:03:02.354 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:03:02.354 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:03:02.354 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:03:02.354 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:03:02.354 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:03:02.354 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:03:02.354 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:03:02.354 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:03:02.354 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:03:02.354 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:03:02.354 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:03:02.354 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:03:02.354 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:03:02.354 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:03:02.354 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:03:02.354 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:03:02.354 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:03:02.354 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:03:02.354 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:03:02.354 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:03:02.354 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:03:02.354 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:03:02.354 geninfo: WARNING: GCOV did not 
produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno
00:03:02.354 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found
00:03:02.354 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno
00:03:02.354 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found
00:03:02.354 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno
00:03:02.354 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found
00:03:02.354 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno
00:03:02.354 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found
00:03:02.354 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno
00:03:02.354 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found
00:03:02.354 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno
00:03:02.354 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found
00:03:02.354 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno
00:03:02.354 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found
00:03:02.354 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno
00:03:02.354 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found
00:03:02.354 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno
00:03:02.354 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found
00:03:02.354 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno
00:03:02.354 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found
00:03:02.354 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno
00:03:02.354 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found
00:03:02.354 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno
00:03:02.615 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found
00:03:02.615 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno
00:03:02.615 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found
00:03:02.615 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno
00:03:02.615 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found
00:03:02.615 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno
00:03:02.615 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found
00:03:02.615 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno
00:03:02.615 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found
00:03:02.615 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno
00:03:02.615 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found
00:03:02.615 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno
00:03:02.615 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found
00:03:02.615 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno
00:03:02.615 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found
00:03:02.615 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno
00:03:02.615 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found
00:03:02.615 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno
00:03:02.615 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found
00:03:02.615 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno
00:03:02.615 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found
00:03:02.615 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno
00:03:02.615 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found
00:03:02.615 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno
00:03:02.615 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found
00:03:02.615 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno
00:03:02.615 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found
00:03:02.615 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno
00:03:02.615 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found
00:03:02.615 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno
00:03:02.615 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found
00:03:02.615 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno
00:03:02.615 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found
00:03:02.615 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno
00:03:02.615 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/net.gcno:no functions found
00:03:02.615 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/net.gcno
00:03:02.615 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found
00:03:02.615 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno
00:03:02.616 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found
00:03:02.616 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno
00:03:02.616 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found
00:03:02.616 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno
00:03:02.616 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found
00:03:02.616 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno
00:03:02.616 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found
00:03:02.616 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno
00:03:02.616 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found
00:03:02.616 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno
00:03:02.616 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found
00:03:02.616 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno
00:03:02.616 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found
00:03:02.616 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno
00:03:02.616 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found
00:03:02.616 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno
00:03:02.616 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found
00:03:02.616 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno
00:03:02.876 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found
00:03:02.876 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno
00:03:02.876 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found
00:03:02.876 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno
00:03:02.876 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found
00:03:02.876 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno
00:03:02.876 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found
00:03:02.876 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno
00:03:02.876 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found
00:03:02.876 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno
00:03:02.876 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found
00:03:02.877 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno
00:03:02.877 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found
00:03:02.877 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno
00:03:02.877 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found
00:03:02.877 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno
00:03:02.877 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found
00:03:02.877 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno
00:03:02.877 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found
00:03:02.877 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno
00:03:02.877 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found
00:03:02.877 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno
00:03:02.877 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found
00:03:02.877 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno
00:03:02.877 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found
00:03:02.877 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno
00:03:02.877 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found
00:03:02.877 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno
00:03:02.877 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found
00:03:02.877 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno
00:03:02.877 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found
00:03:02.877 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno
00:03:02.877 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found
00:03:02.877 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno
00:03:02.877 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found
00:03:02.877 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno
00:03:02.877 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found
00:03:02.877 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno
00:03:02.877 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found
00:03:02.877 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno
00:03:02.877 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found
00:03:02.877 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno
00:03:02.877 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found
00:03:02.877 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno
00:03:02.877 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found
00:03:02.877 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno
00:03:02.877 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found
00:03:02.877 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno
00:03:06.180 14:58:58 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup
00:03:06.180 14:58:58 -- common/autotest_common.sh@724 -- # xtrace_disable
00:03:06.180 14:58:58 -- common/autotest_common.sh@10 -- # set +x
00:03:06.180 14:58:58 -- spdk/autotest.sh@91 -- # rm -f
00:03:06.180 14:58:58 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:03:09.482 0000:80:01.6 (8086 0b00): Already using the ioatdma driver
00:03:09.482 0000:80:01.7 (8086 0b00): Already using the ioatdma driver
00:03:09.482 0000:80:01.4 (8086 0b00): Already using the ioatdma driver
00:03:09.482 0000:80:01.5 (8086 0b00): Already using the ioatdma driver
00:03:09.482 0000:80:01.2 (8086 0b00): Already using the ioatdma driver
00:03:09.482 0000:80:01.3 (8086 0b00): Already using the ioatdma driver
00:03:09.482 0000:80:01.0 (8086 0b00): Already using the ioatdma driver
00:03:09.482 0000:80:01.1 (8086 0b00): Already using the ioatdma driver
00:03:09.482 0000:65:00.0 (144d a80a): Already using the nvme driver
00:03:09.482 0000:00:01.6 (8086 0b00): Already using the ioatdma driver
00:03:09.482 0000:00:01.7 (8086 0b00): Already using the ioatdma driver
00:03:09.482 0000:00:01.4 (8086 0b00): Already using the ioatdma driver
00:03:09.482 0000:00:01.5 (8086 0b00): Already using the ioatdma driver
00:03:09.482 0000:00:01.2 (8086 0b00): Already using the ioatdma driver
00:03:09.482 0000:00:01.3 (8086 0b00): Already using the ioatdma driver
00:03:09.482 0000:00:01.0 (8086 0b00): Already using the ioatdma driver
00:03:09.482 0000:00:01.1 (8086 0b00): Already using the ioatdma driver
00:03:09.742 14:59:01 -- spdk/autotest.sh@96 -- # get_zoned_devs
00:03:09.742 14:59:01 -- common/autotest_common.sh@1669 -- # zoned_devs=()
00:03:09.742 14:59:01 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs
00:03:09.742 14:59:01 -- common/autotest_common.sh@1670 -- # local nvme bdf
00:03:09.742 14:59:01 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme*
00:03:09.742 14:59:01 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1
00:03:09.742 14:59:01 -- common/autotest_common.sh@1662 -- # local device=nvme0n1
00:03:09.742 14:59:01 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:03:09.742 14:59:01 -- common/autotest_common.sh@1665 -- # [[ none != none ]]
00:03:09.742 14:59:01 -- spdk/autotest.sh@98 -- # (( 0 > 0 ))
00:03:09.742 14:59:01 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*)
00:03:09.742 14:59:01 -- spdk/autotest.sh@112 -- # [[ -z '' ]]
00:03:09.742 14:59:01 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1
00:03:09.742 14:59:01 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt
00:03:09.742 14:59:01 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1
No valid GPT data, bailing
00:03:09.743 14:59:01 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:03:09.743 14:59:01 -- scripts/common.sh@391 -- # pt=
00:03:09.743 14:59:01 -- scripts/common.sh@392 -- # return 1
00:03:09.743 14:59:01 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1
00:03:10.003 1+0 records in
00:03:10.003 1+0 records out
00:03:10.003 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00492798 s, 213 MB/s
00:03:10.003 14:59:01 -- spdk/autotest.sh@118 -- # sync
00:03:10.003 14:59:01 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes
00:03:10.003 14:59:01 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null'
00:03:10.003 14:59:01 -- common/autotest_common.sh@22 -- # reap_spdk_processes
00:03:18.145 14:59:09 -- spdk/autotest.sh@124 -- # uname -s
00:03:18.145 14:59:09 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']'
00:03:18.145 14:59:09 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh
00:03:18.145 14:59:09 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:03:18.145 14:59:09 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:03:18.145 14:59:09 -- common/autotest_common.sh@10 -- # set +x
00:03:18.145 ************************************
00:03:18.145 START TEST setup.sh
00:03:18.145 ************************************
00:03:18.145 14:59:09 setup.sh -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh
00:03:18.145 * Looking for test storage...
00:03:18.145 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup
00:03:18.145 14:59:09 setup.sh -- setup/test-setup.sh@10 -- # uname -s
00:03:18.145 14:59:10 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]]
00:03:18.145 14:59:10 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh
00:03:18.145 14:59:10 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:03:18.145 14:59:10 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable
00:03:18.145 14:59:10 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:03:18.145 ************************************
00:03:18.145 START TEST acl
00:03:18.145 ************************************
00:03:18.145 14:59:10 setup.sh.acl -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh
00:03:18.145 * Looking for test storage...
00:03:18.145 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup
00:03:18.145 14:59:10 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs
00:03:18.145 14:59:10 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=()
00:03:18.145 14:59:10 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs
00:03:18.146 14:59:10 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf
00:03:18.146 14:59:10 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme*
00:03:18.146 14:59:10 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1
00:03:18.146 14:59:10 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1
00:03:18.146 14:59:10 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:03:18.146 14:59:10 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]]
00:03:18.146 14:59:10 setup.sh.acl -- setup/acl.sh@12 -- # devs=()
00:03:18.146 14:59:10 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs
00:03:18.146 14:59:10 setup.sh.acl -- setup/acl.sh@13 -- # drivers=()
00:03:18.146 14:59:10 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers
00:03:18.146 14:59:10 setup.sh.acl -- setup/acl.sh@51 -- # setup reset
00:03:18.146 14:59:10 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]]
00:03:18.146 14:59:10 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:03:22.353 14:59:14 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs
00:03:22.353 14:59:14 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver
00:03:22.353 14:59:14 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:22.353 14:59:14 setup.sh.acl -- setup/acl.sh@15 -- # setup output status
00:03:22.353 14:59:14 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]]
00:03:22.353 14:59:14 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:03:25.725 Hugepages
00:03:25.725 node hugesize free / total
00:03:25.725 14:59:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]]
00:03:25.725 14:59:17 setup.sh.acl -- setup/acl.sh@19 -- # continue
00:03:25.725 14:59:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:25.725 14:59:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]]
00:03:25.725 14:59:17 setup.sh.acl -- setup/acl.sh@19 -- # continue
00:03:25.725 14:59:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:25.725 14:59:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]]
00:03:25.725 14:59:17 setup.sh.acl -- setup/acl.sh@19 -- # continue
00:03:25.725 14:59:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:25.725
00:03:25.725 Type BDF Vendor Device NUMA Driver Device Block devices
00:03:25.725 14:59:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]]
14:59:17 setup.sh.acl -- setup/acl.sh@19 -- # continue
00:03:25.725 14:59:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:25.725 14:59:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.0 == *:*:*.* ]]
00:03:25.725 14:59:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:03:25.725 14:59:17 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:03:25.725 14:59:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:25.725 14:59:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.1 == *:*:*.* ]]
00:03:25.725 14:59:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:03:25.725 14:59:17 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:03:25.725 14:59:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:25.725 14:59:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.2 == *:*:*.* ]]
00:03:25.725 14:59:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:03:25.725 14:59:17 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:03:25.725 14:59:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:25.725 14:59:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.3 == *:*:*.* ]]
00:03:25.725 14:59:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:03:25.725 14:59:17 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:03:25.725 14:59:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:25.725 14:59:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.4 == *:*:*.* ]]
00:03:25.725 14:59:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:03:25.725 14:59:17 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:03:25.725 14:59:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:25.725 14:59:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.5 == *:*:*.* ]]
00:03:25.725 14:59:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:03:25.725 14:59:17 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:03:25.725 14:59:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:25.725 14:59:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.6 == *:*:*.* ]]
00:03:25.725 14:59:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:03:25.725 14:59:17 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:03:25.725 14:59:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:25.725 14:59:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.7 == *:*:*.* ]]
00:03:25.725 14:59:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:03:25.725 14:59:17 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:03:25.725 14:59:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:25.725 14:59:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:65:00.0 == *:*:*.* ]]
00:03:25.725 14:59:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]]
00:03:25.725 14:59:17 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\6\5\:\0\0\.\0* ]]
00:03:25.725 14:59:17 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev")
00:03:25.725 14:59:17 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme
00:03:25.725 14:59:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:25.725 14:59:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.0 == *:*:*.* ]]
00:03:25.725 14:59:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:03:25.725 14:59:17 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:03:25.725 14:59:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:25.725 14:59:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.1 == *:*:*.* ]]
00:03:25.725 14:59:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:03:25.725 14:59:17 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:03:25.725 14:59:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:25.725 14:59:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.2 == *:*:*.* ]]
00:03:25.725 14:59:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:03:25.725 14:59:17 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:03:25.725 14:59:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:25.725 14:59:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.3 == *:*:*.* ]]
00:03:25.725 14:59:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:03:25.725 14:59:17 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:03:25.725 14:59:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:25.725 14:59:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.4 == *:*:*.* ]]
00:03:25.725 14:59:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:03:25.725 14:59:17 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:03:25.725 14:59:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:25.725 14:59:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.5 == *:*:*.* ]]
00:03:25.725 14:59:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:03:25.725 14:59:17 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:03:25.725 14:59:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:25.725 14:59:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.6 == *:*:*.* ]]
00:03:25.725 14:59:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:03:25.725 14:59:17 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:03:25.725 14:59:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:25.725 14:59:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.7 == *:*:*.* ]]
00:03:25.725 14:59:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:03:25.725 14:59:17 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:03:25.725 14:59:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:25.725 14:59:17 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 ))
00:03:25.725 14:59:17 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied
00:03:25.725 14:59:17 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:03:25.725 14:59:17 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable
00:03:25.725 14:59:17 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x
00:03:25.725 ************************************
00:03:25.725 START TEST denied
00:03:25.725 ************************************
00:03:25.725 14:59:17 setup.sh.acl.denied -- common/autotest_common.sh@1125 -- # denied
00:03:25.725 14:59:17 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:65:00.0'
00:03:25.725 14:59:17 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config
00:03:25.725 14:59:17 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:65:00.0'
00:03:25.725 14:59:17 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]]
00:03:25.725 14:59:17 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:03:29.934 0000:65:00.0 (144d a80a): Skipping denied controller at 0000:65:00.0
00:03:29.934 14:59:21 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:65:00.0
00:03:29.934 14:59:21 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver
00:03:29.934 14:59:21 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@"
00:03:29.934 14:59:21 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:65:00.0 ]]
00:03:29.934 14:59:21 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:65:00.0/driver
00:03:29.934 14:59:21 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme
00:03:29.934 14:59:21 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]]
00:03:29.934 14:59:21 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset
00:03:29.934 14:59:21 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]]
00:03:29.934 14:59:21 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:03:34.142
00:03:34.142 real 0m8.519s
00:03:34.142 user 0m2.847s
00:03:34.142 sys 0m4.955s
00:03:34.142 14:59:26 setup.sh.acl.denied -- common/autotest_common.sh@1126 -- # xtrace_disable
00:03:34.142 14:59:26 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x
00:03:34.142 ************************************
00:03:34.142 END TEST denied
00:03:34.142 ************************************
00:03:34.142 14:59:26 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed
00:03:34.142 14:59:26 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:03:34.142 14:59:26 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable
00:03:34.142 14:59:26 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x
00:03:34.142 ************************************
00:03:34.142 START TEST allowed
00:03:34.142 ************************************
00:03:34.142 14:59:26 setup.sh.acl.allowed -- common/autotest_common.sh@1125 -- # allowed
00:03:34.142 14:59:26 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:65:00.0
00:03:34.142 14:59:26 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config
00:03:34.142 14:59:26 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]]
00:03:34.142 14:59:26 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:03:34.142 14:59:26 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:65:00.0 .*: nvme -> .*'
00:03:40.734 0000:65:00.0 (144d a80a): nvme -> vfio-pci
00:03:40.734 14:59:31 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify
00:03:40.734 14:59:31 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver
00:03:40.734 14:59:31 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset
00:03:40.734 14:59:31 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]]
00:03:40.734 14:59:31 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:03:44.040
00:03:44.040 real 0m9.459s
00:03:44.040 user 0m2.889s
00:03:44.040 sys 0m4.843s
00:03:44.040 14:59:35 setup.sh.acl.allowed -- common/autotest_common.sh@1126 -- # xtrace_disable
00:03:44.040 14:59:35 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x
00:03:44.040 ************************************
00:03:44.040 END TEST allowed
00:03:44.040 ************************************
00:03:44.040
00:03:44.040 real 0m25.748s
00:03:44.040 user 0m8.638s
00:03:44.040 sys 0m14.863s
00:03:44.040 14:59:35 setup.sh.acl -- common/autotest_common.sh@1126 -- # xtrace_disable
00:03:44.040 14:59:35 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x
00:03:44.040 ************************************
00:03:44.040 END TEST acl
00:03:44.040 ************************************
00:03:44.040 14:59:35 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh
00:03:44.040 14:59:35 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:03:44.040 14:59:35 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable
00:03:44.040 14:59:35 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:03:44.040 ************************************
00:03:44.040 START TEST hugepages
00:03:44.040 ************************************
00:03:44.040 14:59:35 setup.sh.hugepages -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh
00:03:44.040 * Looking for test storage...
00:03:44.040 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:44.040 14:59:35 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:44.040 14:59:35 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:44.040 14:59:35 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:44.040 14:59:35 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:44.040 14:59:35 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:44.040 14:59:35 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:44.040 14:59:35 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:44.040 14:59:35 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:44.040 14:59:35 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:44.040 14:59:35 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:44.040 14:59:35 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:44.040 14:59:35 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:44.040 14:59:35 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:44.040 14:59:35 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:44.040 14:59:35 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:44.040 14:59:35 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 102345956 kB' 'MemAvailable: 106063596 kB' 'Buffers: 2704 kB' 'Cached: 14775964 kB' 'SwapCached: 0 kB' 'Active: 11621764 kB' 'Inactive: 3693560 kB' 'Active(anon): 11141964 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3693560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 
540036 kB' 'Mapped: 199776 kB' 'Shmem: 10605308 kB' 'KReclaimable: 585224 kB' 'Slab: 1471144 kB' 'SReclaimable: 585224 kB' 'SUnreclaim: 885920 kB' 'KernelStack: 27248 kB' 'PageTables: 8936 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69460872 kB' 'Committed_AS: 12717792 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235864 kB' 'VmallocChunk: 0 kB' 'Percpu: 156096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 4654452 kB' 'DirectMap2M: 29628416 kB' 'DirectMap1G: 101711872 kB' 00:03:44.040 14:59:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:44.040 14:59:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.040 14:59:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.040 14:59:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:44.040 14:59:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:44.040 14:59:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.040 14:59:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.040 14:59:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:44.040 14:59:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:44.040 14:59:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.040 14:59:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.040 14:59:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:44.040 14:59:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:44.040 14:59:35 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.040 14:59:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.040 14:59:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:44.040 14:59:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:44.040 14:59:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.040 14:59:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.040 14:59:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:44.040 14:59:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:44.040 14:59:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.040 14:59:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.040 14:59:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:44.040 14:59:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:44.040 14:59:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.040 14:59:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.040 14:59:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:44.040 14:59:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:44.040 14:59:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.040 14:59:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.040 14:59:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:44.040 14:59:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:44.040 14:59:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.040 14:59:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.040 14:59:35 setup.sh.hugepages -- setup/common.sh@32 -- # 
continue 00:03:44.040 14:59:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:44.040 14:59:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.040 14:59:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.040 14:59:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:44.040 14:59:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:44.040 14:59:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.040 14:59:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.040 14:59:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:44.040 14:59:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:44.040 14:59:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.040 14:59:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.040 14:59:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:44.040 14:59:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:44.040 14:59:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.040 14:59:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.040 14:59:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:44.040 14:59:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:44.040 14:59:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.040 14:59:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.040 14:59:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:44.040 14:59:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:44.040 14:59:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.040 14:59:35 setup.sh.hugepages -- 
setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.040 14:59:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:44.040 14:59:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:44.040 14:59:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.040 14:59:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.040 14:59:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:44.040 14:59:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:44.041 14:59:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.041 14:59:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.041 14:59:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:44.041 14:59:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:44.041 14:59:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.041 14:59:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.041 14:59:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:44.041 14:59:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:44.041 14:59:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.041 14:59:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.041 14:59:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:44.041 14:59:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:44.041 14:59:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.041 14:59:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.041 14:59:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:44.041 14:59:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:44.041 14:59:35 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.041 14:59:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.041 14:59:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:44.041 14:59:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:44.041 14:59:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.041 14:59:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.041 14:59:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:44.041 14:59:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:44.041 14:59:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.041 14:59:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.041 14:59:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:44.041 14:59:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:44.041 14:59:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.041 14:59:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.041 14:59:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:44.041 14:59:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:44.041 14:59:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.041 14:59:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.041 14:59:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:44.041 14:59:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:44.041 14:59:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.041 14:59:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.041 14:59:35 setup.sh.hugepages -- setup/common.sh@32 -- # 
continue 00:03:44.041 14:59:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:44.041 14:59:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.041 14:59:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.041 14:59:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:44.041 14:59:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:44.041 14:59:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.041 14:59:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.041 14:59:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:44.041 14:59:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:44.041 14:59:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.041 14:59:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.041 14:59:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:44.041 14:59:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:44.041 14:59:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.041 14:59:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.041 14:59:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:44.041 14:59:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:44.041 14:59:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.041 14:59:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.041 14:59:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:44.041 14:59:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:44.041 14:59:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.041 14:59:35 setup.sh.hugepages -- 
setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.041 14:59:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:44.041 14:59:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:44.041 14:59:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.041 14:59:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.041 14:59:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:44.041 14:59:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:44.041 14:59:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.041 14:59:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.041 14:59:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:44.041 14:59:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:44.041 14:59:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.041 14:59:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.041 14:59:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:44.041 14:59:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:44.041 14:59:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.041 14:59:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.041 14:59:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:44.041 14:59:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:44.041 14:59:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.041 14:59:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.041 14:59:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:44.041 14:59:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 
00:03:44.041 14:59:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.041 14:59:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.041 14:59:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:44.041 14:59:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:44.041 14:59:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.041 14:59:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.041 14:59:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:44.041 14:59:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:44.041 14:59:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.041 14:59:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.041 14:59:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:44.041 14:59:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:44.041 14:59:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.041 14:59:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.041 14:59:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:44.041 14:59:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:44.041 14:59:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.041 14:59:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.041 14:59:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:44.041 14:59:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:44.041 14:59:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.041 14:59:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.041 
14:59:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:44.041 14:59:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:44.041 14:59:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.041 14:59:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.041 14:59:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:44.041 14:59:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:44.041 14:59:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.041 14:59:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.041 14:59:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:44.041 14:59:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:44.041 14:59:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.041 14:59:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.041 14:59:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:44.041 14:59:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:44.041 14:59:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.041 14:59:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.041 14:59:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:44.041 14:59:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:44.041 14:59:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.041 14:59:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.041 14:59:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:44.041 14:59:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:44.041 14:59:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 
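The xtrace above is setup/common.sh's get_meminfo walking /proc/meminfo field by field, hitting `continue` for every non-matching key until it reaches Hugepagesize. A minimal standalone sketch of that parsing loop (an assumed simplification of the real helper; it reads stdin instead of hardcoding the file so it can be exercised against canned input):

```shell
# Sketch of the get_meminfo loop traced above (assumed simplification):
# split each "Field: value unit" line on ':' and ' ', skip non-matching
# fields with continue, and print the value of the requested field.
get_meminfo_sketch() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done
    return 1
}

# e.g. on this rig: get_meminfo_sketch Hugepagesize < /proc/meminfo  # 2048
```

The `IFS=': '` plus `read -r var val _` split is exactly what the trace repeats once per meminfo field; the trailing `_` swallows the unit column (`kB`).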
00:03:44.041 14:59:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.041 14:59:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:44.041 14:59:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:44.042 14:59:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.042 14:59:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.042 14:59:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:44.042 14:59:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:44.042 14:59:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.042 14:59:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.042 14:59:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:44.042 14:59:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:44.042 14:59:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.042 14:59:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.042 14:59:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:44.042 14:59:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:44.042 14:59:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.042 14:59:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.042 14:59:35 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:03:44.042 14:59:35 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:03:44.042 14:59:35 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:44.042 14:59:35 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:44.042 14:59:35 setup.sh.hugepages -- 
setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:44.042 14:59:35 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:44.042 14:59:35 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:44.042 14:59:35 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:44.042 14:59:35 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:44.042 14:59:35 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:03:44.042 14:59:35 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:03:44.042 14:59:35 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:44.042 14:59:35 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:44.042 14:59:35 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:44.042 14:59:35 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:44.042 14:59:35 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:44.042 14:59:35 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:44.042 14:59:35 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:03:44.042 14:59:35 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:44.042 14:59:35 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:44.042 14:59:35 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:44.042 14:59:35 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:44.042 14:59:35 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:44.042 14:59:35 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:44.042 14:59:36 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in 
"${!nodes_sys[@]}" 00:03:44.042 14:59:36 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:44.042 14:59:36 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:44.042 14:59:36 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:44.042 14:59:36 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:44.042 14:59:36 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:44.042 14:59:36 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:44.042 14:59:36 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:44.042 14:59:36 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:44.042 14:59:36 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:44.042 14:59:36 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:44.042 ************************************ 00:03:44.042 START TEST default_setup 00:03:44.042 ************************************ 00:03:44.042 14:59:36 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1125 -- # default_setup 00:03:44.042 14:59:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:44.042 14:59:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:03:44.042 14:59:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:44.042 14:59:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:03:44.042 14:59:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:44.042 14:59:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:03:44.042 14:59:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 
00:03:44.042 14:59:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:44.042 14:59:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:44.042 14:59:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:44.042 14:59:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:03:44.042 14:59:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:44.042 14:59:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:44.042 14:59:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:44.042 14:59:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:44.042 14:59:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:44.042 14:59:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:44.042 14:59:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:44.042 14:59:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:03:44.042 14:59:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:03:44.042 14:59:36 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:03:44.042 14:59:36 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:47.346 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:47.346 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:47.346 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:47.346 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:47.346 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:47.346 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:47.346 0000:80:01.0 
(8086 0b00): ioatdma -> vfio-pci 00:03:47.346 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:47.346 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:47.346 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:47.346 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:47.346 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:47.346 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:47.346 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:47.346 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:47.346 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:47.346 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:03:47.924 14:59:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:47.924 14:59:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:03:47.924 14:59:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:03:47.924 14:59:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:03:47.924 14:59:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:03:47.924 14:59:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:03:47.924 14:59:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:03:47.924 14:59:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:47.924 14:59:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:47.924 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:47.924 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:47.924 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:47.924 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:47.924 14:59:39 
setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.924 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:47.924 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:47.924 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.924 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.924 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.924 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.924 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 104557800 kB' 'MemAvailable: 108275440 kB' 'Buffers: 2704 kB' 'Cached: 14776084 kB' 'SwapCached: 0 kB' 'Active: 11638960 kB' 'Inactive: 3693560 kB' 'Active(anon): 11159160 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3693560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 557100 kB' 'Mapped: 200012 kB' 'Shmem: 10605428 kB' 'KReclaimable: 585224 kB' 'Slab: 1469280 kB' 'SReclaimable: 585224 kB' 'SUnreclaim: 884056 kB' 'KernelStack: 27184 kB' 'PageTables: 8868 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 12735156 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235720 kB' 'VmallocChunk: 0 kB' 'Percpu: 156096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 
kB' 'DirectMap4k: 4654452 kB' 'DirectMap2M: 29628416 kB' 'DirectMap1G: 101711872 kB' 00:03:47.924 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.924 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.924 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.925 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.925 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.925 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.925 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.925 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.925 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.925 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.925 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.925 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.925 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.925 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.925 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.925 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.925 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.925 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.925 14:59:39 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:03:47.925 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.925 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.925 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.925 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.925 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.925 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.925 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.925 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.925 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.925 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.925 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.925 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.925 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.925 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.925 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.925 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.925 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.925 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.925 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
00:03:47.925 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.925 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.925 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.925 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.925 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.925 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.925 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.925 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.925 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.925 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.925 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.925 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.925 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.925 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.925 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.925 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.925 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.925 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.925 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.925 14:59:39 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.925 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.925 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.925 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.925 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.925 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.925 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.925 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.925 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.925 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.925 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.925 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.925 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.925 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.925 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.925 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.925 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.925 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.925 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.925 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.925 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.925 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.925 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.925 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.925 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.925 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.925 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.925 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.925 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.925 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.925 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.925 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.925 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.925 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.925 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.925 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.925 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.925 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.925 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.925 14:59:39 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.925 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.925 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.925 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.925 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.925 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.925 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.925 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.925 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.925 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.925 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.925 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.925 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.925 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.925 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.926 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.926 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.926 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.926 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.926 14:59:39 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:03:47.926 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.926 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.926 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.926 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.926 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.926 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.926 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.926 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.926 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.926 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.926 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.926 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.926 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.926 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.926 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.926 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.926 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.926 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.926 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': 
' 00:03:47.926 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.926 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.926 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.926 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.926 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.926 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.926 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.926 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.926 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.926 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.926 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.926 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.926 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.926 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.926 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.926 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.926 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.926 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.926 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.926 14:59:39 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.926 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.926 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.926 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.926 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.926 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.926 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.926 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:47.926 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:47.926 14:59:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:03:47.926 14:59:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:47.926 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:47.926 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:47.926 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:47.926 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:47.926 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.926 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:47.926 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:47.926 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.926 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- 
# mem=("${mem[@]#Node +([0-9]) }") 00:03:47.926 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.926 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.926 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 104559816 kB' 'MemAvailable: 108277456 kB' 'Buffers: 2704 kB' 'Cached: 14776088 kB' 'SwapCached: 0 kB' 'Active: 11638796 kB' 'Inactive: 3693560 kB' 'Active(anon): 11158996 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3693560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 556892 kB' 'Mapped: 199952 kB' 'Shmem: 10605432 kB' 'KReclaimable: 585224 kB' 'Slab: 1469280 kB' 'SReclaimable: 585224 kB' 'SUnreclaim: 884056 kB' 'KernelStack: 27152 kB' 'PageTables: 8760 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 12735312 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235688 kB' 'VmallocChunk: 0 kB' 'Percpu: 156096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4654452 kB' 'DirectMap2M: 29628416 kB' 'DirectMap1G: 101711872 kB' 00:03:47.926 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.926 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.926 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.926 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # 
read -r var val _ 00:03:47.926 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.926 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.926 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.926 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.926 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.926 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.926 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.926 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.926 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.926 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.926 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.926 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.926 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.926 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.926 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.926 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.926 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.926 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.926 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.927 14:59:39 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.927 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.927 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.927 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.927 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.927 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.927 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.927 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.927 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.927 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.927 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.927 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.927 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.927 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.927 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.927 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.927 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.927 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.927 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.927 14:59:39 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.927 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.927 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.927 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.927 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.927 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.927 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.927 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.927 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.927 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.927 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.927 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.927 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.927 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.927 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.927 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.927 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.927 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.927 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.927 14:59:39 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.927 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.927 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.927 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.927 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.927 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.927 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.927 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.927 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.927 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.927 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.927 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.927 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.927 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.927 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.927 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.927 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.927 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.927 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.927 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.927 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.927 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.927 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.927 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.927 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.927 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.927 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.927 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.927 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.927 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.927 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.927 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.927 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.927 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.927 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.927 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.927 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.927 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.927 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.927 14:59:39 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.927 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.927 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.927 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.927 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.927 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.927 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.927 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.927 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.927 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.927 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.927 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.927 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.927 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.927 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.927 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.927 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.927 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.927 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.927 14:59:39 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.927 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.927 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.928 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.928 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.928 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.928 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.928 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.928 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.928 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.928 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.928 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.928 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.928 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.928 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.928 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.928 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.928 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.928 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.928 14:59:39 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.928 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.928 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.928 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.928 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.928 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.928 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.928 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.928 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.928 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.928 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.928 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.928 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.928 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.928 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.928 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.928 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.928 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.928 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.928 14:59:39 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.928 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.928 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.928 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.928 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.928 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.928 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.928 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.928 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.928 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.928 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.928 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.928 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.928 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.928 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.928 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.928 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.928 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.928 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.928 14:59:39 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.928 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.928 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.928 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.928 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.928 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.928 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.928 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.928 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.928 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.928 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.928 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.928 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.928 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.928 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.928 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.928 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.928 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.928 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.928 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r 
var val _ 00:03:47.928 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.928 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.928 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.928 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.928 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.928 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.928 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.928 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.928 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.928 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:47.928 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:47.928 14:59:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:03:47.928 14:59:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:47.928 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:47.928 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:47.928 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:47.928 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:47.928 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.928 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 
00:03:47.928 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:47.928 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.928 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.928 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.928 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.929 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 104560112 kB' 'MemAvailable: 108277752 kB' 'Buffers: 2704 kB' 'Cached: 14776104 kB' 'SwapCached: 0 kB' 'Active: 11638776 kB' 'Inactive: 3693560 kB' 'Active(anon): 11158976 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3693560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 556904 kB' 'Mapped: 199952 kB' 'Shmem: 10605448 kB' 'KReclaimable: 585224 kB' 'Slab: 1469276 kB' 'SReclaimable: 585224 kB' 'SUnreclaim: 884052 kB' 'KernelStack: 27168 kB' 'PageTables: 8828 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 12735336 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235688 kB' 'VmallocChunk: 0 kB' 'Percpu: 156096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4654452 kB' 'DirectMap2M: 29628416 kB' 'DirectMap1G: 101711872 kB' 00:03:47.929 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d 
]] 00:03:47.929 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.929 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.929 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.929 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.929 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.929 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.929 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.929 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.929 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.929 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.929 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.929 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.929 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.929 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.929 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.929 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.929 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.929 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.929 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.929 14:59:39 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.929 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.929 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.929 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.929 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.929 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.929 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.929 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.929 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.929 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.929 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.929 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.929 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.929 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.929 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.929 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.929 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.929 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.929 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.929 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r 
var val _ 00:03:47.929 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.929 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.929 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.929 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.929 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.929 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.929 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.929 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.929 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.929 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.929 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.929 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.929 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.929 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.929 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.929 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.929 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.929 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.929 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.929 14:59:39 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.929 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.929 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.929 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.929 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.929 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.929 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.929 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.929 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.929 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.929 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.929 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.929 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.929 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.929 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.929 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.929 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.929 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.929 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.929 14:59:39 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:03:47.929 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.929 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.929 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.929 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.929 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.929 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.929 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.929 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.929 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.929 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.930 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.930 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.930 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.930 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.930 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.930 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.930 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.930 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.930 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
00:03:47.930 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.930 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.930 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.930 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.930 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.930 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.930 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.930 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.930 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.930 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.931 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.931 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.931 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.931 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.931 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.931 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.931 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.931 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.931 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.931 
14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.931 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.931 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.931 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.931 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.931 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.931 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.931 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.931 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.931 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.931 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.931 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.931 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.931 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.931 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.931 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.931 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.931 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.931 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.931 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 
-- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.931 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.931 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.931 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.931 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.931 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.931 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.931 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.931 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.931 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.931 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.931 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.931 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.931 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.931 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.931 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.931 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.931 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.931 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.931 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
00:03:47.931 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.931 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.931 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.931 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.931 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.931 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.931 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.931 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.931 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.931 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.931 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.931 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.931 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.931 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.931 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.931 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.931 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.931 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.931 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.931 
14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.931 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.931 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.931 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.931 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.931 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.931 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.931 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.931 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.931 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.931 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.931 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.931 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.931 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.931 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.931 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.931 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.931 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.931 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.931 14:59:39 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.931 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.931 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.932 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.932 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.932 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.932 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.932 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:47.932 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:47.932 14:59:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:03:47.932 14:59:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:47.932 nr_hugepages=1024 00:03:47.932 14:59:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:47.932 resv_hugepages=0 00:03:47.932 14:59:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:47.932 surplus_hugepages=0 00:03:47.932 14:59:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:47.932 anon_hugepages=0 00:03:47.932 14:59:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:47.932 14:59:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:47.932 14:59:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:47.932 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local 
get=HugePages_Total 00:03:47.932 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:47.932 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:47.932 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:47.932 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.932 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:47.932 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:47.932 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.932 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.932 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.932 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.932 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 104559588 kB' 'MemAvailable: 108277228 kB' 'Buffers: 2704 kB' 'Cached: 14776144 kB' 'SwapCached: 0 kB' 'Active: 11639096 kB' 'Inactive: 3693560 kB' 'Active(anon): 11159296 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3693560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 557228 kB' 'Mapped: 199952 kB' 'Shmem: 10605488 kB' 'KReclaimable: 585224 kB' 'Slab: 1469276 kB' 'SReclaimable: 585224 kB' 'SUnreclaim: 884052 kB' 'KernelStack: 27184 kB' 'PageTables: 8872 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 12735728 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235688 kB' 'VmallocChunk: 0 kB' 'Percpu: 
156096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4654452 kB' 'DirectMap2M: 29628416 kB' 'DirectMap1G: 101711872 kB' 00:03:47.932 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.932 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.932 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.932 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.932 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.932 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.932 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.932 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.932 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.932 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.932 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.932 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.932 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.932 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.932 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.932 14:59:39 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.932 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.932 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.932 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.932 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.932 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.932 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.932 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.932 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.932 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.932 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.932 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.932 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.932 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.932 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.932 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.932 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.932 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.932 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.932 14:59:39 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.932 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.932 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.932 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.932 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.932 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.932 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.932 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.932 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.932 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.932 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.932 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.932 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.932 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.932 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.932 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.932 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.932 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.932 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.932 14:59:39 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.932 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.932 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.932 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.932 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.932 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.932 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.932 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.933 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.933 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.933 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.933 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.933 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.933 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.933 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.933 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.933 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.933 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.933 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.933 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.933 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.933 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.933 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.933 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.933 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.933 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.933 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.933 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.933 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.933 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.933 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.933 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.933 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.933 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.933 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.933 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.933 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.933 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.933 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.933 14:59:39 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.933 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.933 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.933 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.933 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.933 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.933 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.933 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.933 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.933 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.933 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.933 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.933 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.933 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.933 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.933 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.933 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.933 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.933 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.933 14:59:39 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.933 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.933 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.933 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.933 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.933 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.933 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.933 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.933 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.933 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.933 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.933 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.933 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.933 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.933 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.933 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.933 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.933 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.933 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.933 14:59:39 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.933 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.933 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.933 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.933 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.933 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.933 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.933 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.933 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.933 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.933 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.933 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.933 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.933 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.933 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.933 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.933 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.933 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.933 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.933 14:59:39 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.933 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.933 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.933 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.933 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.933 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.933 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.934 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.934 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.934 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.934 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.934 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.934 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.934 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.934 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.934 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.934 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.934 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.934 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.934 14:59:39 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.934 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.934 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.934 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.934 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.934 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.934 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.934 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.934 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.934 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.934 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.934 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.934 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.934 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.934 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.934 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.934 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.934 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.934 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.934 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 
-- # read -r var val _ 00:03:47.934 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.934 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.934 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.934 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.934 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.934 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:03:47.934 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:47.934 14:59:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:47.934 14:59:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:03:47.934 14:59:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:03:47.934 14:59:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:47.934 14:59:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:47.934 14:59:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:47.934 14:59:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:47.934 14:59:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:47.934 14:59:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:47.934 14:59:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:47.934 14:59:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( 
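The trace above is `setup/common.sh`'s `get_meminfo` scanning every `/proc/meminfo` field and `continue`-ing until it reaches `HugePages_Total`, then echoing its value (`1024`). A minimal sketch of that scanning logic — a hypothetical simplification, not the real helper, which uses `mapfile` plus an extglob strip of the `Node N ` prefix, and which takes no file argument — looks like this:

```shell
# Hedged sketch of the get_meminfo field scan traced above.
# Args: field name, optional NUMA node, optional file override (the file
# argument is an addition here so the sketch is testable; the real helper
# hardcodes /proc/meminfo or the per-node sysfs path).
get_meminfo() {
    local get=$1 node=${2:-} mem_f=${3:-/proc/meminfo} line var val _
    # Per-node counters live in /sys/devices/system/node/nodeN/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    while IFS= read -r line; do
        line=${line#Node "$node" }            # node files prefix each line
        IFS=': ' read -r var val _ <<<"$line" # split "Field: value kB"
        if [[ $var == "$get" ]]; then         # the loop "continue"s past
            printf '%s\n' "$val"              # every non-matching field
            return 0
        fi
    done <"$mem_f"
    return 1
}
```

In the log, the same loop runs once against `/proc/meminfo` (no node argument) and once per node against the sysfs file, which is why every field name appears twice in the trace.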
nodes_test[node] += resv )) 00:03:47.934 14:59:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:47.934 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:47.934 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:03:47.934 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:47.934 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:47.934 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.934 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:47.934 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:47.934 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.934 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.934 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.934 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.934 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 58140132 kB' 'MemUsed: 7518876 kB' 'SwapCached: 0 kB' 'Active: 2533240 kB' 'Inactive: 237284 kB' 'Active(anon): 2293816 kB' 'Inactive(anon): 0 kB' 'Active(file): 239424 kB' 'Inactive(file): 237284 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2543668 kB' 'Mapped: 85784 kB' 'AnonPages: 230096 kB' 'Shmem: 2066960 kB' 'KernelStack: 15496 kB' 'PageTables: 5380 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 271004 kB' 'Slab: 788236 kB' 'SReclaimable: 271004 kB' 'SUnreclaim: 
517232 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:47.934 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.934 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.934 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.934 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.934 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.934 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.934 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.934 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.934 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.934 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.934 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.934 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.934 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.934 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.934 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.934 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.934 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.934 
14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.934 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.934 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.934 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.934 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.934 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.934 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.934 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.934 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.934 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.935 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.935 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.935 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.935 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.935 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.935 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.935 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.935 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.935 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.935 14:59:39 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.935 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.935 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.935 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.935 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.935 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.935 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.935 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.935 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.935 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.935 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.935 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.935 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.935 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.935 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.935 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.935 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.935 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.935 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.935 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var 
val _ 00:03:47.935 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.935 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.935 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.935 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.935 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.935 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.935 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.935 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.935 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.935 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.935 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.935 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.935 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.935 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.935 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.935 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.935 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.935 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.935 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.935 14:59:39 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.935 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.935 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.935 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.935 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.935 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.935 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.935 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.935 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.935 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.935 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.935 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.935 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.935 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.935 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.935 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.935 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.935 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.935 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.935 14:59:39 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.935 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.935 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.935 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.935 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.935 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.935 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.935 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.935 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.935 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.935 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.935 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.935 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.935 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.935 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.935 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.935 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.935 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.935 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.935 14:59:39 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.935 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.935 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.935 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.935 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.935 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.935 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.935 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.935 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.935 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.935 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.935 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.935 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.936 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.936 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.936 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.936 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.936 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.936 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.936 14:59:39 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.936 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.936 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.936 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.936 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.936 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.936 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.936 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.936 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.936 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.936 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.936 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.936 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.936 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:47.936 14:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:47.936 14:59:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:47.936 14:59:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:47.936 14:59:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:47.936 14:59:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:47.936 14:59:39 
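The long trace above is `setup/common.sh`'s `get_meminfo` scanning every `/proc/meminfo` field until it hits the requested one (`HugePages_Surp`), then emitting the value with `echo` and `return 0`. A minimal sketch of that pattern, assuming the function name and fallback-to-0 behavior mirror `setup/common.sh` (the here-string sample below is a stand-in for `/proc/meminfo`, not real system data):

```shell
#!/usr/bin/env bash
# Sketch of the get_meminfo loop traced above: split each line on ':' and
# whitespace, print the value of the matching field, default to 0 otherwise.
get_meminfo() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            # Matching field found: emit its value, caller captures it.
            echo "$val"
            return 0
        fi
    done
    # Field absent: report 0, as the traced script does via 'echo 0'.
    echo 0
    return 0
}

# Usage: capture the function's stdout, feeding a small meminfo-style sample.
surp=$(get_meminfo HugePages_Surp <<'EOF'
HugePages_Total:    1024
HugePages_Free:     1024
HugePages_Surp:        0
EOF
)
echo "$surp"
```

In the real script the caller does `surp=$(get_meminfo HugePages_Surp)` against `/proc/meminfo` or a per-node `meminfo`, which is why the trace shows one `continue` per non-matching field.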
setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:47.936 node0=1024 expecting 1024 00:03:47.936 14:59:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:47.936 00:03:47.936 real 0m3.942s 00:03:47.936 user 0m1.520s 00:03:47.936 sys 0m2.412s 00:03:47.936 14:59:39 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:47.936 14:59:39 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:03:47.936 ************************************ 00:03:47.936 END TEST default_setup 00:03:47.936 ************************************ 00:03:47.936 14:59:40 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:47.936 14:59:40 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:47.936 14:59:40 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:47.936 14:59:40 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:47.936 ************************************ 00:03:47.936 START TEST per_node_1G_alloc 00:03:47.936 ************************************ 00:03:47.936 14:59:40 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1125 -- # per_node_1G_alloc 00:03:47.936 14:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:03:47.936 14:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:03:47.936 14:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:47.936 14:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:03:47.936 14:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:03:47.936 14:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:03:47.936 14:59:40 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:47.936 14:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:47.936 14:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:47.936 14:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:03:47.936 14:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:03:47.936 14:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:47.936 14:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:47.936 14:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:47.936 14:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:47.936 14:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:47.936 14:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:03:47.936 14:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:47.936 14:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:47.936 14:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:47.936 14:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:47.936 14:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:47.936 14:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:03:47.936 14:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:03:47.936 14:59:40 
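The `get_test_nr_hugepages_per_node` trace above assigns `nodes_test[node]=512` once per entry in `user_nodes`, so `NRHUGE=512` with `HUGENODE=0,1` means 512 pages on each of nodes 0 and 1. A sketch under those assumptions (variable names follow `setup/hugepages.sh`; the loop variable is renamed `node` for clarity, where the traced script reuses `_no_nodes`):

```shell
#!/usr/bin/env bash
# Sketch of the per-node hugepage assignment traced above: give each
# user-requested NUMA node the same hugepage count.
declare -A nodes_test
user_nodes=(0 1)     # parsed from HUGENODE=0,1
_nr_hugepages=512    # from NRHUGE=512

for node in "${user_nodes[@]}"; do
    nodes_test[$node]=$_nr_hugepages
done

echo "node0=${nodes_test[0]} node1=${nodes_test[1]}"
```

This matches the `node0=1024 expecting 1024`-style check later in the log: the test sums what it configured per node and compares it against what the kernel actually reports.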
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:03:47.936 14:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:47.936 14:59:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:51.244 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:51.244 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:51.244 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:51.244 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:51.244 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:51.244 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:51.244 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:51.244 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:51.244 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:51.244 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:51.244 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:51.244 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:51.244 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:51.244 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:51.244 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:51.244 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:51.244 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:51.505 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:03:51.505 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:03:51.505 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:51.505 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local 
sorted_t 00:03:51.505 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:51.505 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:51.505 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:51.506 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:51.506 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:51.506 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:51.506 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:51.506 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:51.506 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:51.506 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:51.506 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:51.506 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:51.506 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:51.506 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:51.506 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:51.506 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.506 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.506 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 104598008 kB' 'MemAvailable: 
108315648 kB' 'Buffers: 2704 kB' 'Cached: 14776260 kB' 'SwapCached: 0 kB' 'Active: 11637652 kB' 'Inactive: 3693560 kB' 'Active(anon): 11157852 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3693560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 555552 kB' 'Mapped: 198936 kB' 'Shmem: 10605604 kB' 'KReclaimable: 585224 kB' 'Slab: 1468808 kB' 'SReclaimable: 585224 kB' 'SUnreclaim: 883584 kB' 'KernelStack: 27344 kB' 'PageTables: 9348 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 12727728 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 236216 kB' 'VmallocChunk: 0 kB' 'Percpu: 156096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4654452 kB' 'DirectMap2M: 29628416 kB' 'DirectMap1G: 101711872 kB' 00:03:51.506 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.506 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.506 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.506 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.506 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.506 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.506 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.506 14:59:43 
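The `mapfile -t mem` plus `mem=("${mem[@]#Node +([0-9]) }")` step at `setup/common.sh@28-29` above slurps the meminfo dump into an array and strips any leading `Node N ` prefix, so per-node files (`/sys/devices/system/node/nodeN/meminfo`) and the global `/proc/meminfo` parse through the same loop. A sketch of that extglob trick, using two hard-coded sample lines in place of the real `mapfile` read:

```shell
#!/usr/bin/env bash
# Sketch of the extglob prefix-strip at setup/common.sh@29: per-node meminfo
# lines start with "Node N ", which this removes from every array element.
shopt -s extglob   # enable +([0-9]) extended pattern before it is parsed

# Stand-in for: mapfile -t mem < /sys/devices/system/node/node0/meminfo
mem=('Node 0 MemTotal: 126338844 kB' 'Node 0 MemFree: 104598008 kB')

# ${var#pattern} removes the shortest matching prefix from each element;
# +([0-9]) matches one or more digits, i.e. the node number.
mem=("${mem[@]#Node +([0-9]) }")

printf '%s\n' "${mem[@]}"
```

After the strip, `"HugePages_Surp: 0"` looks identical whether it came from a node file or from `/proc/meminfo`, which is why the same `IFS=': ' read` loop handles both in the trace.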
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.506 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.506 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.506 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.506 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.506 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.506 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.506 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.506 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.506 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.506 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.506 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.506 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.506 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.506 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.506 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.506 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.506 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.506 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:03:51.506 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.506 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.506 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.506 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.506 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.506 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.506 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.506 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.506 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.506 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.506 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.506 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.506 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.506 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.506 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.506 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.506 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.506 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.506 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.506 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.506 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.506 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.506 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.506 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.506 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.506 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.506 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.506 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.506 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.506 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.506 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.506 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.506 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.773 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.773 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.773 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.773 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.773 14:59:43 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.773 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.773 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.773 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.773 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.774 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.774 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.774 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.774 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.774 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.774 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.774 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.774 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.774 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.774 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.774 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.774 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.774 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.774 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:03:51.774 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.774 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.774 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.774 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.774 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.774 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.774 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.774 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.774 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.774 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.774 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.774 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.774 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.774 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.774 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.774 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.774 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.774 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.774 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.774 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.774 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.774 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.774 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.774 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.774 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.774 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.774 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.774 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.774 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.774 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.774 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.774 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.774 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.774 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.774 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.774 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.774 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.774 14:59:43 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.774 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.774 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.774 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.774 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.774 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.774 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.774 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.774 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.774 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.774 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.774 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.774 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.774 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.774 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.774 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.774 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.774 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.774 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:03:51.774 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.775 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.775 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.775 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.775 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.775 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.775 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.775 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.775 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.775 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.775 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.775 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.775 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.775 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.775 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.775 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.775 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.775 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.775 14:59:43 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:51.775 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
00:03:51.775 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:51.775 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:51.775 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:51.775 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:51.775 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:51.775 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0
00:03:51.775 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:51.775 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:51.775 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:03:51.775 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:51.775 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:51.775 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:51.775 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:51.775 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:51.775 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:51.775 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:51.775 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:51.775 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:51.775 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 104601032 kB' 'MemAvailable: 108318672 kB' 'Buffers: 2704 kB' 'Cached: 14776268 kB' 'SwapCached: 0 kB' 'Active: 11638340 kB' 'Inactive: 3693560 kB' 'Active(anon): 11158540 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3693560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 556248 kB' 'Mapped: 198892 kB' 'Shmem: 10605612 kB' 'KReclaimable: 585224 kB' 'Slab: 1468808 kB' 'SReclaimable: 585224 kB' 'SUnreclaim: 883584 kB' 'KernelStack: 27488 kB' 'PageTables: 9544 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 12727748 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 236136 kB' 'VmallocChunk: 0 kB' 'Percpu: 156096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4654452 kB' 'DirectMap2M: 29628416 kB' 'DirectMap1G: 101711872 kB'
00:03:51.775 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:51.775 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
[... identical "[[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]" / "continue" / "IFS=': '" / "read -r var val _" trace repeated for each remaining /proc/meminfo field ...]
00:03:51.777 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:51.777 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:51.777 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:51.777 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0
00:03:51.777 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:51.777 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:51.777 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:03:51.777 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:51.777 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:51.777 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:51.777 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:51.777 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:51.777 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:51.777 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:51.777 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:51.777 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:51.777 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 104600920 kB' 'MemAvailable: 108318560 kB' 'Buffers: 2704 kB' 'Cached: 14776280 kB' 'SwapCached: 0 kB' 'Active: 11639348 kB' 'Inactive: 3693560 kB' 'Active(anon): 11159548 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3693560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 557228 kB' 'Mapped: 199288 kB' 'Shmem: 10605624 kB' 'KReclaimable: 585224 kB' 'Slab: 1468560 kB' 'SReclaimable: 585224 kB' 'SUnreclaim: 883336 kB' 'KernelStack: 27408 kB' 'PageTables: 9268 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 12730052 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 236136 kB' 'VmallocChunk: 0 kB' 'Percpu: 156096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4654452 kB' 'DirectMap2M: 29628416 kB' 'DirectMap1G: 101711872 kB'
00:03:51.777 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:51.777 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
[... identical "[[ <field> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]" / "continue" trace repeated field by field; log truncated mid-loop ...]
Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.779 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.779 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.779 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.779 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.779 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.779 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.779 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.779 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.779 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.779 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.779 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.779 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.779 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.779 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.779 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.779 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.779 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.779 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.779 14:59:43 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.779 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.779 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.779 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.779 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.779 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.779 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.779 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.779 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.779 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.779 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.779 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.779 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.779 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.779 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.779 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.779 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.779 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.779 14:59:43 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.779 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.779 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.779 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.779 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.779 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.779 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.779 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.779 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.779 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.779 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.779 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.779 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.779 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.779 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.779 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.779 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.779 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.779 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:03:51.780 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.780 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.780 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.780 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.780 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.780 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.780 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.780 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.780 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.780 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.780 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.780 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.780 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.780 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.780 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.780 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.780 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.780 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.780 14:59:43 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.780 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.780 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.780 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:51.780 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:51.780 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:51.780 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:51.780 nr_hugepages=1024 00:03:51.780 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:51.780 resv_hugepages=0 00:03:51.780 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:51.780 surplus_hugepages=0 00:03:51.780 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:51.780 anon_hugepages=0 00:03:51.780 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:51.780 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:51.780 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:51.780 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:51.780 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:51.780 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:51.780 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:51.780 14:59:43 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:51.780 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:51.780 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:51.780 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:51.780 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:51.780 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.780 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.780 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 104594020 kB' 'MemAvailable: 108311660 kB' 'Buffers: 2704 kB' 'Cached: 14776304 kB' 'SwapCached: 0 kB' 'Active: 11642436 kB' 'Inactive: 3693560 kB' 'Active(anon): 11162636 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3693560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 560324 kB' 'Mapped: 199296 kB' 'Shmem: 10605648 kB' 'KReclaimable: 585224 kB' 'Slab: 1468516 kB' 'SReclaimable: 585224 kB' 'SUnreclaim: 883292 kB' 'KernelStack: 27456 kB' 'PageTables: 9348 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 12733912 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 236092 kB' 'VmallocChunk: 0 kB' 'Percpu: 156096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 
'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4654452 kB' 'DirectMap2M: 29628416 kB' 'DirectMap1G: 101711872 kB' 00:03:51.780 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.780 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.780 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.780 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.780 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.780 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.780 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.780 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.780 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.780 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.780 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.780 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.780 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.780 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.780 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.780 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.780 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.780 14:59:43 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.780 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.780 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.780 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.780 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.780 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.780 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.780 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.780 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.780 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.780 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.780 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.780 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.780 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.780 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.780 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.780 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.780 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.780 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:51.780 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.780 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.780 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.780 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.780 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.780 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.780 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.780 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.780 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.780 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.780 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.781 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.781 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.781 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.781 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.781 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.781 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.781 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.781 14:59:43 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.781 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.781 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.781 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.781 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.781 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.781 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.781 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.781 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.781 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.781 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.781 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.781 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.781 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.781 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.781 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.781 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.781 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.781 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.781 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.781 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.781 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.781 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.781 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.781 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.781 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.781 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.781 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.781 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.781 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.781 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.781 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.781 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.781 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.781 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.781 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.781 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.781 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:51.781 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.781 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.781 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.781 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.781 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.781 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.781 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.781 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.781 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.781 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.781 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.781 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.781 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.781 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.781 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.781 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.781 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.781 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:03:51.781 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.781 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.781 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.781 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.781 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.781 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.781 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.781 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.781 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.781 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.781 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.781 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.781 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.781 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.781 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.781 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.781 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.782 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.782 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.782 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.782 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.782 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.782 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.782 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.782 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.782 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.782 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.782 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.782 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.782 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.782 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.782 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.782 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.782 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.782 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.782 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.782 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:51.782 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.782 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.782 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.782 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.782 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.782 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.782 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.782 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.782 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.782 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.782 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.782 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.782 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.782 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.782 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.782 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.782 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.782 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.782 14:59:43 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.782 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.782 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.782 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.782 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.782 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.782 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.782 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.782 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.782 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.782 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.782 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.782 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.782 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.782 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.782 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.782 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.782 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.782 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:51.782 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.782 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.782 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.782 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.782 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.782 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.782 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.782 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.782 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.782 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:51.782 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:51.782 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:51.782 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:51.782 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:51.782 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:51.782 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:51.782 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:51.782 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- 
setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:51.782 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:51.782 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:51.782 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:51.782 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:51.782 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:51.782 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:51.782 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:03:51.782 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:51.782 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:51.782 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:51.782 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:51.782 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:51.782 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:51.782 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:51.782 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.782 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.783 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 59205260 kB' 'MemUsed: 6453748 kB' 
'SwapCached: 0 kB' 'Active: 2530924 kB' 'Inactive: 237284 kB' 'Active(anon): 2291500 kB' 'Inactive(anon): 0 kB' 'Active(file): 239424 kB' 'Inactive(file): 237284 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2543740 kB' 'Mapped: 85024 kB' 'AnonPages: 227648 kB' 'Shmem: 2067032 kB' 'KernelStack: 15800 kB' 'PageTables: 5908 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 271004 kB' 'Slab: 788232 kB' 'SReclaimable: 271004 kB' 'SUnreclaim: 517228 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:51.783 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.783 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.783 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.783 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.783 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.783 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.783 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.783 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.783 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.783 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.783 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.783 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:51.783 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.783 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.783 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.783 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.783 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.783 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.783 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.783 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.783 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.783 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.783 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.783 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.783 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.783 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.783 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.783 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.783 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.783 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.783 14:59:43 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.783 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.783 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.783 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.783 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.783 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.783 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.783 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.783 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.783 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.783 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.783 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.783 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.783 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.783 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.783 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.783 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.783 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.783 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.783 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.783 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.783 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.783 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.783 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.783 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.783 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.783 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.783 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.783 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.783 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.783 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.783 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.783 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.783 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.783 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.783 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.783 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.783 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:51.783 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.783 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.783 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.783 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.783 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.783 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.783 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.783 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.783 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.783 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.783 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.783 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.783 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.783 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.783 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.783 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.783 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.783 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:03:51.783 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.783 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.783 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.783 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.783 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.783 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.783 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.783 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.783 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.783 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.784 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.784 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.784 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.784 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.784 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.784 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.784 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.784 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.784 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.784 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.784 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.784 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.784 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.784 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.784 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.784 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.784 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.784 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.784 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.784 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.784 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.784 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.784 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.784 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.784 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.784 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.784 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.784 14:59:43 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.784 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.784 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.784 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.784 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.784 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.784 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.784 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.784 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.784 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.784 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.784 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.784 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.784 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.784 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.784 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.784 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.784 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.784 14:59:43 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.784 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.784 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.784 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.784 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:51.784 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:51.784 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:51.784 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:51.784 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:51.784 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:51.784 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:51.784 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:03:51.784 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:51.784 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:51.784 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:51.784 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:51.784 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:51.784 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:51.784 14:59:43 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:51.784 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.784 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679836 kB' 'MemFree: 45386960 kB' 'MemUsed: 15292876 kB' 'SwapCached: 0 kB' 'Active: 9106816 kB' 'Inactive: 3456276 kB' 'Active(anon): 8866440 kB' 'Inactive(anon): 0 kB' 'Active(file): 240376 kB' 'Inactive(file): 3456276 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 12235308 kB' 'Mapped: 114020 kB' 'AnonPages: 327896 kB' 'Shmem: 8538656 kB' 'KernelStack: 11640 kB' 'PageTables: 3356 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 314220 kB' 'Slab: 680760 kB' 'SReclaimable: 314220 kB' 'SUnreclaim: 366540 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:51.784 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.784 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.784 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.784 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.784 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.784 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.784 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.784 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.784 14:59:43 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.784 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.784 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.784 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.784 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.784 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.784 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.784 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.784 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.784 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.785 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.785 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.785 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.785 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.785 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.785 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.785 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.785 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.785 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:03:51.785 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.785 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.785 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.785 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.785 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.785 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.785 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.785 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.785 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.785 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.785 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.785 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.785 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.785 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.785 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.785 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.785 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.785 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.785 14:59:43 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.785 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.785 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.785 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.785 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.785 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.785 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.785 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.785 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.785 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.785 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.785 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.785 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.785 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.785 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.785 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.785 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.785 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.785 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:51.785 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.785 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.785 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.785 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.785 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.785 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.785 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.785 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.785 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.785 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.785 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.785 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.785 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.785 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.785 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.785 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.785 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.785 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.785 14:59:43 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.785 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.785 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.785 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.785 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.785 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.785 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.785 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.785 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.785 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.785 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.785 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.786 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.786 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.786 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.786 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.786 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.786 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.786 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:51.786 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.786 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.786 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.786 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.786 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.786 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.786 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.786 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.786 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.786 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.786 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.786 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.786 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.786 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.786 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.786 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.786 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.786 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.786 14:59:43 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.786 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.786 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.786 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.786 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.786 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.786 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.786 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.786 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.786 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.786 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.786 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.786 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.786 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.786 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.786 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.786 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.786 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.786 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.786 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.786 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.786 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.786 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.786 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.786 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.786 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.786 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.786 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:51.786 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:51.786 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:51.786 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:51.786 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:51.786 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:51.786 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:51.786 node0=512 expecting 512 00:03:51.786 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:51.786 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:51.786 
14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:51.786 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:51.786 node1=512 expecting 512 00:03:51.786 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:51.786 00:03:51.786 real 0m3.800s 00:03:51.786 user 0m1.539s 00:03:51.786 sys 0m2.314s 00:03:51.786 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:51.786 14:59:43 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:51.786 ************************************ 00:03:51.786 END TEST per_node_1G_alloc 00:03:51.786 ************************************ 00:03:51.786 14:59:43 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:03:51.786 14:59:43 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:51.786 14:59:43 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:51.786 14:59:43 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:51.786 ************************************ 00:03:51.786 START TEST even_2G_alloc 00:03:51.786 ************************************ 00:03:51.786 14:59:43 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1125 -- # even_2G_alloc 00:03:51.786 14:59:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:03:51.786 14:59:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:52.047 14:59:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:52.047 14:59:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:52.047 14:59:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 
00:03:52.047 14:59:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:52.047 14:59:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:52.047 14:59:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:52.047 14:59:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:52.047 14:59:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:52.047 14:59:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:52.047 14:59:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:52.047 14:59:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:52.047 14:59:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:52.047 14:59:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:52.047 14:59:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:52.047 14:59:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:03:52.047 14:59:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:52.047 14:59:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:52.047 14:59:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:52.047 14:59:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:52.047 14:59:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:52.047 14:59:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:52.047 14:59:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:03:52.047 14:59:43 setup.sh.hugepages.even_2G_alloc -- 
setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:03:52.047 14:59:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:03:52.047 14:59:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:52.047 14:59:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:55.369 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:55.369 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:55.369 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:55.369 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:55.369 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:55.369 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:55.369 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:55.369 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:55.369 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:55.369 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:55.369 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:55.369 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:55.369 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:55.369 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:55.369 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:55.370 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:55.370 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:55.694 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:03:55.694 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:55.694 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:55.694 14:59:47 setup.sh.hugepages.even_2G_alloc 
-- setup/hugepages.sh@91 -- # local sorted_s 00:03:55.694 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:55.694 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:55.694 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:55.694 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:55.694 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:55.694 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:55.694 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:55.694 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:55.694 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:55.694 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.694 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:55.694 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:55.694 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:55.694 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:55.694 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.694 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.694 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 104619208 kB' 'MemAvailable: 108336848 kB' 'Buffers: 2704 kB' 'Cached: 14776444 kB' 'SwapCached: 0 kB' 'Active: 11638900 kB' 'Inactive: 3693560 kB' 'Active(anon): 
11159100 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3693560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 556148 kB' 'Mapped: 199044 kB' 'Shmem: 10605788 kB' 'KReclaimable: 585224 kB' 'Slab: 1468844 kB' 'SReclaimable: 585224 kB' 'SUnreclaim: 883620 kB' 'KernelStack: 27168 kB' 'PageTables: 8800 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 12725680 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 236088 kB' 'VmallocChunk: 0 kB' 'Percpu: 156096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4654452 kB' 'DirectMap2M: 29628416 kB' 'DirectMap1G: 101711872 kB' 00:03:55.694 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.694 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.694 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.694 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.694 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.694 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.694 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.694 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.694 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable 
== \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.694 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.694 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.694 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.694 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.694 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.694 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.694 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.694 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.694 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.694 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.694 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.694 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.694 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.694 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.694 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.694 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.694 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.694 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.694 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.694 14:59:47 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.694 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.694 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.694 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.694 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.694 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.694 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.694 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.694 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.694 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.694 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.694 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.694 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.694 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.694 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.694 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.694 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.694 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.694 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.694 14:59:47 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:03:55.694 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.695 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.695 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.695 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.695 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.695 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.695 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.695 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.695 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.695 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.695 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.695 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.695 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.695 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.695 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.695 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.695 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.695 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.695 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:55.695 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.695 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.695 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.695 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.695 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.695 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.695 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.695 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.695 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.695 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.695 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.695 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.695 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.695 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.695 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.695 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.695 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.695 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.695 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.695 14:59:47 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.695 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.695 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.695 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.695 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.695 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.695 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.695 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.695 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.695 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.695 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.695 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.695 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.695 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.695 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.695 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.695 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.695 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.695 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.695 14:59:47 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:03:55.695 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.695 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.695 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.695 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.695 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.695 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.695 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.695 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.695 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.695 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.695 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.695 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.695 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.695 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.695 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.695 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.695 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.695 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.695 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:03:55.695 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.695 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.695 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.695 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.695 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.695 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.695 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.695 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.695 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.695 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.695 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.695 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.695 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.695 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.695 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.695 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.695 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.695 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.695 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.695 14:59:47 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.695 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.695 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.695 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.695 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.695 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.695 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.695 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.695 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.695 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.695 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.695 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.695 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.695 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.695 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.695 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.695 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.695 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:55.695 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:55.695 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 
00:03:55.695 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:55.695 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:55.695 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:55.695 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:55.695 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:55.695 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.695 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:55.696 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:55.696 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:55.696 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:55.696 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.696 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.696 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 104619784 kB' 'MemAvailable: 108337424 kB' 'Buffers: 2704 kB' 'Cached: 14776448 kB' 'SwapCached: 0 kB' 'Active: 11638364 kB' 'Inactive: 3693560 kB' 'Active(anon): 11158564 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3693560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 555612 kB' 'Mapped: 198972 kB' 'Shmem: 10605792 kB' 'KReclaimable: 585224 kB' 'Slab: 1468820 kB' 'SReclaimable: 585224 kB' 'SUnreclaim: 883596 kB' 'KernelStack: 27136 kB' 'PageTables: 8672 kB' 'SecPageTables: 0 kB' 
'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 12726444 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 236056 kB' 'VmallocChunk: 0 kB' 'Percpu: 156096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4654452 kB' 'DirectMap2M: 29628416 kB' 'DirectMap1G: 101711872 kB' 00:03:55.696 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.696 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.696 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.696 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.696 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.696 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.696 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.696 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.696 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.696 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.696 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.696 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.696 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.696 
14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.696 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.696 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.696 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.696 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.696 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.696 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.696 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.696 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.696 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.696 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.696 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.696 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.696 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.696 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.696 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.696 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.696 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.696 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.696 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.696 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.696 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.696 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.696 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.696 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.696 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.696 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.696 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.696 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.696 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.696 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.696 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.696 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.696 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.696 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.696 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.696 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.696 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.696 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:55.696 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.696 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.696 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.696 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.696 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.696 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.696 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.696 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.696 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.696 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.696 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.696 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.696 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.696 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.696 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.696 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.696 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.696 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.696 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.696 14:59:47 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.696 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.696 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.696 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.696 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.696 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.696 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.696 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.696 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.696 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.696 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.696 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.696 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.696 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.696 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.696 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.696 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.696 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.696 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.696 14:59:47 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:55.696 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.696 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.696 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.696 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.696 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.696 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.696 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.696 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.696 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.696 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.696 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.696 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.697 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.697 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.697 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.697 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.697 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.697 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.697 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
continue 00:03:55.697 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.697 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.697 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.697 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.697 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.697 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.697 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.697 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.697 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.697 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.697 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.697 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.697 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.697 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.697 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.697 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.697 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.697 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.697 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.697 
14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.697 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.697 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.697 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.697 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.697 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.697 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.697 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.697 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.697 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.697 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.697 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.697 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.697 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.697 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.697 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.697 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.697 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.697 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.697 14:59:47 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.697 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.697 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.697 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.697 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.697 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.697 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.697 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.697 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.697 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.697 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.697 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.697 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.697 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.697 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.697 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.697 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.697 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.697 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.697 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:03:55.697 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.697 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.697 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.697 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.697 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.697 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.697 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.697 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.697 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.697 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.697 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.697 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.697 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.697 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.697 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.697 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.697 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.697 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.697 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:55.697 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.697 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.697 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.697 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.697 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.697 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.697 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.697 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.697 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.697 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.697 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.697 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.697 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.697 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.697 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.697 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.697 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.697 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.697 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 
00:03:55.697 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:55.697 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:55.697 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:55.697 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:55.697 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:55.697 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:55.697 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:55.697 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.697 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:55.697 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:55.697 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:55.697 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:55.697 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.697 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.698 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 104619072 kB' 'MemAvailable: 108336712 kB' 'Buffers: 2704 kB' 'Cached: 14776464 kB' 'SwapCached: 0 kB' 'Active: 11637752 kB' 'Inactive: 3693560 kB' 'Active(anon): 11157952 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3693560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 555576 kB' 'Mapped: 198956 kB' 
'Shmem: 10605808 kB' 'KReclaimable: 585224 kB' 'Slab: 1468820 kB' 'SReclaimable: 585224 kB' 'SUnreclaim: 883596 kB' 'KernelStack: 27168 kB' 'PageTables: 8752 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 12725704 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235960 kB' 'VmallocChunk: 0 kB' 'Percpu: 156096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4654452 kB' 'DirectMap2M: 29628416 kB' 'DirectMap1G: 101711872 kB' 00:03:55.698 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.698 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.698 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.698 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.698 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.698 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.698 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.698 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.698 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.698 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.698 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.698 14:59:47 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:55.698 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.698 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.698 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.698 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.698 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.698 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.698 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.698 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.698 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.698 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.698 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.698 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.698 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.698 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.698 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.698 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.698 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.698 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.698 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:55.698 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.698 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.698 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.698 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.698 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.698 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.698 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.698 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.698 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.698 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.698 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.698 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.698 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.698 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.698 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.698 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.698 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.698 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.698 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.698 
14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.698 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.698 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.698 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.698 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.698 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.698 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.698 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.698 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.698 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.698 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.698 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.698 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.698 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.698 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.698 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.698 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.698 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.698 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.698 14:59:47 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.698 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.698 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.698 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.698 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.698 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.698 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.698 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.698 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.698 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.698 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.698 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.698 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.698 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.698 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.698 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.698 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.698 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.698 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.698 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.699 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.699 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.699 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.699 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.699 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.699 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.699 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.699 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.699 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.699 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.699 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.699 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.699 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.699 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.699 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.699 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.699 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.699 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.699 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.699 14:59:47 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.699 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.699 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.699 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.699 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.699 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.699 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.699 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.699 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.699 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.699 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.699 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.699 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.699 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.699 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.699 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.699 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.699 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.699 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.699 14:59:47 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:03:55.699 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.699 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.699 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.699 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.699 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.699 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.699 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.699 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.699 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.699 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.699 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.699 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.699 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.699 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.699 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.699 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.699 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.699 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.699 14:59:47 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:55.699 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.699 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.699 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.699 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.699 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.699 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.699 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.699 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.699 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.699 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.699 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.699 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.699 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.699 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.699 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.699 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.699 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.699 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.699 14:59:47 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:03:55.699 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.699 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.699 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.699 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.699 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.699 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.699 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.699 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.699 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.699 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.699 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.699 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.699 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.699 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.699 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.699 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.699 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.699 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.699 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.699 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.699 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.699 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.699 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.699 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.699 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.699 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.699 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.699 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.699 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.699 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.699 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.699 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.699 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.699 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.699 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.699 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:55.699 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:55.699 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:55.699 14:59:47 
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:55.699 nr_hugepages=1024 00:03:55.700 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:55.700 resv_hugepages=0 00:03:55.700 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:55.700 surplus_hugepages=0 00:03:55.700 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:55.700 anon_hugepages=0 00:03:55.700 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:55.700 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:55.700 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:55.700 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:55.700 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:55.700 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:55.700 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:55.700 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.700 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:55.700 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:55.700 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:55.700 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:55.700 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.700 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:55.700 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 104618848 kB' 'MemAvailable: 108336488 kB' 'Buffers: 2704 kB' 'Cached: 14776504 kB' 'SwapCached: 0 kB' 'Active: 11637152 kB' 'Inactive: 3693560 kB' 'Active(anon): 11157352 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3693560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 554820 kB' 'Mapped: 198896 kB' 'Shmem: 10605848 kB' 'KReclaimable: 585224 kB' 'Slab: 1468820 kB' 'SReclaimable: 585224 kB' 'SUnreclaim: 883596 kB' 'KernelStack: 27088 kB' 'PageTables: 8556 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 12725728 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235960 kB' 'VmallocChunk: 0 kB' 'Percpu: 156096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4654452 kB' 'DirectMap2M: 29628416 kB' 'DirectMap1G: 101711872 kB' 00:03:55.700 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.700 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.700 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.700 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.700 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.700 14:59:47 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:03:55.700 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.700 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.700 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.700 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.700 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.700 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.700 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.700 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.700 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.700 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.700 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.700 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.700 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.700 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.700 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.700 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.700 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.700 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.700 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.700 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.700 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.700 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.700 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.700 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.700 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.700 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.700 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.700 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.700 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.700 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.700 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.700 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.700 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.700 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.700 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.700 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.700 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.700 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.700 
14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.700 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.700 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.700 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.700 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.700 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.700 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.700 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.700 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.700 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.700 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.700 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.700 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.700 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.700 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.700 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.700 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.700 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.700 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.700 14:59:47 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.700 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.700 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.700 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.700 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.700 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.700 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.700 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.700 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.700 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.700 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.700 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.700 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.700 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.700 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.700 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.700 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.700 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.700 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.700 14:59:47 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.700 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.700 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.701 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.701 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.701 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.701 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.701 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.701 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.701 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.701 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.701 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.701 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.701 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.701 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.701 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.701 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.701 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.701 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.701 14:59:47 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.701 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.701 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.701 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.701 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.701 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.701 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.701 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.701 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.701 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.701 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.701 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.701 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.701 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.701 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.701 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.701 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.701 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.701 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.701 14:59:47 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.701 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.701 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.701 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.701 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.701 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.701 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.701 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.701 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.701 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.701 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.701 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.701 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.701 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.701 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.701 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.701 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.701 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.701 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.701 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:03:55.701 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.701 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.701 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.701 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.701 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.701 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.701 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.701 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.701 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.701 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.701 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.701 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.701 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.701 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.701 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.701 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.701 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.701 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.701 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:55.701 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.701 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.701 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.701 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.701 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.701 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.701 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.701 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.701 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.701 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.701 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.701 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.701 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.701 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.701 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.701 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.701 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.701 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.701 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- 
# continue 00:03:55.701 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.701 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.701 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.701 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.701 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.701 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.701 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.701 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.701 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.701 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.701 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.701 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.701 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.701 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.701 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.701 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:55.701 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:55.701 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:55.701 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:55.701 
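The trace above is the tail of a `get_meminfo` lookup: the helper scans meminfo lines with `IFS=': ' read -r var val _`, `continue`s past every non-matching field, and finally echoes the value once `HugePages_Total` matches. A minimal sketch of that pattern, reconstructed from the trace — the function name and layout are inferred from the `setup/common.sh` line numbers and may differ from the actual SPDK helper; `MEMINFO_FILE` is a hypothetical test hook not present in the original:

```shell
#!/usr/bin/env bash
shopt -s extglob   # needed for the "Node +([0-9]) " prefix strip below

# Sketch (assumed name/layout): fetch one field from /proc/meminfo, or from
# the per-NUMA-node meminfo file when a node number is supplied.
get_meminfo() {
    local get=$1 node=${2-} var val _
    local mem_f=${MEMINFO_FILE:-/proc/meminfo}   # MEMINFO_FILE: test hook, an assumption
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    # Per-node meminfo lines are prefixed "Node <n> "; strip that prefix.
    mem=("${mem[@]#Node +([0-9]) }")
    local line
    for line in "${mem[@]}"; do
        # Split "Field: value kB" into var/val; ignore the trailing unit.
        IFS=': ' read -r var val _ <<<"$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}
```

Scanning linearly and `continue`-ing past non-matches, as the trace shows, keeps the helper dependency-free (no `awk`/`grep`), at the cost of the very verbose `set -x` output seen here.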
14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:55.701 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:55.701 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:55.701 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:55.701 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:55.701 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:55.702 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:55.702 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:55.702 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:55.702 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:55.702 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:55.702 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:03:55.702 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:55.702 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:55.702 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.702 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:55.702 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:55.702 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:55.702 
14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:55.702 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.702 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.702 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 59222896 kB' 'MemUsed: 6436112 kB' 'SwapCached: 0 kB' 'Active: 2529392 kB' 'Inactive: 237284 kB' 'Active(anon): 2289968 kB' 'Inactive(anon): 0 kB' 'Active(file): 239424 kB' 'Inactive(file): 237284 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2543856 kB' 'Mapped: 84856 kB' 'AnonPages: 225932 kB' 'Shmem: 2067148 kB' 'KernelStack: 15400 kB' 'PageTables: 5020 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 271004 kB' 'Slab: 787692 kB' 'SReclaimable: 271004 kB' 'SUnreclaim: 516688 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:55.702 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.702 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.702 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.702 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.702 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.702 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.702 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.702 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:55.702 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.702 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.702 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.702 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.702 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.702 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.702 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.702 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.702 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.702 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.702 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.702 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.702 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.702 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.702 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.702 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.702 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.702 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.702 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.702 14:59:47 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.702 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.702 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.702 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.702 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.702 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.702 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.702 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.702 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.702 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.702 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.702 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.702 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.702 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.702 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.702 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.702 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.702 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.702 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.702 14:59:47 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.702 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.702 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.702 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.702 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.702 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.702 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.702 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.702 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.702 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.702 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.702 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.702 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.702 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.702 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.702 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.702 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.702 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.702 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.702 14:59:47 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:03:55.702 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.702 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.702 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.702 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.702 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.702 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.702 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.702 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.702 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.702 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.702 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.702 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.702 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.702 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.702 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.702 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.702 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.702 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.702 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.702 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.702 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.702 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.702 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.702 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.702 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.702 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.702 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.702 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.702 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.702 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.703 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.703 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.703 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.703 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.703 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.703 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.703 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.703 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.703 14:59:47 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.703 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.703 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.703 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.703 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.703 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.703 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.703 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.703 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.703 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.703 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.703 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.703 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.703 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.703 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.703 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.703 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.703 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.703 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.703 14:59:47 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.703 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.703 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.703 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.703 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.703 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.703 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.703 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.703 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.703 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.703 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.703 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.703 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.703 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.703 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.703 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.703 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.703 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.703 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.703 14:59:47 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.703 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.703 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.703 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:55.703 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:55.703 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:55.703 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:55.703 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:55.703 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:55.703 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:55.703 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:03:55.703 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:55.703 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:55.703 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.703 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:55.703 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:55.703 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:55.703 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:55.703 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:55.703 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679836 kB' 'MemFree: 45396316 kB' 'MemUsed: 15283520 kB' 'SwapCached: 0 kB' 'Active: 9107812 kB' 'Inactive: 3456276 kB' 'Active(anon): 8867436 kB' 'Inactive(anon): 0 kB' 'Active(file): 240376 kB' 'Inactive(file): 3456276 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 12235376 kB' 'Mapped: 114040 kB' 'AnonPages: 328924 kB' 'Shmem: 8538724 kB' 'KernelStack: 11704 kB' 'PageTables: 3588 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 314220 kB' 'Slab: 681128 kB' 'SReclaimable: 314220 kB' 'SUnreclaim: 366908 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:55.703 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.703 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.703 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.703 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.703 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.703 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.703 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.703 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.703 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.703 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.703 14:59:47 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.703 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.703 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.703 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.703 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.703 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.703 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.703 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.703 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.703 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.703 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.703 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.703 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.703 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.703 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.703 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.703 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.703 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.703 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.704 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.704 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.704 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.704 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.704 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.704 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.704 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.704 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.704 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.704 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.704 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.704 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.704 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.704 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.704 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.704 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.704 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.704 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.704 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.704 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:55.704 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.704 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.704 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.704 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.704 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.704 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.704 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.704 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.704 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.704 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.704 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.704 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.704 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.704 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.704 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.704 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.704 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.704 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.704 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.704 14:59:47 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.704 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.704 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.704 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.704 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.704 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.704 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.704 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.704 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.704 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.704 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.704 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.704 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.704 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.704 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.704 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.704 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.704 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.704 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.704 14:59:47 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.704 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.704 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.704 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.704 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.704 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.704 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.704 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.704 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.704 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.704 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.704 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.704 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.704 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.704 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.704 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.704 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.704 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.704 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.704 14:59:47 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.704 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.704 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.704 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.704 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.704 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.704 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.704 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.704 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.704 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.704 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.704 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.704 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.704 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.704 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.704 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.704 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.704 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.704 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.704 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 
-- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.704 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.704 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.704 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.704 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.704 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.704 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.704 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.704 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.704 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.704 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.704 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.704 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.704 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.704 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.704 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.704 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.704 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.704 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.704 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var 
val _
00:03:55.704 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:55.704 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:55.704 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:55.705 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:55.705 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:55.705 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:55.705 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:55.705 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:55.705 node0=512 expecting 512
00:03:55.705 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:55.705 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:55.705 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:55.705 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:03:55.705 node1=512 expecting 512
00:03:55.705 14:59:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:03:55.705
00:03:55.705 real 0m3.782s
00:03:55.705 user 0m1.527s
00:03:55.705 sys 0m2.293s
00:03:55.705 14:59:47 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:03:55.705 14:59:47 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:55.705 ************************************
00:03:55.705 END TEST even_2G_alloc
************************************
00:03:55.705
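The long runs of `continue` records above are bash xtrace from the `get_meminfo` helper in setup/common.sh: it reads the meminfo file one `Key: value` pair at a time with `IFS=': ' read -r var val _`, skips every key that is not the one requested (here `HugePages_Surp`), and echoes the matching value (0). A minimal sketch of that scan pattern, under assumed simplifications (hypothetical function name, per-node `Node N ` prefix handling omitted; not the verbatim SPDK script):

```shell
#!/usr/bin/env bash
# Sketch of the meminfo field scan traced in the log above.
# Each non-matching key appears in the xtrace as one "continue" record.
get_meminfo_sketch() {
    local get=$1 mem_f=${2:-/proc/meminfo}
    local var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # skip keys we were not asked for
        echo "$val"                        # numeric value; trailing "kB" lands in $_
        return 0
    done < "$mem_f"
    return 1
}
```

In this log `node=` is empty (the `[[ -e /sys/devices/system/node/node/meminfo ]]` check fails), so the helper falls back to the system-wide `/proc/meminfo`; with a node number set it would read the per-node meminfo file instead.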
14:59:47 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:03:55.705 14:59:47 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:03:55.705 14:59:47 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:03:55.705 14:59:47 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:55.705 ************************************
00:03:55.705 START TEST odd_alloc
************************************
00:03:55.705 14:59:47 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1125 -- # odd_alloc
00:03:55.705 14:59:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:03:55.705 14:59:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176
00:03:55.705 14:59:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:55.705 14:59:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:55.705 14:59:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:03:55.705 14:59:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:55.705 14:59:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:55.705 14:59:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:55.705 14:59:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:03:55.705 14:59:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:55.705 14:59:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:55.705 14:59:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:55.705 14:59:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:55.705 14:59:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:55.705 14:59:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:55.705 14:59:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:55.705 14:59:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513
00:03:55.705 14:59:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1
00:03:55.705 14:59:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:55.705 14:59:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513
00:03:55.705 14:59:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0
00:03:55.705 14:59:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0
00:03:55.705 14:59:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:55.705 14:59:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:03:55.705 14:59:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:03:55.705 14:59:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output
00:03:55.705 14:59:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:55.705 14:59:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:59.006 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver
00:03:59.006 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver
00:03:59.006 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver
00:03:59.006 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver
00:03:59.006 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver
00:03:59.006 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver
00:03:59.006 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver
00:03:59.006 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver
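The odd_alloc trace above requests 2098176 kB (1025 pages of 2048 kB) spread over `_no_nodes=2`, and the `nodes_test[_no_nodes - 1]=512` then `=513` records show where the odd page lands: each pass hands the current node `remaining / nodes_left` pages, highest-numbered node first. A sketch of that split, with variable names borrowed from the trace (assumed logic, not the verbatim hugepages.sh):

```shell
#!/usr/bin/env bash
# Distribute _nr_hugepages as evenly as possible across _no_nodes NUMA
# nodes, filling the highest-numbered node first, as the xtrace suggests.
split_hugepages() {
    local _nr_hugepages=$1 _no_nodes=$2
    local -a nodes_test
    while (( _no_nodes > 0 )); do
        nodes_test[_no_nodes - 1]=$(( _nr_hugepages / _no_nodes ))
        (( _nr_hugepages -= nodes_test[_no_nodes - 1] ))   # pages still to place
        (( _no_nodes -= 1 ))
    done
    echo "${nodes_test[@]}"   # node0 first
}
```

With 1025 pages on 2 nodes this yields node1=512 and node0=513, matching the trace; the earlier even_2G_alloc run (1024 pages) splits cleanly into 512 per node, which is what its `node0=512 expecting 512` check verifies.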
00:03:59.006 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:59.006 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:59.006 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:59.006 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:59.006 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:59.006 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:59.006 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:59.006 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:59.006 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:59.267 14:59:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:03:59.267 14:59:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:03:59.267 14:59:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:59.267 14:59:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:59.267 14:59:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:59.267 14:59:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:59.267 14:59:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:59.267 14:59:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:59.267 14:59:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:59.267 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:59.267 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:59.267 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:59.267 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:59.267 14:59:51 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:59.267 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:59.267 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:59.267 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:59.267 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:59.267 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.267 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.267 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 104612776 kB' 'MemAvailable: 108330416 kB' 'Buffers: 2704 kB' 'Cached: 14776620 kB' 'SwapCached: 0 kB' 'Active: 11639416 kB' 'Inactive: 3693560 kB' 'Active(anon): 11159616 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3693560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 556944 kB' 'Mapped: 199304 kB' 'Shmem: 10605964 kB' 'KReclaimable: 585224 kB' 'Slab: 1468084 kB' 'SReclaimable: 585224 kB' 'SUnreclaim: 882860 kB' 'KernelStack: 27104 kB' 'PageTables: 8616 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508424 kB' 'Committed_AS: 12726788 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235896 kB' 'VmallocChunk: 0 kB' 'Percpu: 156096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4654452 kB' 'DirectMap2M: 29628416 kB' 
'DirectMap1G: 101711872 kB' 00:03:59.267 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.267 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.267 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.267 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.267 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.267 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.267 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.268 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.268 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.268 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.268 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.268 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.268 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.268 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.268 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.268 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.268 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.268 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.268 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.268 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:59.268 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.268 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.268 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.268 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.268 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.268 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.268 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.268 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.268 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.268 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.268 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.268 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.268 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.268 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.268 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.268 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.268 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.268 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.268 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.268 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.268 14:59:51 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.268 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.268 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.268 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.268 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.268 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.268 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.268 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.268 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.268 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.268 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.268 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.268 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.268 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.268 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.268 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.268 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.268 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.268 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.268 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.268 14:59:51 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.268 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.268 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.268 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.268 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.268 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.268 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.268 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.268 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.268 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.268 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.268 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.268 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.268 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.268 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.268 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.268 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.268 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.268 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.268 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.268 14:59:51 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.268 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.268 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.268 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.268 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.268 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.268 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.268 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.268 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.268 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.268 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.268 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.268 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.268 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.268 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.268 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.268 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.268 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.268 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.268 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.268 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ 
SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.268 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.268 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.268 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.268 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.268 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.268 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.268 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.268 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.268 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.268 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.268 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.268 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.268 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.268 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.268 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.268 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.268 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.268 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.268 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.268 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.268 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.268 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.268 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.268 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.268 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.268 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.268 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.268 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.268 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.268 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.269 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.269 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.269 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.269 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.269 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.269 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.269 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.269 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.269 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.269 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.269 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.269 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.269 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.269 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.269 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.269 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.269 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.269 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.269 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.269 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.269 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.269 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.269 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.269 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.269 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.269 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.269 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.269 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.269 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.269 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.269 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:59.269 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:59.269 14:59:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:59.269 14:59:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:59.269 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:59.269 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:59.269 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:59.269 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:59.534 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:59.534 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:59.534 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:59.534 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:59.534 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:59.534 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.534 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.534 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 104614240 kB' 'MemAvailable: 108331880 kB' 'Buffers: 2704 kB' 'Cached: 14776624 kB' 'SwapCached: 0 kB' 'Active: 11639100 kB' 'Inactive: 3693560 kB' 'Active(anon): 11159300 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3693560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 
'Writeback: 0 kB' 'AnonPages: 556752 kB' 'Mapped: 198932 kB' 'Shmem: 10605968 kB' 'KReclaimable: 585224 kB' 'Slab: 1468164 kB' 'SReclaimable: 585224 kB' 'SUnreclaim: 882940 kB' 'KernelStack: 27136 kB' 'PageTables: 8680 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508424 kB' 'Committed_AS: 12726808 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235848 kB' 'VmallocChunk: 0 kB' 'Percpu: 156096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4654452 kB' 'DirectMap2M: 29628416 kB' 'DirectMap1G: 101711872 kB' 00:03:59.534 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.534 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.534 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.534 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.534 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.534 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.534 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.534 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.534 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.534 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.534 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.534 14:59:51 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.534 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.534 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.535 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.535 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.535 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.535 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.535 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.535 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.535 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.535 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.535 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.535 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.535 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.535 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.535 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.535 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.535 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.535 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.535 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.535 14:59:51 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.535 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.535 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.535 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.535 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.535 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.535 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.535 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.535 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.535 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.535 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.535 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.535 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.535 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.535 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.535 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.535 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.535 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.535 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.535 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.535 14:59:51 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.535 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.535 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.535 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.535 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.535 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.535 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.535 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.535 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.535 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.535 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.535 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.535 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.535 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.535 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.535 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.535 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.535 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.535 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.535 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.535 14:59:51 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.535 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.535 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.535 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.535 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.535 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.535 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.535 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.535 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.535 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.535 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.535 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.535 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.535 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.535 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.535 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.535 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.535 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.535 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.535 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.535 14:59:51 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:03:59.535 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.535 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.535 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.535 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.535 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.535 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.535 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.535 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.535 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.535 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.535 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.535 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.535 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.535 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.535 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.535 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.535 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.535 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.535 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.535 14:59:51 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:59.535 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.535 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.535 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.535 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.535 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.535 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.535 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.535 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.535 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.535 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.535 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.535 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.535 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.535 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.535 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.535 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.535 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.535 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.535 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.535 14:59:51 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:59.536 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.536 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.536 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.536 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.536 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.536 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.536 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.536 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.536 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.536 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.536 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.536 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.536 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.536 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.536 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.536 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.536 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.536 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.536 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.536 14:59:51 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:59.536 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.536 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.536 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.536 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.536 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.536 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.536 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.536 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.536 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.536 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.536 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.536 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.536 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.536 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.536 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.536 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.536 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.536 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.536 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.536 14:59:51 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:59.536 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.536 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.536 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.536 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.536 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.536 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.536 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.536 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.536 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.536 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.536 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.536 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.536 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.536 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.536 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.536 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.536 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.536 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.536 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.536 14:59:51 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:59.536 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.536 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.536 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.536 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.536 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.536 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.536 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.536 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.536 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.536 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.536 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.536 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.536 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.536 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:59.536 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:59.536 14:59:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:59.536 14:59:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:59.536 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:59.536 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:59.536 14:59:51 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@19 -- # local var val 00:03:59.536 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:59.536 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:59.536 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:59.536 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:59.536 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:59.536 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:59.536 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.536 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.536 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 104614596 kB' 'MemAvailable: 108332236 kB' 'Buffers: 2704 kB' 'Cached: 14776640 kB' 'SwapCached: 0 kB' 'Active: 11639104 kB' 'Inactive: 3693560 kB' 'Active(anon): 11159304 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3693560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 556752 kB' 'Mapped: 198932 kB' 'Shmem: 10605984 kB' 'KReclaimable: 585224 kB' 'Slab: 1468164 kB' 'SReclaimable: 585224 kB' 'SUnreclaim: 882940 kB' 'KernelStack: 27136 kB' 'PageTables: 8680 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508424 kB' 'Committed_AS: 12726828 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235848 kB' 'VmallocChunk: 0 kB' 'Percpu: 156096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4654452 kB' 'DirectMap2M: 29628416 kB' 'DirectMap1G: 101711872 kB' 00:03:59.536 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.536 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.536 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.536 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.536 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.536 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.536 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.536 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.536 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.536 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.536 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.536 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.536 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.536 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.536 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.537 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.537 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.537 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
continue 00:03:59.537 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.537 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.537 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.537 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.537 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.537 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.537 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.537 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.537 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.537 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.537 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.537 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.537 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.537 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.537 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.537 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.537 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.537 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.537 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.537 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:03:59.537 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.537 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.537 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.537 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.537 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.537 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.537 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.537 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.537 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.537 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.537 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.537 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.537 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.537 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.537 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.537 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.537 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.537 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.537 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.537 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:03:59.537 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.537 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.537 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.537 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.537 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.537 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.537 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.537 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.537 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.537 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.537 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.537 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.537 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.537 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.537 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.537 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.537 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.537 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.537 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.537 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.537 14:59:51 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.537 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.537 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.537 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.537 [trace condensed: the @31/@32 skip pattern repeats for each remaining non-matching meminfo key, Mapped through HugePages_Free, in dump order]
00:03:59.538 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.538 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:59.538 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:59.538 14:59:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:59.538 14:59:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:03:59.538 nr_hugepages=1025 00:03:59.538 14:59:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:59.538 resv_hugepages=0 00:03:59.538 14:59:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:59.538 surplus_hugepages=0 00:03:59.538 14:59:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:59.538 anon_hugepages=0 00:03:59.538 14:59:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:59.538 14:59:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:03:59.538 14:59:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:59.538 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:59.538 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:59.538 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:59.538 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:59.538 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:59.538 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:59.538 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:59.538 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:59.538 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:59.538 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.538 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.538 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 104615100 kB' 'MemAvailable: 108332740 kB' 'Buffers: 2704 kB' 'Cached: 14776676 kB' 'SwapCached: 0 kB' 'Active: 11639500 kB' 'Inactive: 3693560 kB' 'Active(anon): 11159700 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3693560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 557072 kB' 'Mapped: 198932 kB' 'Shmem: 10606020 kB' 'KReclaimable: 585224 kB' 'Slab: 1468164 kB' 'SReclaimable: 585224 kB' 'SUnreclaim: 882940 kB' 'KernelStack: 27136 kB' 'PageTables: 8680 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508424 kB' 'Committed_AS: 12726848 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235848 kB' 'VmallocChunk: 0 kB' 'Percpu: 156096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4654452 kB' 'DirectMap2M: 29628416 kB' 'DirectMap1G: 101711872 kB' 00:03:59.538 [trace condensed: the @31/@32 skip pattern repeats for each non-matching key, MemTotal through Unaccepted, in dump order]
00:03:59.540 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.540 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:03:59.540 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:59.540 14:59:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:59.540 14:59:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:59.540 14:59:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:03:59.540 14:59:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:59.540 14:59:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:59.540 14:59:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:59.540 14:59:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:03:59.540 14:59:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:59.540 14:59:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:59.540 14:59:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:59.540 14:59:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:59.540 14:59:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:59.540 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:59.540 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:03:59.540 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:59.540 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:59.540 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:59.540 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:59.540 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:59.540 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:59.541 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:59.541 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 59200220 kB' 'MemUsed: 6458788 kB' 'SwapCached: 0 kB' 'Active: 2530116 kB' 'Inactive: 237284 kB' 'Active(anon): 2290692 kB' 'Inactive(anon): 0 kB' 'Active(file): 239424 kB' 'Inactive(file): 237284 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2543964 kB' 'Mapped: 84856 kB' 'AnonPages: 226712 kB' 'Shmem: 2067256 kB' 'KernelStack: 15416 kB' 'PageTables: 5124 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 271004 kB' 'Slab: 787172 kB' 'SReclaimable: 271004 kB' 'SUnreclaim: 516168 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:59.541 [trace condensed: the @31/@32 skip pattern begins for the node0 scan, skipping MemTotal, MemFree, MemUsed so far, in dump order] 00:03:59.541 14:59:51 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.541 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.541 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.541 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.541 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.541 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.541 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.541 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.541 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.541 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.541 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.541 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.541 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.541 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.541 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.541 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.541 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.541 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.541 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.541 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.541 14:59:51 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.541 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.541 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.541 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.541 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.541 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.541 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.541 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.541 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.541 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.541 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.541 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.541 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.541 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.541 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.541 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.541 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.541 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.541 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.541 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.541 14:59:51 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.541 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.541 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.541 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.541 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.541 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.541 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.541 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.541 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.541 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.541 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.541 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.541 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.541 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.541 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.541 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.541 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.541 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.541 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.541 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.541 14:59:51 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.541 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.541 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.541 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.541 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.541 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.541 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.541 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.541 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.541 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.541 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.541 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.541 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.541 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.541 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.541 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.541 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.541 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.541 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.541 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.541 14:59:51 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.541 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.541 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.541 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.541 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.541 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.541 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.541 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.541 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.541 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.541 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.542 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.542 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.542 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.542 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.542 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.542 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.542 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.542 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.542 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.542 14:59:51 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.542 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.542 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.542 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.542 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.542 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.542 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.542 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.542 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.542 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.542 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.542 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.542 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.542 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.542 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.542 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.542 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.542 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.542 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.542 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.542 
14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.542 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.542 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.542 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.542 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.542 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.542 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.542 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.542 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.542 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.542 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.542 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.542 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.542 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:59.542 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:59.542 14:59:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:59.542 14:59:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:59.542 14:59:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:59.542 14:59:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:59.542 14:59:51 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@17 -- # local get=HugePages_Surp 00:03:59.542 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:03:59.542 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:59.542 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:59.542 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:59.542 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:59.542 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:59.542 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:59.542 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:59.542 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.542 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.542 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679836 kB' 'MemFree: 45411736 kB' 'MemUsed: 15268100 kB' 'SwapCached: 0 kB' 'Active: 9109008 kB' 'Inactive: 3456276 kB' 'Active(anon): 8868632 kB' 'Inactive(anon): 0 kB' 'Active(file): 240376 kB' 'Inactive(file): 3456276 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 12235424 kB' 'Mapped: 114076 kB' 'AnonPages: 329992 kB' 'Shmem: 8538772 kB' 'KernelStack: 11688 kB' 'PageTables: 3420 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 314220 kB' 'Slab: 681000 kB' 'SReclaimable: 314220 kB' 'SUnreclaim: 366780 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:03:59.542 14:59:51 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.542 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.542 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.542 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.542 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.542 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.542 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.542 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.542 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.542 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.542 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.542 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.542 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.542 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.542 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.542 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.542 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.542 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.542 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.542 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.542 14:59:51 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.542 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.542 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.542 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.542 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.542 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.542 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.542 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.542 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.542 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.542 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.542 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.542 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.542 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.542 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.542 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.542 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.542 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.542 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.542 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.542 14:59:51 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.542 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.542 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.542 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.542 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.542 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.542 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.542 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.542 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.542 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.542 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.543 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.543 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.543 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.543 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.543 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.543 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.543 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.543 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.543 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.543 14:59:51 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.543 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.543 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.543 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.543 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.543 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.543 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.543 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.543 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.543 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.543 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.543 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.543 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.543 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.543 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.543 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.543 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.543 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.543 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.543 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.543 14:59:51 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.543 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.543 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.543 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.543 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.543 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.543 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.543 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.543 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.543 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.543 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.543 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.543 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.543 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.543 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.543 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.543 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.543 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.543 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.543 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.543 14:59:51 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.543 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.543 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.543 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.543 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.543 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.543 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.543 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.543 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.543 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.543 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.543 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.543 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.543 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.543 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.543 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.543 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.543 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.543 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.543 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.543 14:59:51 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.543 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.543 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.543 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.543 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.543 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.543 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.543 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.543 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.543 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.543 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.543 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.543 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.543 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.543 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.543 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.543 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.543 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.543 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.543 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.543 14:59:51 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.543 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.543 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.543 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.543 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.543 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:59.543 14:59:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:59.543 14:59:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:59.543 14:59:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:59.543 14:59:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:59.543 14:59:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:59.543 14:59:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:03:59.543 node0=512 expecting 513 00:03:59.543 14:59:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:59.543 14:59:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:59.543 14:59:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:59.543 14:59:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:03:59.543 node1=513 expecting 512 00:03:59.543 14:59:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:03:59.543 00:03:59.543 real 0m3.793s 00:03:59.543 user 0m1.489s 00:03:59.543 sys 0m2.364s 00:03:59.543 14:59:51 
setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:59.543 14:59:51 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:59.543 ************************************ 00:03:59.543 END TEST odd_alloc 00:03:59.543 ************************************ 00:03:59.543 14:59:51 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:03:59.543 14:59:51 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:59.543 14:59:51 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:59.543 14:59:51 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:59.543 ************************************ 00:03:59.543 START TEST custom_alloc 00:03:59.544 ************************************ 00:03:59.544 14:59:51 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1125 -- # custom_alloc 00:03:59.544 14:59:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:03:59.544 14:59:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:03:59.544 14:59:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:03:59.544 14:59:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:03:59.544 14:59:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:59.544 14:59:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:03:59.544 14:59:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:59.544 14:59:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:59.544 14:59:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:59.544 14:59:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:59.544 
14:59:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:59.544 14:59:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:59.544 14:59:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:59.544 14:59:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:59.544 14:59:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:59.544 14:59:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:59.544 14:59:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:59.544 14:59:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:59.544 14:59:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:59.544 14:59:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:59.544 14:59:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:59.544 14:59:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:03:59.544 14:59:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:59.544 14:59:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:59.544 14:59:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:59.544 14:59:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:59.544 14:59:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:59.544 14:59:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:59.544 14:59:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:59.544 14:59:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 
00:03:59.544 14:59:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:03:59.544 14:59:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:59.544 14:59:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:59.544 14:59:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:59.544 14:59:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:59.544 14:59:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:59.544 14:59:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:59.544 14:59:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:59.544 14:59:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:59.544 14:59:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:59.544 14:59:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:59.544 14:59:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:59.544 14:59:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:59.544 14:59:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:59.544 14:59:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:59.544 14:59:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:59.544 14:59:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:59.544 14:59:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:03:59.544 14:59:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 
00:03:59.544 14:59:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:59.544 14:59:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:59.544 14:59:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:59.544 14:59:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:59.544 14:59:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:59.544 14:59:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:59.544 14:59:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:59.544 14:59:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:59.544 14:59:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:59.544 14:59:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:59.544 14:59:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:59.544 14:59:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:59.544 14:59:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:59.544 14:59:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:03:59.544 14:59:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:59.544 14:59:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:59.544 14:59:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:59.544 14:59:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # 
nodes_test[_no_nodes]=1024 00:03:59.544 14:59:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:59.544 14:59:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:03:59.544 14:59:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:03:59.544 14:59:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:59.544 14:59:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:02.845 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:02.845 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:02.845 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:02.845 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:02.845 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:02.845 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:02.845 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:02.845 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:02.845 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:02.845 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:04:02.845 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:02.845 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:02.845 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:02.845 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:02.845 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:03.105 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:03.105 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:03.371 14:59:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:04:03.371 14:59:55 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@188 -- # verify_nr_hugepages 00:04:03.371 14:59:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:04:03.371 14:59:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:03.371 14:59:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:03.371 14:59:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:03.371 14:59:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:03.371 14:59:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:03.371 14:59:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:03.371 14:59:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:03.371 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:03.371 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:03.371 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:03.371 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:03.371 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.371 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:03.371 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:03.371 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.371 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.371 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.371 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.372 
14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 103593832 kB' 'MemAvailable: 107311472 kB' 'Buffers: 2704 kB' 'Cached: 14776796 kB' 'SwapCached: 0 kB' 'Active: 11640488 kB' 'Inactive: 3693560 kB' 'Active(anon): 11160688 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3693560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 557836 kB' 'Mapped: 199008 kB' 'Shmem: 10606140 kB' 'KReclaimable: 585224 kB' 'Slab: 1468216 kB' 'SReclaimable: 585224 kB' 'SUnreclaim: 882992 kB' 'KernelStack: 27104 kB' 'PageTables: 8524 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985160 kB' 'Committed_AS: 12728760 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235848 kB' 'VmallocChunk: 0 kB' 'Percpu: 156096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4654452 kB' 'DirectMap2M: 29628416 kB' 'DirectMap1G: 101711872 kB' 00:04:03.372 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.372 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.372 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.372 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.372 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.372 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.372 
14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.372 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.372 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.372 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.372 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.372 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.372 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.372 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.372 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.372 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.372 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.372 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.372 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.372 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.372 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.372 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.372 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.372 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.372 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.372 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 
-- # continue 00:04:03.372 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.372 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.372 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.372 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.372 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.372 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.372 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.372 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.372 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.372 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.372 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.372 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.372 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.372 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.372 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.372 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.372 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.372 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.372 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.372 14:59:55 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.372 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.372 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.372 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.372 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.372 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.372 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.372 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.372 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.372 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.372 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.372 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.372 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.372 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.372 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.372 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.372 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.372 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.372 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.372 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s 
]] 00:04:03.372 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.372 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.372 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.372 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.372 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.372 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.372 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.372 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.372 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.372 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.372 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.372 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.372 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.372 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.372 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.372 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.372 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.372 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.372 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.372 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.372 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.372 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.372 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.372 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.372 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.372 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.372 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.372 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.372 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.372 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.372 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.372 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.372 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.372 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.372 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.372 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.372 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.372 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.372 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.372 14:59:55 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.372 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.372 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.372 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.372 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.372 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.372 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.372 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.373 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.373 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.373 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.373 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.373 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.373 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.373 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.373 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.373 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.373 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.373 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.373 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.373 
14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.373 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.373 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.373 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.373 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.373 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.373 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.373 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.373 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.373 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.373 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.373 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.373 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.373 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.373 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.373 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.373 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.373 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.373 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.373 14:59:55 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:03.373 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.373 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.373 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.373 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.373 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.373 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.373 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.373 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.373 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.373 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.373 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.373 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.373 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.373 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.373 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.373 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.373 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.373 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:03.373 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:03.373 
14:59:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:03.373 14:59:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:03.373 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:03.373 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:03.373 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:03.373 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:03.373 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.373 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:03.373 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:03.373 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.373 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.373 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.373 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.373 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 103595596 kB' 'MemAvailable: 107313236 kB' 'Buffers: 2704 kB' 'Cached: 14776816 kB' 'SwapCached: 0 kB' 'Active: 11641296 kB' 'Inactive: 3693560 kB' 'Active(anon): 11161496 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3693560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 558776 kB' 'Mapped: 199008 kB' 'Shmem: 10606160 kB' 'KReclaimable: 585224 kB' 'Slab: 1468216 kB' 'SReclaimable: 585224 kB' 'SUnreclaim: 882992 kB' 
'KernelStack: 27120 kB' 'PageTables: 8628 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985160 kB' 'Committed_AS: 12730964 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235880 kB' 'VmallocChunk: 0 kB' 'Percpu: 156096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4654452 kB' 'DirectMap2M: 29628416 kB' 'DirectMap1G: 101711872 kB' 00:04:03.373 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.373 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.373 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.373 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.373 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.373 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.373 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.373 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.373 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.373 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.373 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.373 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.373 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.373 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.373 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.373 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.373 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.373 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.373 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.373 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.373 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.373 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.373 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.373 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.373 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.373 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.373 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.373 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.373 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.373 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.373 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.373 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.373 14:59:55 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.373 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.373 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.373 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.373 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.373 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.373 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.374 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.374 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.374 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.374 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.374 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.374 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.374 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.374 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.374 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.374 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.374 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.374 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.374 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:04:03.374 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.374 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.374 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.374 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.374 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.374 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.374 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.374 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.374 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.374 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.374 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.374 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.374 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.374 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.374 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.374 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.374 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.374 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.374 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.374 14:59:55 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:03.374 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.374 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.374 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.374 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.374 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.374 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.374 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.374 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.374 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.374 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.374 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.374 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.374 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.374 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.374 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.374 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.374 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.374 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.374 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.374 14:59:55 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.374 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.374 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.374 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.374 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.374 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.374 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.374 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.374 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.374 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.374 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.374 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.374 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.374 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.374 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.374 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.374 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.374 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.374 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.374 14:59:55 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:03.374 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.374 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.374 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.374 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.374 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.374 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.374 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.374 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.374 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.374 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.374 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.374 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.374 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.374 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.374 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.374 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.374 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.374 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.374 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
00:04:03.374 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.374 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.374 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.374 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.374 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.374 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.374 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.374 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.374 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.374 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.374 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.374 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.374 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.374 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.374 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.374 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.374 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.374 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.374 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.375 14:59:55 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.375 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.375 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.375 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.375 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.375 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.375 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.375 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.375 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.375 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.375 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.375 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.375 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.375 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.375 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.375 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.375 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.375 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.375 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.375 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.375 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.375 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.375 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.375 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.375 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.375 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.375 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.375 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.375 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.375 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.375 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.375 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.375 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.375 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.375 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.375 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.375 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.375 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.375 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.375 14:59:55 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.375 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.375 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.375 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.375 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.375 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.375 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.375 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.375 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.375 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.375 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.375 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.375 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.375 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.375 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.375 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.375 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.375 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:03.375 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:03.375 14:59:55 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@99 -- # surp=0 00:04:03.375 14:59:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:03.375 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:03.375 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:03.375 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:03.375 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:03.375 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.375 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:03.375 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:03.375 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.375 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.375 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 103597944 kB' 'MemAvailable: 107315584 kB' 'Buffers: 2704 kB' 'Cached: 14776832 kB' 'SwapCached: 0 kB' 'Active: 11640540 kB' 'Inactive: 3693560 kB' 'Active(anon): 11160740 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3693560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 557888 kB' 'Mapped: 198936 kB' 'Shmem: 10606176 kB' 'KReclaimable: 585224 kB' 'Slab: 1468232 kB' 'SReclaimable: 585224 kB' 'SUnreclaim: 883008 kB' 'KernelStack: 27168 kB' 'PageTables: 8400 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985160 kB' 'Committed_AS: 12729268 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235880 kB' 
'VmallocChunk: 0 kB' 'Percpu: 156096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4654452 kB' 'DirectMap2M: 29628416 kB' 'DirectMap1G: 101711872 kB' 00:04:03.375 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.375 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.375 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.375 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.375 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.375 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.375 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.375 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.375 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.375 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.375 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.375 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.375 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.375 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.375 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.375 
14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.375 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.375 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.375 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.375 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.375 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.375 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.375 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.375 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.375 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.375 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.375 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.375 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.375 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.375 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.375 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.375 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.375 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.375 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.375 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.375 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.375 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.376 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.376 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.376 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.376 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.376 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.376 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.376 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.376 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.376 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.376 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.376 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.376 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.376 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.376 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.376 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.376 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.376 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.376 14:59:55 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.376 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.376 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.376 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.376 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.376 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.376 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.376 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.376 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.376 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.376 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.376 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.376 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.376 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.376 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.376 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.376 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.376 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.376 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.376 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:03.376 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.376 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.376 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.376 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.376 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.376 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.376 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.376 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.376 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.376 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.376 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.376 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.376 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.376 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.376 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.376 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.376 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.376 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.376 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.376 14:59:55 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:03.376 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.376 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.376 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.376 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.376 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.376 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.376 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.376 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.376 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.376 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.376 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.376 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.376 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.376 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.376 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.376 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.376 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.376 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.376 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.376 
14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.376 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.376 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.376 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.376 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.376 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.376 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.376 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.376 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.376 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.376 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.376 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.376 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.376 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.376 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.376 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.376 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.376 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.376 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.376 14:59:55 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:04:03.376 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.376 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.377 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.377 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.377 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.377 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.377 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.377 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.377 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.377 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.377 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.377 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.377 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.377 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.377 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.377 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.377 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.377 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.377 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
00:04:03.377 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.377 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.377 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.377 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.377 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.377 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.377 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.377 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.377 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.377 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.377 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.377 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.377 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.377 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.377 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.377 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.377 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.377 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.377 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.377 14:59:55 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.377 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.377 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.377 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.377 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.377 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.377 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.377 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.377 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.377 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.377 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.377 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.377 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.377 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.377 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.377 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.377 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.377 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.377 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.377 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.377 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.377 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.377 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.377 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.377 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.377 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.377 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.377 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.377 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.377 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.377 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.377 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.377 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:03.377 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:03.377 14:59:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:03.377 14:59:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:04:03.377 nr_hugepages=1536 00:04:03.377 14:59:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:03.377 resv_hugepages=0 00:04:03.377 14:59:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:03.377 surplus_hugepages=0 00:04:03.377 14:59:55 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:03.377 anon_hugepages=0 00:04:03.377 14:59:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:03.377 14:59:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:04:03.377 14:59:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:03.377 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:03.377 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:03.377 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:03.377 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:03.377 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.377 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:03.377 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:03.377 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.377 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.377 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.377 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.377 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 103595976 kB' 'MemAvailable: 107313616 kB' 'Buffers: 2704 kB' 'Cached: 14776852 kB' 'SwapCached: 0 kB' 'Active: 11640292 kB' 'Inactive: 3693560 kB' 'Active(anon): 11160492 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3693560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 
kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 557576 kB' 'Mapped: 198936 kB' 'Shmem: 10606196 kB' 'KReclaimable: 585224 kB' 'Slab: 1468232 kB' 'SReclaimable: 585224 kB' 'SUnreclaim: 883008 kB' 'KernelStack: 27200 kB' 'PageTables: 8620 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985160 kB' 'Committed_AS: 12729288 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235944 kB' 'VmallocChunk: 0 kB' 'Percpu: 156096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4654452 kB' 'DirectMap2M: 29628416 kB' 'DirectMap1G: 101711872 kB' 00:04:03.377 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.377 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.377 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.377 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.377 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.377 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.377 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.377 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.377 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.377 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
00:04:03.377 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.377 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.377 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.377 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.377 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.378 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.378 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.378 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.378 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.378 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.378 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.378 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.378 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.378 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.378 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.378 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.378 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.378 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.378 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.378 14:59:55 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.378 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.378 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.378 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.378 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.378 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.378 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.378 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.378 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.378 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.378 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.378 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.378 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.378 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.378 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.378 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.378 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.378 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.378 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.378 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.378 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.378 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.378 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.378 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.378 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.378 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.378 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.378 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.378 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.378 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.378 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.378 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.378 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.378 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.378 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.378 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.378 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.378 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.378 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.378 14:59:55 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.378 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.378 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.378 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.378 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.378 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.378 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.378 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.378 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.378 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.378 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.378 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.378 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.378 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.378 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.378 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.378 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.378 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.378 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.378 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:04:03.378 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.378 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.378 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.378 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.378 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.378 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.378 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.378 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.378 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.378 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.378 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.378 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.378 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.378 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.378 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.378 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.378 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.378 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.378 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.378 14:59:55 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.378 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.378 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.378 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.378 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.378 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.378 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.378 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.378 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.378 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.378 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.378 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.378 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.378 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.378 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.378 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.378 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.378 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.378 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.378 14:59:55 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:04:03.378 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.378 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.378 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.378 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.378 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.378 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.378 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.378 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.379 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.379 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.379 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.379 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.379 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.379 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.379 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.379 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.379 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.379 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.379 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:04:03.379 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.379 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.379 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.379 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.379 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.379 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.379 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.379 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.379 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.379 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.379 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.379 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.379 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.379 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.379 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.379 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.379 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.379 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.379 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.379 
14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.379 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.379 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.379 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.379 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.379 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.379 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.379 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.379 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.379 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.379 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.379 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.379 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.379 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.379 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.379 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.379 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.379 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.379 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.379 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
[[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.379 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.379 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.379 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.379 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.379 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.379 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.379 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.379 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.379 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:04:03.379 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:03.379 14:59:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:03.379 14:59:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:03.379 14:59:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:04:03.379 14:59:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:03.379 14:59:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:03.379 14:59:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:03.379 14:59:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:03.379 14:59:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:03.379 14:59:55 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:03.379 14:59:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:03.379 14:59:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:03.379 14:59:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:03.379 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:03.379 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:04:03.379 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:03.379 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:03.379 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.379 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:03.379 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:03.379 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.379 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.379 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.379 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.379 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 59203976 kB' 'MemUsed: 6455032 kB' 'SwapCached: 0 kB' 'Active: 2530256 kB' 'Inactive: 237284 kB' 'Active(anon): 2290832 kB' 'Inactive(anon): 0 kB' 'Active(file): 239424 kB' 'Inactive(file): 237284 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2544100 kB' 
'Mapped: 84908 kB' 'AnonPages: 226636 kB' 'Shmem: 2067392 kB' 'KernelStack: 15432 kB' 'PageTables: 5080 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 271004 kB' 'Slab: 787264 kB' 'SReclaimable: 271004 kB' 'SUnreclaim: 516260 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:03.379 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.379 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.379 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.379 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.379 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.379 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.379 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.379 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.379 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.379 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.379 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.379 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.379 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.379 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.379 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:03.379 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.379 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.379 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.379 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.379 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.379 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.379 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.379 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.379 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.379 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.379 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.379 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.379 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.380 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.380 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.380 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.380 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.380 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.380 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.380 14:59:55 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.380 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.380 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.380 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.380 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.380 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.380 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.380 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.380 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.380 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.380 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.380 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.380 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.380 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.380 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.380 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.380 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.380 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.380 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.380 14:59:55 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:03.380 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.380 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.380 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.380 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.380 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.380 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.380 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.380 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.380 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.380 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.380 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.380 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.380 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.380 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.380 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.380 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.380 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.380 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.380 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.380 14:59:55 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.380 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.380 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.380 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.380 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.380 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.380 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.380 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.380 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.380 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.380 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.380 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.380 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.380 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.380 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.380 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.380 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.380 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.380 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.380 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.380 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.380 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.380 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.380 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.380 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.380 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.380 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.380 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.380 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.380 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.380 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.380 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.380 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.380 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.380 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.380 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.380 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.380 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.380 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.380 14:59:55 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.380 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.380 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.380 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.380 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.380 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.380 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.380 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.380 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.380 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.380 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.380 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.380 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.380 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.380 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.380 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.380 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.380 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.380 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.380 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:03.380 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.380 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.380 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.380 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.380 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.380 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.380 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.380 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.380 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.380 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.380 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.380 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.380 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.380 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:03.380 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:03.380 14:59:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:03.381 14:59:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:03.381 14:59:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:03.381 14:59:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # 
get_meminfo HugePages_Surp 1 00:04:03.381 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:03.381 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:04:03.381 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:03.381 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:03.381 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.381 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:03.381 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:03.381 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.381 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.381 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.381 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.381 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679836 kB' 'MemFree: 44391920 kB' 'MemUsed: 16287916 kB' 'SwapCached: 0 kB' 'Active: 9110572 kB' 'Inactive: 3456276 kB' 'Active(anon): 8870196 kB' 'Inactive(anon): 0 kB' 'Active(file): 240376 kB' 'Inactive(file): 3456276 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 12235480 kB' 'Mapped: 114088 kB' 'AnonPages: 331532 kB' 'Shmem: 8538828 kB' 'KernelStack: 11864 kB' 'PageTables: 3984 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 314220 kB' 'Slab: 680968 kB' 'SReclaimable: 314220 kB' 'SUnreclaim: 366748 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 
'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:03.381 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.381 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.381 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.381 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.381 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.381 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.381 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.381 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.381 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.381 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.381 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.381 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.381 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.381 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.381 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.381 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.381 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.381 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.381 14:59:55 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:03.381 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.381 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.381 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.381 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.381 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.381 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.381 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.381 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.381 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.381 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.381 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.381 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.381 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.381 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.381 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.381 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.381 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.381 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.381 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
00:04:03.381 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.381 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.381 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.381 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.381 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.381 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.381 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.381 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.381 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.381 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.381 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.381 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.381 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.381 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.381 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.381 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.381 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.381 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.643 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.643 14:59:55 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # continue 00:04:03.643 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.643 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.643 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.643 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.643 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.643 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.643 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.643 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.643 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.643 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.643 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.643 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.643 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.643 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.643 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.643 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.643 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.643 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.643 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.643 
14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.643 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.643 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.643 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.643 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.643 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.643 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.643 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.643 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.643 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.643 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.643 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.643 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.643 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.643 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.643 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.644 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.644 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.644 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.644 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.644 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.644 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.644 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.644 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.644 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.644 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.644 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.644 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.644 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.644 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.644 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.644 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.644 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.644 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.644 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.644 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.644 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.644 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.644 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.644 14:59:55 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.644 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.644 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.644 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.644 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.644 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.644 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.644 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.644 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.644 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.644 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.644 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.644 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.644 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.644 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.644 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.644 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.644 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.644 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.644 14:59:55 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:03.644 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.644 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.644 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.644 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.644 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.644 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.644 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.644 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.644 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.644 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:03.644 14:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:03.644 14:59:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:03.644 14:59:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:03.644 14:59:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:03.644 14:59:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:03.644 14:59:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:03.644 node0=512 expecting 512 00:04:03.644 14:59:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:03.644 14:59:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # 
sorted_t[nodes_test[node]]=1 00:04:03.644 14:59:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:03.644 14:59:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:04:03.644 node1=1024 expecting 1024 00:04:03.644 14:59:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:04:03.644 00:04:03.644 real 0m3.884s 00:04:03.644 user 0m1.538s 00:04:03.644 sys 0m2.401s 00:04:03.644 14:59:55 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:03.644 14:59:55 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:03.644 ************************************ 00:04:03.644 END TEST custom_alloc 00:04:03.644 ************************************ 00:04:03.644 14:59:55 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:03.644 14:59:55 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:03.644 14:59:55 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:03.644 14:59:55 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:03.644 ************************************ 00:04:03.644 START TEST no_shrink_alloc 00:04:03.644 ************************************ 00:04:03.644 14:59:55 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1125 -- # no_shrink_alloc 00:04:03.644 14:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:03.644 14:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:03.644 14:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:03.644 14:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:04:03.644 14:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # 
node_ids=('0') 00:04:03.644 14:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:03.644 14:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:03.644 14:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:03.644 14:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:03.644 14:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:03.644 14:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:03.644 14:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:03.644 14:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:03.644 14:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:03.644 14:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:03.644 14:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:03.644 14:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:03.644 14:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:03.644 14:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:03.644 14:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:04:03.644 14:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:03.644 14:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:06.946 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:06.946 0000:80:01.7 (8086 0b00): 
Already using the vfio-pci driver 00:04:06.946 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:06.946 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:06.946 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:06.946 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:06.946 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:06.946 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:06.946 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:06.946 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:04:06.946 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:06.946 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:06.946 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:06.946 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:06.946 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:06.946 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:06.946 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:07.212 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:07.212 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:07.212 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:07.212 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:07.212 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:07.212 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:07.212 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:07.212 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:07.212 14:59:59 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:07.212 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:07.212 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:07.212 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:07.212 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:07.212 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:07.212 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:07.212 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:07.212 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:07.212 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:07.212 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.212 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.212 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 104659628 kB' 'MemAvailable: 108377268 kB' 'Buffers: 2704 kB' 'Cached: 14776988 kB' 'SwapCached: 0 kB' 'Active: 11641556 kB' 'Inactive: 3693560 kB' 'Active(anon): 11161756 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3693560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 558708 kB' 'Mapped: 199044 kB' 'Shmem: 10606332 kB' 'KReclaimable: 585224 kB' 'Slab: 1468372 kB' 'SReclaimable: 585224 kB' 'SUnreclaim: 883148 kB' 'KernelStack: 27296 kB' 'PageTables: 8864 kB' 'SecPageTables: 0 kB' 
'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 12732068 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 236072 kB' 'VmallocChunk: 0 kB' 'Percpu: 156096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4654452 kB' 'DirectMap2M: 29628416 kB' 'DirectMap1G: 101711872 kB' 00:04:07.212 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.212 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.212 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.212 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.212 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.212 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.212 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.212 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.212 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.212 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.212 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.212 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.212 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:04:07.212 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.212 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.212 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.212 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.212 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.212 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.212 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.212 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.212 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.212 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.212 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.212 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.212 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.213 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.213 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.213 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.213 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.213 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.213 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.213 14:59:59 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.213 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.213 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.213 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.213 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.213 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.213 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.213 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.213 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.213 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.213 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.213 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.213 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.213 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.213 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.213 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.213 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.213 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.213 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.213 
14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.213 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.213 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.213 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.213 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.213 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.213 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.213 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.213 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.213 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.213 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.213 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.213 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.213 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.213 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.213 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.213 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.213 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.213 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.213 14:59:59 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.213 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.213 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.213 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.213 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.213 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.213 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.213 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.213 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.213 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.213 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.213 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.213 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.213 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.213 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.213 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.213 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.213 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.213 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.213 14:59:59 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.213 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.213 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.213 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.213 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.213 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.213 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.213 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.213 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.213 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.213 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.214 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.214 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.214 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.214 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.214 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.214 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.214 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.214 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.214 14:59:59 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.214 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.214 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.214 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.214 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.214 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.214 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.214 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.214 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.214 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.214 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.214 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.214 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.214 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.214 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.214 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.214 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.214 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.214 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.214 14:59:59 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:07.214 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.214 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.214 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.214 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.214 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.214 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.214 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.214 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.214 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.214 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.214 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.214 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.214 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.214 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.214 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.214 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.214 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.214 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.214 14:59:59 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.214 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.214 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.214 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.214 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.214 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.214 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.214 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.214 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.214 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.214 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.214 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.214 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.214 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.214 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.214 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:07.214 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:07.214 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:07.214 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:07.214 14:59:59 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:07.214 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:07.214 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:07.214 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:07.214 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:07.214 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:07.214 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:07.214 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:07.214 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:07.214 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.214 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.215 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 104659264 kB' 'MemAvailable: 108376904 kB' 'Buffers: 2704 kB' 'Cached: 14776988 kB' 'SwapCached: 0 kB' 'Active: 11641828 kB' 'Inactive: 3693560 kB' 'Active(anon): 11162028 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3693560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 559036 kB' 'Mapped: 199024 kB' 'Shmem: 10606332 kB' 'KReclaimable: 585224 kB' 'Slab: 1468332 kB' 'SReclaimable: 585224 kB' 'SUnreclaim: 883108 kB' 'KernelStack: 27328 kB' 'PageTables: 9256 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 12732084 kB' 
'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 236104 kB' 'VmallocChunk: 0 kB' 'Percpu: 156096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4654452 kB' 'DirectMap2M: 29628416 kB' 'DirectMap1G: 101711872 kB' 00:04:07.215 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.215 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.215 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.215 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.215 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.215 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.215 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.215 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.215 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.215 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.215 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.215 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.215 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.215 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.215 
14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.215 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.215 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.215 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.215 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.215 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.215 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.215 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.215 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.215 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.215 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.215 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.215 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.215 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.215 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.215 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.215 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.215 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.215 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:07.215 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.215 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.215 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.215 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.215 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.215 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.215 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.215 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.215 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.215 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.215 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.215 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.215 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.215 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.215 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.215 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.215 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.215 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.215 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.215 
14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.215 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.215 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.215 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.215 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.215 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.215 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.215 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.215 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.215 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.215 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.215 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.215 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.215 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.215 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.216 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.216 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.216 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.216 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.216 14:59:59 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.216 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.216 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.216 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.216 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.216 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.216 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.216 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.216 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.216 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.216 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.216 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.216 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.216 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.216 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.216 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.216 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.216 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.216 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.216 14:59:59 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.216 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.216 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.216 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.216 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.216 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.216 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.216 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.216 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.216 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.216 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.216 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.216 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.216 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.216 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.216 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.216 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.216 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.216 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:07.216 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.216 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.216 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.216 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.216 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.216 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.216 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.216 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.216 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.216 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.216 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.216 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.216 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.216 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.216 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.216 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.216 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.216 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.216 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.216 14:59:59 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.216 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.216 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.216 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.216 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.216 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.216 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.216 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.216 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.216 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.216 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.216 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.216 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.216 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.217 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.217 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.217 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.217 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.217 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:07.217 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.217 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.217 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.217 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.217 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.217 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.217 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.217 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.217 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.217 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.217 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.217 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.217 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.217 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.217 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.217 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.217 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.217 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.217 14:59:59 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:07.217 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.217 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.217 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.217 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.217 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.217 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.217 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.217 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.217 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.217 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.217 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.217 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.217 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.217 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.217 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.217 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.217 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.217 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.217 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.217 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.217 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.217 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.217 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.217 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.217 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.217 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.217 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.217 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.217 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.217 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.217 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.217 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.217 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.217 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.217 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.217 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.217 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.217 14:59:59 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:07.217 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.217 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:07.217 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:07.217 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:07.217 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:07.217 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:07.217 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:07.217 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:07.217 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:07.217 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:07.217 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:07.217 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:07.217 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:07.217 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:07.217 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.218 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.218 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 104659616 kB' 'MemAvailable: 108377256 kB' 'Buffers: 2704 kB' 'Cached: 14776988 kB' 'SwapCached: 0 kB' 'Active: 11641420 kB' 'Inactive: 3693560 kB' 
'Active(anon): 11161620 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3693560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 558632 kB' 'Mapped: 199024 kB' 'Shmem: 10606332 kB' 'KReclaimable: 585224 kB' 'Slab: 1468384 kB' 'SReclaimable: 585224 kB' 'SUnreclaim: 883160 kB' 'KernelStack: 27168 kB' 'PageTables: 8828 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 12730380 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235992 kB' 'VmallocChunk: 0 kB' 'Percpu: 156096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4654452 kB' 'DirectMap2M: 29628416 kB' 'DirectMap1G: 101711872 kB' 00:04:07.218 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.218 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.218 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.218 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.218 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.218 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.218 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.218 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.218 14:59:59 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.218 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.218 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.218 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.218 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.218 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.218 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.218 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.218 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.218 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.218 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.218 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.218 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.218 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.218 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.218 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.218 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.218 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.218 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.218 14:59:59 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:07.218 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.218 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.218 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.218 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.218 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.218 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.218 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.218 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.218 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.218 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.218 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.218 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.218 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.218 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.218 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.218 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.218 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.218 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.218 14:59:59 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.218 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.218 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.218 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.218 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.218 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.218 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.218 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.218 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.218 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.218 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.218 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.218 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.218 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.218 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.218 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.218 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.218 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.218 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.218 
14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.218 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.219 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.219 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.219 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.219 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.219 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.219 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.219 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.219 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.219 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.219 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.219 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.219 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.219 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.219 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.219 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.219 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.219 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.219 14:59:59 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.219 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.219 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.219 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.219 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.219 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.219 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.219 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.219 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.219 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.219 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.219 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.219 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.219 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.219 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.219 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.219 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.219 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.219 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.219 14:59:59 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.219 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.219 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.219 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.219 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.219 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.219 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.219 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.219 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.219 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.219 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.219 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.219 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.219 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.219 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.219 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.219 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.219 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.219 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:04:07.219 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.219 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.219 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.219 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.219 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.219 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.219 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.220 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.220 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.220 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.220 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.220 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.220 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.220 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.220 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.220 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.220 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.220 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.220 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.220 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.220 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.220 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.220 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.220 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.220 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.220 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.220 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.220 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.220 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.220 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.220 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.220 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.220 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.220 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.220 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.220 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.220 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.220 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:07.220 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.220 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.220 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.220 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.220 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.220 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.220 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.220 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.220 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.220 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.220 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.220 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.220 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.220 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.220 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.220 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.220 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.220 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.220 14:59:59 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:07.220 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.220 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.220 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.220 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.220 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.220 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.220 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.220 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.220 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.220 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.220 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.220 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.220 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.220 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.220 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.220 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.220 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.220 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.220 14:59:59 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.220 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.220 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.220 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.220 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:07.220 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:07.220 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:07.220 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:07.220 nr_hugepages=1024 00:04:07.220 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:07.220 resv_hugepages=0 00:04:07.220 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:07.220 surplus_hugepages=0 00:04:07.220 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:07.220 anon_hugepages=0 00:04:07.220 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:07.220 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:07.220 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:07.220 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:07.220 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:07.220 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:07.220 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # 
local mem_f mem 00:04:07.220 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:07.220 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:07.220 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:07.220 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:07.220 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:07.220 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.220 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.221 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 104659148 kB' 'MemAvailable: 108376788 kB' 'Buffers: 2704 kB' 'Cached: 14777028 kB' 'SwapCached: 0 kB' 'Active: 11641808 kB' 'Inactive: 3693560 kB' 'Active(anon): 11162008 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3693560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 559008 kB' 'Mapped: 199024 kB' 'Shmem: 10606372 kB' 'KReclaimable: 585224 kB' 'Slab: 1468384 kB' 'SReclaimable: 585224 kB' 'SUnreclaim: 883160 kB' 'KernelStack: 27264 kB' 'PageTables: 8896 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 12732128 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 236104 kB' 'VmallocChunk: 0 kB' 'Percpu: 156096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 
'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4654452 kB' 'DirectMap2M: 29628416 kB' 'DirectMap1G: 101711872 kB' 00:04:07.221 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.221 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.221 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.221 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.221 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.221 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.221 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.221 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.221 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.221 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.221 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.221 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.221 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.221 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.221 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.221 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.221 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.221 14:59:59 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.221 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.221 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.221 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.221 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.221 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.221 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.221 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.221 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.221 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.221 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.221 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.221 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.221 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.221 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.221 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.221 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.221 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.221 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.221 14:59:59 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.221 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.221 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.221 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.221 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.221 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.221 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.221 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.221 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.221 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.221 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.221 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.221 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.221 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.221 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.221 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.221 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.221 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.221 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:07.221 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.221 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.221 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.221 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.221 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.221 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.221 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.221 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.221 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.221 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.221 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.221 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.221 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.221 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.221 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.221 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.221 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.221 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.221 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:04:07.221 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.221 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.221 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.221 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.221 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.221 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.221 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.221 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.221 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.221 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.221 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.221 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.221 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.221 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.221 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.221 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.221 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.221 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.221 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.221 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.221 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.221 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.221 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.221 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.221 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.222 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.222 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.222 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.222 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.222 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.222 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.222 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.222 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.222 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.222 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.222 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.222 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.222 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:04:07.222 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.222 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.222 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.222 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.222 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.222 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.222 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.222 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.222 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.222 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.222 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.222 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.222 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.222 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.222 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.222 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.222 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.222 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.222 14:59:59 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:07.222 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.222 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.222 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.222 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.222 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.222 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.222 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.222 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.222 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.222 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.222 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.222 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.222 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.222 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.222 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.222 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.222 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.222 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.222 14:59:59 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.222 14:59:59 [... identical IFS=': ' / read -r / continue trace repeated for each remaining /proc/meminfo field (Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted) until the requested field matches ...] 00:04:07.222 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.222 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:07.222 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:07.222 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:07.222 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:07.485 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:07.485 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:07.485 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:07.485 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:07.485 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:07.485 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:07.485 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:07.485 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in
"${!nodes_test[@]}" 00:04:07.485 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:07.485 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:07.485 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:07.485 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:07.485 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:07.485 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:07.485 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:07.485 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:07.485 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:07.485 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:07.485 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:07.485 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.485 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.486 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 58162324 kB' 'MemUsed: 7496684 kB' 'SwapCached: 0 kB' 'Active: 2532268 kB' 'Inactive: 237284 kB' 'Active(anon): 2292844 kB' 'Inactive(anon): 0 kB' 'Active(file): 239424 kB' 'Inactive(file): 237284 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2544236 kB' 'Mapped: 84908 kB' 'AnonPages: 228552 kB' 'Shmem: 2067528 kB' 'KernelStack: 15416 kB' 'PageTables: 5128 kB' 'SecPageTables: 0 kB' 
'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 271004 kB' 'Slab: 787240 kB' 'SReclaimable: 271004 kB' 'SUnreclaim: 516236 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:07.486 14:59:59 [... identical IFS=': ' / read -r / continue trace repeated for each node0 meminfo field (MemTotal through HugePages_Free) while scanning for HugePages_Surp ...] 00:04:07.487 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.487 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:07.487 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:07.487 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 --
# (( nodes_test[node] += 0 )) 00:04:07.487 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:07.487 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:07.487 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:07.487 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:07.487 node0=1024 expecting 1024 00:04:07.487 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:07.487 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:07.487 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:07.487 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:04:07.487 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:07.487 14:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:10.784 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:10.784 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:10.784 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:10.784 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:10.784 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:10.784 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:10.784 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:10.784 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:10.784 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:10.784 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:04:10.784 0000:00:01.7 (8086 0b00): Already using the 
vfio-pci driver 00:04:10.784 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:10.784 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:10.784 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:10.784 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:10.784 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:10.784 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:11.048 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:11.048 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:04:11.048 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:11.048 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:11.048 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:11.048 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:11.048 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:11.048 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:11.048 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:11.048 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:11.048 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:11.048 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:11.048 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:11.048 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:11.048 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 
00:04:11.048 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:11.048 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:11.048 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:11.048 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:11.048 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.048 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.048 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 104600756 kB' 'MemAvailable: 108318396 kB' 'Buffers: 2704 kB' 'Cached: 14777140 kB' 'SwapCached: 0 kB' 'Active: 11649700 kB' 'Inactive: 3693560 kB' 'Active(anon): 11169900 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3693560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 566672 kB' 'Mapped: 199920 kB' 'Shmem: 10606484 kB' 'KReclaimable: 585224 kB' 'Slab: 1468516 kB' 'SReclaimable: 585224 kB' 'SUnreclaim: 883292 kB' 'KernelStack: 27344 kB' 'PageTables: 8864 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 12740512 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 236236 kB' 'VmallocChunk: 0 kB' 'Percpu: 156096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4654452 kB' 'DirectMap2M: 29628416 kB' 
'DirectMap1G: 101711872 kB' 00:04:11.048 15:00:03 [... identical IFS=': ' / read -r / continue trace repeated for each /proc/meminfo field (MemTotal through PageTables) while scanning for AnonHugePages ...] 00:04:11.049 15:00:03
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.049 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.049 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.049 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.049 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.049 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.049 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.049 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.049 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.049 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.049 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.049 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.049 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.049 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.049 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.049 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.049 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.049 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.049 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.049 
15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.049 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.049 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.049 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.050 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.050 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.050 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.050 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.050 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.050 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.050 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.050 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.050 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.050 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.050 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.050 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.050 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.050 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.050 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.050 15:00:03 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.050 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.050 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.050 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.050 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.050 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.050 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.050 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.050 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.050 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:11.050 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:11.050 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:11.050 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:11.050 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:11.050 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:11.050 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:11.050 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:11.050 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:11.050 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:11.050 
15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:11.050 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:11.050 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:11.050 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.050 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.050 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 104606616 kB' 'MemAvailable: 108324256 kB' 'Buffers: 2704 kB' 'Cached: 14777144 kB' 'SwapCached: 0 kB' 'Active: 11649840 kB' 'Inactive: 3693560 kB' 'Active(anon): 11170040 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3693560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 566848 kB' 'Mapped: 199908 kB' 'Shmem: 10606488 kB' 'KReclaimable: 585224 kB' 'Slab: 1468492 kB' 'SReclaimable: 585224 kB' 'SUnreclaim: 883268 kB' 'KernelStack: 27344 kB' 'PageTables: 9188 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 12742264 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 236188 kB' 'VmallocChunk: 0 kB' 'Percpu: 156096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4654452 kB' 'DirectMap2M: 29628416 kB' 'DirectMap1G: 101711872 kB' 00:04:11.050 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.050 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.050 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.050 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.050 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.050 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.050 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.050 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.050 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.050 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.050 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.050 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.050 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.050 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.050 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.050 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.050 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.050 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.050 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.050 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:11.050 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.050 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.050 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.050 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.050 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.050 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.050 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.050 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.050 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.050 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.050 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.050 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.050 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.050 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.050 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.050 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.050 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.050 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.050 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:11.050 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.050 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.050 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.050 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.050 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.050 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.050 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.050 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.050 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.050 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.050 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.050 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.050 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.050 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.050 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.050 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.050 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.050 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.050 15:00:03 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:11.050 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.050 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.051 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.051 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.051 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.051 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.051 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.051 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.051 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.051 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.051 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.051 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.051 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.051 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.051 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.051 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.051 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.051 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.051 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.051 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.051 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.051 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.051 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.051 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.051 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.051 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.051 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.051 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.051 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.051 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.051 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.051 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.051 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.051 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.051 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.051 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.051 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.051 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:11.051 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.051 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.051 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.051 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.051 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.051 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.051 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.051 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.051 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.051 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.051 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.051 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.051 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.051 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.051 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.051 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.051 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.051 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.051 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:11.051 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.051 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.051 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.051 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.051 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.051 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.051 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.051 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.051 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.051 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.051 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.051 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.051 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.051 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.051 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.051 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.051 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.051 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.051 15:00:03 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:11.051 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.051 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.051 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.051 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.051 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.051 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.051 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.051 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.051 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.051 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.051 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.051 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.051 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.051 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.051 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.051 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.051 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.051 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.051 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.051 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.051 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.051 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.051 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.051 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.051 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.051 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.051 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.051 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.051 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.051 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.051 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.051 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.051 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.051 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.051 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.051 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.051 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.051 15:00:03 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:11.051 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.052 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.052 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.052 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.052 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.052 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.052 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.052 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.052 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.052 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.052 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.052 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.052 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.052 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.052 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.052 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.052 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.052 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.052 15:00:03 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.052 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.052 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.052 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.052 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.052 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.052 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.052 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.052 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.052 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.052 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.052 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.052 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.052 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.052 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.052 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:11.052 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:11.052 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:11.052 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:11.052 
15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:11.052 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:11.052 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:11.052 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:11.052 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:11.052 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:11.052 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:11.052 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:11.052 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:11.052 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.052 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.052 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 104606136 kB' 'MemAvailable: 108323776 kB' 'Buffers: 2704 kB' 'Cached: 14777164 kB' 'SwapCached: 0 kB' 'Active: 11649748 kB' 'Inactive: 3693560 kB' 'Active(anon): 11169948 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3693560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 566736 kB' 'Mapped: 199908 kB' 'Shmem: 10606508 kB' 'KReclaimable: 585224 kB' 'Slab: 1468492 kB' 'SReclaimable: 585224 kB' 'SUnreclaim: 883268 kB' 'KernelStack: 27088 kB' 'PageTables: 8476 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 
12739304 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 236092 kB' 'VmallocChunk: 0 kB' 'Percpu: 156096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4654452 kB' 'DirectMap2M: 29628416 kB' 'DirectMap1G: 101711872 kB' 00:04:11.052 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.052 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.052 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.052 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.052 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.052 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.052 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.052 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.052 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.052 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.052 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.052 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.052 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.052 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:04:11.052 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.052 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.052 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.052 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.052 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.052 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.052 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.052 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.052 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.052 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.052 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.052 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.052 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.052 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.052 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.052 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.052 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.052 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.052 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.052 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.052 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.052 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.052 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.052 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.052 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.052 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.052 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.052 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.052 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.052 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.052 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.052 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.052 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.052 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.053 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.053 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.053 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.053 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:11.053 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.053 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.053 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.053 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.053 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.053 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.053 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.053 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.053 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.053 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.053 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.053 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.053 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.053 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.053 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.053 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.053 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.053 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.053 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:11.053 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.053 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.053 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.053 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.053 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.053 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.053 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.053 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.053 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.053 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.053 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.053 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.053 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.053 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.053 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.053 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.053 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.053 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.053 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:04:11.053 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.053 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.053 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.053 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.053 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.053 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.053 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.053 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.053 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.053 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.053 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.053 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.053 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.053 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.053 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.053 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.053 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.053 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.053 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.053 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.053 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.053 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.053 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.053 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.053 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.053 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.053 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.053 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.053 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.053 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.053 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.053 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.053 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.053 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.053 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.053 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.053 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.053 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:11.053 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.053 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.053 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.053 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.053 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.053 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.053 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.053 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.053 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.053 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.054 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.054 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.054 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.054 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.054 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.054 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.054 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.054 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.054 15:00:03 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:11.054 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.054 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.054 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.054 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.054 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.054 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.054 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.054 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.054 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.054 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.054 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.054 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.054 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.054 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.054 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.054 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.054 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.054 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.054 15:00:03 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.054 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.054 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.054 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.054 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.054 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.054 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.054 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.054 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.054 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.054 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.054 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.054 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.054 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.054 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.054 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.054 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.054 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.054 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.054 15:00:03 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.054 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.054 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.054 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.054 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.054 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.054 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.054 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.054 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.054 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.054 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.054 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.054 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.054 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.054 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.054 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.054 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.054 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:11.054 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 
00:04:11.054 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:11.054 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:11.054 nr_hugepages=1024 00:04:11.054 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:11.054 resv_hugepages=0 00:04:11.054 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:11.054 surplus_hugepages=0 00:04:11.054 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:11.054 anon_hugepages=0 00:04:11.054 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:11.054 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:11.054 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:11.054 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:11.054 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:11.054 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:11.054 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:11.054 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:11.054 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:11.054 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:11.054 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:11.054 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:11.054 15:00:03 
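The xtrace above repeats one pattern over and over: `get_meminfo` splits each /proc/meminfo line on `IFS=': '`, skips every key that does not match (the `\H\u\g\e\P\a\g\e\s\_\S\u\r\p` backslashes are just bash's xtrace quoting of the literal pattern), and echoes the value of the requested key. A minimal sketch of that loop, assuming meminfo-style input on stdin; the function name is illustrative, not the exact SPDK implementation:

```shell
# Hedged sketch of the lookup loop traced above. IFS=': ' treats both the
# colon and spaces as delimiters, so "MemTotal: 126338844 kB" splits into
# var=MemTotal, val=126338844, with the "kB" unit falling into the throwaway
# third field.
get_meminfo_sketch() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # non-matching key: next line
        echo "$val"                        # matching key: print bare value
        return 0
    done
    return 1                               # key not present in input
}
```

Usage mirrors the trace: `get_meminfo_sketch HugePages_Surp < /proc/meminfo` prints `0` on a system with no surplus hugepages, which is the `surp=0` result the log records.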
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.054 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.054 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 104606224 kB' 'MemAvailable: 108323864 kB' 'Buffers: 2704 kB' 'Cached: 14777164 kB' 'SwapCached: 0 kB' 'Active: 11649456 kB' 'Inactive: 3693560 kB' 'Active(anon): 11169656 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3693560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 566492 kB' 'Mapped: 199856 kB' 'Shmem: 10606508 kB' 'KReclaimable: 585224 kB' 'Slab: 1468396 kB' 'SReclaimable: 585224 kB' 'SUnreclaim: 883172 kB' 'KernelStack: 27184 kB' 'PageTables: 8840 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 12739324 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 236092 kB' 'VmallocChunk: 0 kB' 'Percpu: 156096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4654452 kB' 'DirectMap2M: 29628416 kB' 'DirectMap1G: 101711872 kB' 00:04:11.054 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.054 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.054 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.054 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.054 15:00:03 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.054 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.054 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.054 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.054 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.054 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.054 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.054 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.054 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.054 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.055 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.055 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.055 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.055 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.055 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.055 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.055 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.055 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.055 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.055 
15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:11.055 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:11.055 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:04:11.055 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:11.055 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
[identical IFS=': ' / read / [[ field == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] / continue xtrace repeated for each remaining /proc/meminfo field: Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted]
00:04:11.056 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:11.056 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:04:11.056 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:11.056 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:11.056 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:11.056 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:04:11.056 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:11.056 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:11.056 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:11.056 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:04:11.056 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:11.056 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:11.056 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:11.056 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:11.056 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:11.056 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:11.056 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:04:11.056 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:11.056 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:11.056 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:11.056 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:11.056 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:11.056 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:11.056 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:11.056 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:11.056 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:11.056 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 58110440 kB' 'MemUsed: 7548568 kB' 'SwapCached: 0 kB' 'Active: 2540112 kB' 'Inactive: 237284 kB' 'Active(anon): 2300688 kB' 'Inactive(anon): 0 kB' 'Active(file): 239424 kB' 'Inactive(file): 237284 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2544372 kB' 'Mapped: 85008 kB' 'AnonPages: 236304 kB' 'Shmem: 2067664 kB' 'KernelStack: 15496 kB' 'PageTables: 5348 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 271004 kB' 'Slab: 787200 kB' 'SReclaimable: 271004 kB' 'SUnreclaim: 516196 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[identical IFS=': ' / read / [[ field == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue xtrace repeated for each node0 meminfo field printed above, MemTotal through HugePages_Free]
00:04:11.058 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.058 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:11.058 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:11.058 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:11.058 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:11.058 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:11.058 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:11.058 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:11.058 node0=1024 expecting 1024
00:04:11.058 15:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:11.058
00:04:11.058 real	0m7.574s
00:04:11.058 user	0m2.944s
00:04:11.058 sys	0m4.748s
00:04:11.058 15:00:03 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:11.058 15:00:03 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x
00:04:11.058 ************************************
00:04:11.058 END TEST no_shrink_alloc
00:04:11.058 ************************************
00:04:11.330 15:00:03 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp
00:04:11.330 15:00:03 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp
00:04:11.330 15:00:03 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:04:11.330 15:00:03 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:11.330 15:00:03 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:04:11.330 15:00:03 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:11.330 15:00:03 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:04:11.330 15:00:03 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:04:11.330 15:00:03 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:11.330 15:00:03 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:04:11.330 15:00:03 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:11.330 15:00:03 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:04:11.330 15:00:03 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:04:11.330 15:00:03 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:04:11.330
00:04:11.330 real	0m27.412s
00:04:11.330 user	0m10.814s
00:04:11.330 sys	0m16.945s
00:04:11.330 15:00:03 setup.sh.hugepages -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:11.330 15:00:03 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:11.330 ************************************
00:04:11.330 END TEST hugepages
00:04:11.330 ************************************
00:04:11.330 15:00:03 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh
00:04:11.330 15:00:03 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:11.330 15:00:03 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:11.330 15:00:03 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:04:11.330 ************************************
00:04:11.331 START TEST driver
00:04:11.331 ************************************
00:04:11.331 15:00:03 setup.sh.driver -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh
00:04:11.331 * Looking for test storage...
00:04:11.331 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:11.331 15:00:03 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:04:11.331 15:00:03 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:11.331 15:00:03 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:16.623 15:00:08 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:16.623 15:00:08 setup.sh.driver -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:16.623 15:00:08 setup.sh.driver -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:16.623 15:00:08 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:16.623 ************************************ 00:04:16.623 START TEST guess_driver 00:04:16.623 ************************************ 00:04:16.623 15:00:08 setup.sh.driver.guess_driver -- common/autotest_common.sh@1125 -- # guess_driver 00:04:16.623 15:00:08 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:16.623 15:00:08 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:04:16.623 15:00:08 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:04:16.623 15:00:08 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:04:16.623 15:00:08 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:04:16.623 15:00:08 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:16.623 15:00:08 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:16.623 15:00:08 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:04:16.623 15:00:08 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:16.623 15:00:08 setup.sh.driver.guess_driver -- setup/driver.sh@29 
-- # (( 314 > 0 )) 00:04:16.623 15:00:08 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:04:16.623 15:00:08 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:04:16.623 15:00:08 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:04:16.623 15:00:08 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:04:16.623 15:00:08 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:04:16.623 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:16.623 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:16.623 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:16.623 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:16.623 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:04:16.623 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:04:16.623 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:04:16.623 15:00:08 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:04:16.623 15:00:08 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:04:16.623 15:00:08 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:04:16.623 15:00:08 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:16.623 15:00:08 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:04:16.623 Looking for driver=vfio-pci 00:04:16.623 15:00:08 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:16.623 15:00:08 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup 
output config 00:04:16.623 15:00:08 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:04:16.623 15:00:08 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:19.925 15:00:12 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:19.925 15:00:12 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:19.925 15:00:12 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:19.925 15:00:12 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:19.925 15:00:12 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:19.925 15:00:12 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:19.925 15:00:12 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:19.925 15:00:12 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:19.925 15:00:12 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:19.925 15:00:12 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:19.925 15:00:12 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:19.925 15:00:12 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:19.925 15:00:12 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:19.925 15:00:12 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:19.925 15:00:12 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:19.925 15:00:12 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:19.925 15:00:12 setup.sh.driver.guess_driver -- 
setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:19.926 15:00:12 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:19.926 15:00:12 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:19.926 15:00:12 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:19.926 15:00:12 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:19.926 15:00:12 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:19.926 15:00:12 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:19.926 15:00:12 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:19.926 15:00:12 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:19.926 15:00:12 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:19.926 15:00:12 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:20.187 15:00:12 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:20.187 15:00:12 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:20.187 15:00:12 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:20.187 15:00:12 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:20.187 15:00:12 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:20.187 15:00:12 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:20.187 15:00:12 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:20.187 15:00:12 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:20.187 15:00:12 setup.sh.driver.guess_driver -- 
setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:20.187 15:00:12 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:20.187 15:00:12 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:20.187 15:00:12 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:20.187 15:00:12 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:20.187 15:00:12 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:20.187 15:00:12 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:20.187 15:00:12 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:20.187 15:00:12 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:20.187 15:00:12 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:20.187 15:00:12 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:20.187 15:00:12 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:20.187 15:00:12 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:20.187 15:00:12 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:20.187 15:00:12 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:20.187 15:00:12 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:20.499 15:00:12 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:20.499 15:00:12 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:20.499 15:00:12 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:20.499 15:00:12 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:25.809 00:04:25.809 real 0m8.911s 00:04:25.809 user 0m2.969s 00:04:25.809 sys 0m5.158s 00:04:25.809 15:00:17 setup.sh.driver.guess_driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:25.809 15:00:17 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:25.809 ************************************ 00:04:25.809 END TEST guess_driver 00:04:25.809 ************************************ 00:04:25.809 00:04:25.809 real 0m14.144s 00:04:25.809 user 0m4.539s 00:04:25.809 sys 0m8.028s 00:04:25.809 15:00:17 setup.sh.driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:25.809 15:00:17 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:25.809 ************************************ 00:04:25.809 END TEST driver 00:04:25.809 ************************************ 00:04:25.809 15:00:17 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:25.809 15:00:17 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:25.809 15:00:17 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:25.809 15:00:17 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:25.809 ************************************ 00:04:25.809 START TEST devices 00:04:25.809 ************************************ 00:04:25.809 15:00:17 setup.sh.devices -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:25.809 * Looking for test storage... 
00:04:25.809 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:25.809 15:00:17 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:25.809 15:00:17 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:25.809 15:00:17 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:25.809 15:00:17 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:30.016 15:00:21 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:30.016 15:00:21 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:30.016 15:00:21 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:30.016 15:00:21 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:30.016 15:00:21 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:30.016 15:00:21 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:30.016 15:00:21 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:30.016 15:00:21 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:30.016 15:00:21 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:30.016 15:00:21 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:30.016 15:00:21 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:30.016 15:00:21 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:30.016 15:00:21 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:30.016 15:00:21 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:30.016 15:00:21 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:30.016 15:00:21 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 
00:04:30.016 15:00:21 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:30.016 15:00:21 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:65:00.0 00:04:30.016 15:00:21 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\6\5\:\0\0\.\0* ]] 00:04:30.016 15:00:21 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:30.016 15:00:21 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:04:30.016 15:00:21 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:04:30.016 No valid GPT data, bailing 00:04:30.016 15:00:21 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:30.016 15:00:21 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:30.016 15:00:21 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:30.016 15:00:21 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:30.016 15:00:21 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:30.016 15:00:21 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:30.016 15:00:21 setup.sh.devices -- setup/common.sh@80 -- # echo 1920383410176 00:04:30.016 15:00:21 setup.sh.devices -- setup/devices.sh@204 -- # (( 1920383410176 >= min_disk_size )) 00:04:30.016 15:00:21 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:30.016 15:00:21 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:65:00.0 00:04:30.016 15:00:21 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:30.016 15:00:21 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:30.016 15:00:21 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:30.016 15:00:21 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:30.016 15:00:21 setup.sh.devices -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:04:30.016 15:00:21 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:30.016 ************************************ 00:04:30.016 START TEST nvme_mount 00:04:30.016 ************************************ 00:04:30.016 15:00:21 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1125 -- # nvme_mount 00:04:30.016 15:00:21 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:30.016 15:00:21 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:30.017 15:00:21 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:30.017 15:00:21 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:30.017 15:00:21 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:30.017 15:00:21 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:30.017 15:00:21 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:30.017 15:00:21 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:30.017 15:00:21 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:30.017 15:00:21 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:30.017 15:00:21 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:30.017 15:00:21 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:30.017 15:00:21 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:30.017 15:00:21 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:30.017 15:00:21 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:30.017 15:00:21 
setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:30.017 15:00:21 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:30.017 15:00:21 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:30.017 15:00:21 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:30.958 Creating new GPT entries in memory. 00:04:30.958 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:30.958 other utilities. 00:04:30.958 15:00:22 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:30.958 15:00:22 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:30.958 15:00:22 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:30.958 15:00:22 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:30.958 15:00:22 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:31.899 Creating new GPT entries in memory. 00:04:31.899 The operation has completed successfully. 
00:04:31.899 15:00:23 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:31.899 15:00:23 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:31.899 15:00:23 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 16004 00:04:31.899 15:00:23 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:31.899 15:00:23 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:31.899 15:00:23 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:31.899 15:00:23 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:31.899 15:00:23 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:31.900 15:00:23 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:31.900 15:00:23 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:65:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:31.900 15:00:23 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:31.900 15:00:23 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:31.900 15:00:23 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:31.900 15:00:23 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 
00:04:31.900 15:00:23 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:31.900 15:00:23 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:31.900 15:00:23 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:31.900 15:00:23 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:31.900 15:00:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.900 15:00:23 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:31.900 15:00:23 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:31.900 15:00:23 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:31.900 15:00:23 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:35.201 15:00:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:35.201 15:00:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.201 15:00:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:35.201 15:00:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.201 15:00:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:35.201 15:00:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.201 15:00:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:35.201 15:00:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.201 15:00:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == 
\0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:35.201 15:00:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.201 15:00:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:35.201 15:00:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.201 15:00:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:35.201 15:00:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.201 15:00:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:35.201 15:00:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.201 15:00:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:35.201 15:00:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:35.201 15:00:27 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:35.201 15:00:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.201 15:00:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:35.201 15:00:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.201 15:00:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:35.201 15:00:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.201 15:00:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:35.201 15:00:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read 
-r pci _ _ status 00:04:35.201 15:00:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:35.201 15:00:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.201 15:00:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:35.201 15:00:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.201 15:00:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:35.201 15:00:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.201 15:00:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:35.201 15:00:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.201 15:00:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:35.201 15:00:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.201 15:00:27 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:35.201 15:00:27 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:35.201 15:00:27 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:35.201 15:00:27 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:35.201 15:00:27 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:35.201 15:00:27 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 
00:04:35.201 15:00:27 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:35.201 15:00:27 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:35.201 15:00:27 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:35.201 15:00:27 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:35.201 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:35.201 15:00:27 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:35.201 15:00:27 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:35.462 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:35.462 /dev/nvme0n1: 8 bytes were erased at offset 0x1bf1fc55e00 (gpt): 45 46 49 20 50 41 52 54 00:04:35.462 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:35.462 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:35.462 15:00:27 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:35.462 15:00:27 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:35.463 15:00:27 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:35.463 15:00:27 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:35.463 15:00:27 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:35.724 15:00:27 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount 
/dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:35.724 15:00:27 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:65:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:35.724 15:00:27 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:35.724 15:00:27 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:35.724 15:00:27 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:35.724 15:00:27 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:35.724 15:00:27 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:35.724 15:00:27 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:35.724 15:00:27 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:35.724 15:00:27 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:35.724 15:00:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.724 15:00:27 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:35.724 15:00:27 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:35.724 15:00:27 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:35.724 15:00:27 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:39.025 15:00:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == 
\0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:39.025 15:00:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.025 15:00:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:39.025 15:00:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.025 15:00:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:39.025 15:00:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.025 15:00:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:39.025 15:00:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.025 15:00:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:39.025 15:00:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.025 15:00:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:39.025 15:00:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.025 15:00:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:39.025 15:00:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.025 15:00:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:39.025 15:00:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.025 15:00:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:39.025 15:00:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ 
\d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:39.025 15:00:30 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:39.025 15:00:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.025 15:00:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:39.025 15:00:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.025 15:00:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:39.025 15:00:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.025 15:00:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:39.025 15:00:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.025 15:00:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:39.025 15:00:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.025 15:00:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:39.025 15:00:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.025 15:00:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:39.025 15:00:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.025 15:00:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:39.025 15:00:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.025 15:00:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:39.025 15:00:30 setup.sh.devices.nvme_mount 
-- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.285 15:00:31 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:39.285 15:00:31 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:39.285 15:00:31 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:39.285 15:00:31 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:39.285 15:00:31 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:39.285 15:00:31 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:39.285 15:00:31 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:65:00.0 data@nvme0n1 '' '' 00:04:39.285 15:00:31 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:39.285 15:00:31 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:39.285 15:00:31 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:39.285 15:00:31 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:39.285 15:00:31 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:39.285 15:00:31 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:39.285 15:00:31 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:39.285 15:00:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.285 15:00:31 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:39.285 15:00:31 
setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:39.285 15:00:31 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:39.285 15:00:31 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:42.599 15:00:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:42.600 15:00:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.600 15:00:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:42.600 15:00:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.600 15:00:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:42.600 15:00:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.600 15:00:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:42.600 15:00:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.600 15:00:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:42.600 15:00:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.600 15:00:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:42.600 15:00:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.600 15:00:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:42.600 15:00:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.600 15:00:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 
0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:42.600 15:00:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.600 15:00:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:42.600 15:00:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:42.600 15:00:34 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:42.600 15:00:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.600 15:00:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:42.600 15:00:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.600 15:00:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:42.600 15:00:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.600 15:00:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:42.600 15:00:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.600 15:00:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:42.600 15:00:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.600 15:00:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:42.600 15:00:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.600 15:00:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:42.600 15:00:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r 
pci _ _ status 00:04:42.600 15:00:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:42.600 15:00:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.600 15:00:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:42.600 15:00:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.864 15:00:34 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:42.864 15:00:34 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:42.864 15:00:34 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:42.864 15:00:34 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:42.864 15:00:34 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:42.864 15:00:34 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:42.864 15:00:34 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:42.864 15:00:34 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:42.864 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:42.864 00:04:42.864 real 0m13.179s 00:04:42.864 user 0m4.055s 00:04:42.864 sys 0m6.982s 00:04:42.864 15:00:34 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:42.864 15:00:34 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:04:42.864 ************************************ 00:04:42.864 END TEST nvme_mount 00:04:42.864 ************************************ 00:04:42.864 15:00:35 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:42.864 15:00:35 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 
00:04:42.864 15:00:35 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:42.864 15:00:35 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:42.864 ************************************ 00:04:42.864 START TEST dm_mount 00:04:42.864 ************************************ 00:04:42.864 15:00:35 setup.sh.devices.dm_mount -- common/autotest_common.sh@1125 -- # dm_mount 00:04:42.864 15:00:35 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:42.864 15:00:35 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:42.864 15:00:35 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:42.864 15:00:35 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:42.864 15:00:35 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:42.864 15:00:35 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:42.864 15:00:35 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:42.864 15:00:35 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:42.864 15:00:35 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:42.864 15:00:35 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:42.864 15:00:35 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:42.864 15:00:35 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:42.864 15:00:35 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:42.864 15:00:35 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:42.864 15:00:35 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:42.864 15:00:35 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:42.864 15:00:35 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 
00:04:42.864 15:00:35 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:42.864 15:00:35 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:42.864 15:00:35 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:42.864 15:00:35 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:44.250 Creating new GPT entries in memory. 00:04:44.250 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:44.250 other utilities. 00:04:44.250 15:00:36 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:44.250 15:00:36 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:44.250 15:00:36 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:44.250 15:00:36 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:44.250 15:00:36 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:45.225 Creating new GPT entries in memory. 00:04:45.225 The operation has completed successfully. 00:04:45.225 15:00:37 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:45.225 15:00:37 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:45.225 15:00:37 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:45.225 15:00:37 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:45.225 15:00:37 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:46.166 The operation has completed successfully. 
00:04:46.166 15:00:38 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:46.166 15:00:38 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:46.166 15:00:38 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 21134 00:04:46.166 15:00:38 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:46.166 15:00:38 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:46.166 15:00:38 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:46.166 15:00:38 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:46.166 15:00:38 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:46.166 15:00:38 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:46.166 15:00:38 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:46.166 15:00:38 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:46.166 15:00:38 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:46.166 15:00:38 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:46.166 15:00:38 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:04:46.166 15:00:38 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:46.167 15:00:38 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:46.167 15:00:38 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:46.167 15:00:38 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # 
local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:04:46.167 15:00:38 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:46.167 15:00:38 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:46.167 15:00:38 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:46.167 15:00:38 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:46.167 15:00:38 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:65:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:46.167 15:00:38 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:46.167 15:00:38 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:46.167 15:00:38 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:46.167 15:00:38 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:46.167 15:00:38 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:46.167 15:00:38 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:46.167 15:00:38 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:04:46.167 15:00:38 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:46.167 15:00:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ 
status 00:04:46.167 15:00:38 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:46.167 15:00:38 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:46.167 15:00:38 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:46.167 15:00:38 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:49.480 15:00:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:49.480 15:00:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.480 15:00:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:49.480 15:00:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.480 15:00:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:49.480 15:00:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.480 15:00:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:49.480 15:00:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.480 15:00:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:49.480 15:00:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.481 15:00:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:49.481 15:00:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.481 15:00:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:49.481 15:00:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ 
status 00:04:49.481 15:00:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:49.481 15:00:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.481 15:00:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:49.481 15:00:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:49.481 15:00:41 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:49.481 15:00:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.481 15:00:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:49.481 15:00:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.481 15:00:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:49.481 15:00:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.481 15:00:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:49.481 15:00:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.481 15:00:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:49.481 15:00:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.481 15:00:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:49.481 15:00:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.481 15:00:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 
0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:49.481 15:00:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.481 15:00:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:49.481 15:00:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.481 15:00:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:49.481 15:00:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.481 15:00:41 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:49.481 15:00:41 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:49.481 15:00:41 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:49.481 15:00:41 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:49.481 15:00:41 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:49.481 15:00:41 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:49.481 15:00:41 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:65:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:49.481 15:00:41 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:49.481 15:00:41 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:49.481 15:00:41 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:49.481 15:00:41 setup.sh.devices.dm_mount -- 
setup/devices.sh@51 -- # local test_file= 00:04:49.481 15:00:41 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:49.481 15:00:41 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:49.481 15:00:41 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:49.481 15:00:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.481 15:00:41 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:49.481 15:00:41 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:49.481 15:00:41 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:49.481 15:00:41 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:52.793 15:00:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:52.793 15:00:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.793 15:00:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:52.793 15:00:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.793 15:00:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:52.793 15:00:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.793 15:00:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:52.793 15:00:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.793 15:00:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:52.793 15:00:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.793 15:00:44 
setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:52.793 15:00:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.793 15:00:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:52.793 15:00:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.793 15:00:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:52.793 15:00:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.793 15:00:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:52.793 15:00:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:52.793 15:00:44 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:52.793 15:00:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.793 15:00:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:52.793 15:00:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.793 15:00:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:52.793 15:00:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.793 15:00:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:52.793 15:00:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.793 15:00:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 
== \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:52.793 15:00:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.793 15:00:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:52.793 15:00:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.793 15:00:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:52.793 15:00:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.793 15:00:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:52.793 15:00:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.794 15:00:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:52.794 15:00:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.054 15:00:45 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:53.054 15:00:45 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:53.054 15:00:45 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:53.054 15:00:45 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:53.054 15:00:45 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:53.054 15:00:45 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:53.054 15:00:45 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:53.314 15:00:45 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:53.314 15:00:45 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:53.314 /dev/nvme0n1p1: 2 bytes were 
erased at offset 0x00000438 (ext4): 53 ef 00:04:53.314 15:00:45 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:53.314 15:00:45 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:53.314 00:04:53.314 real 0m10.220s 00:04:53.314 user 0m2.558s 00:04:53.314 sys 0m4.673s 00:04:53.314 15:00:45 setup.sh.devices.dm_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:53.314 15:00:45 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:53.314 ************************************ 00:04:53.314 END TEST dm_mount 00:04:53.315 ************************************ 00:04:53.315 15:00:45 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:53.315 15:00:45 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:04:53.315 15:00:45 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:53.315 15:00:45 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:53.315 15:00:45 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:53.315 15:00:45 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:53.315 15:00:45 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:53.575 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:53.575 /dev/nvme0n1: 8 bytes were erased at offset 0x1bf1fc55e00 (gpt): 45 46 49 20 50 41 52 54 00:04:53.575 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:53.575 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:53.575 15:00:45 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:04:53.575 15:00:45 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:53.575 15:00:45 setup.sh.devices -- setup/devices.sh@36 -- # [[ 
-L /dev/mapper/nvme_dm_test ]] 00:04:53.575 15:00:45 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:53.575 15:00:45 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:53.575 15:00:45 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:53.575 15:00:45 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:53.575 00:04:53.575 real 0m28.017s 00:04:53.575 user 0m8.291s 00:04:53.575 sys 0m14.471s 00:04:53.575 15:00:45 setup.sh.devices -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:53.575 15:00:45 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:53.575 ************************************ 00:04:53.575 END TEST devices 00:04:53.575 ************************************ 00:04:53.575 00:04:53.575 real 1m35.735s 00:04:53.575 user 0m32.434s 00:04:53.575 sys 0m54.593s 00:04:53.575 15:00:45 setup.sh -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:53.575 15:00:45 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:53.575 ************************************ 00:04:53.575 END TEST setup.sh 00:04:53.575 ************************************ 00:04:53.575 15:00:45 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:56.876 Hugepages 00:04:56.877 node hugesize free / total 00:04:56.877 node0 1048576kB 0 / 0 00:04:56.877 node0 2048kB 2048 / 2048 00:04:56.877 node1 1048576kB 0 / 0 00:04:56.877 node1 2048kB 0 / 0 00:04:56.877 00:04:56.877 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:56.877 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:04:56.877 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:04:56.877 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:04:56.877 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:04:56.877 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:04:56.877 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:04:56.877 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:04:56.877 
I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:04:57.138 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:04:57.138 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:04:57.138 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:04:57.138 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:04:57.138 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:04:57.138 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:04:57.138 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:04:57.138 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:04:57.138 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:04:57.138 15:00:49 -- spdk/autotest.sh@130 -- # uname -s 00:04:57.138 15:00:49 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:57.138 15:00:49 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:57.138 15:00:49 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:00.451 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:00.451 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:05:00.451 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:00.451 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:00.451 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:00.451 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:05:00.451 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:00.451 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:00.451 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:00.710 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:05:00.710 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:00.710 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:00.710 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:00.710 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:05:00.710 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:00.710 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:02.618 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:05:02.618 15:00:54 -- common/autotest_common.sh@1532 -- # sleep 1 00:05:04.004 15:00:55 -- 
common/autotest_common.sh@1533 -- # bdfs=() 00:05:04.004 15:00:55 -- common/autotest_common.sh@1533 -- # local bdfs 00:05:04.004 15:00:55 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:05:04.004 15:00:55 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:05:04.004 15:00:55 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:04.004 15:00:55 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:04.004 15:00:55 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:04.004 15:00:55 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:04.004 15:00:55 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:04.004 15:00:55 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:05:04.004 15:00:55 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:65:00.0 00:05:04.004 15:00:55 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:07.305 Waiting for block devices as requested 00:05:07.305 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:05:07.305 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:05:07.305 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:05:07.305 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:05:07.599 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:05:07.599 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:05:07.599 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:05:07.599 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:05:07.866 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:05:07.866 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:05:08.127 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:05:08.127 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:05:08.127 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:05:08.127 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:05:08.388 0000:00:01.3 (8086 0b00): vfio-pci -> 
ioatdma 00:05:08.388 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:05:08.388 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:05:08.649 15:01:00 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:05:08.649 15:01:00 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0 00:05:08.649 15:01:00 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:05:08.649 15:01:00 -- common/autotest_common.sh@1502 -- # grep 0000:65:00.0/nvme/nvme 00:05:08.649 15:01:00 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:05:08.649 15:01:00 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]] 00:05:08.649 15:01:00 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:05:08.649 15:01:00 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:05:08.649 15:01:00 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:05:08.649 15:01:00 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:05:08.649 15:01:00 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:05:08.649 15:01:00 -- common/autotest_common.sh@1545 -- # grep oacs 00:05:08.649 15:01:00 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:05:08.649 15:01:00 -- common/autotest_common.sh@1545 -- # oacs=' 0x5f' 00:05:08.649 15:01:00 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:05:08.649 15:01:00 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:05:08.649 15:01:00 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:05:08.649 15:01:00 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:05:08.649 15:01:00 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:05:08.649 15:01:00 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:05:08.649 15:01:00 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:05:08.649 15:01:00 -- 
common/autotest_common.sh@1557 -- # continue 00:05:08.649 15:01:00 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:05:08.649 15:01:00 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:08.649 15:01:00 -- common/autotest_common.sh@10 -- # set +x 00:05:08.909 15:01:00 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:05:08.909 15:01:00 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:08.909 15:01:00 -- common/autotest_common.sh@10 -- # set +x 00:05:08.909 15:01:00 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:12.212 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:12.212 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:05:12.212 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:12.212 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:12.212 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:12.212 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:05:12.213 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:12.213 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:12.213 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:12.213 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:05:12.213 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:12.213 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:12.213 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:12.473 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:05:12.473 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:12.473 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:12.473 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:05:12.734 15:01:04 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:05:12.734 15:01:04 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:12.734 15:01:04 -- common/autotest_common.sh@10 -- # set +x 00:05:12.734 15:01:04 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:05:12.734 15:01:04 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:05:12.734 15:01:04 -- 
common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:05:12.734 15:01:04 -- common/autotest_common.sh@1577 -- # bdfs=() 00:05:12.734 15:01:04 -- common/autotest_common.sh@1577 -- # local bdfs 00:05:12.734 15:01:04 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:05:12.734 15:01:04 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:12.734 15:01:04 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:12.734 15:01:04 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:12.734 15:01:04 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:12.734 15:01:04 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:12.996 15:01:04 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:05:12.996 15:01:04 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:65:00.0 00:05:12.996 15:01:04 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:05:12.996 15:01:04 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:65:00.0/device 00:05:12.996 15:01:04 -- common/autotest_common.sh@1580 -- # device=0xa80a 00:05:12.996 15:01:04 -- common/autotest_common.sh@1581 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:05:12.996 15:01:04 -- common/autotest_common.sh@1586 -- # printf '%s\n' 00:05:12.996 15:01:04 -- common/autotest_common.sh@1592 -- # [[ -z '' ]] 00:05:12.996 15:01:04 -- common/autotest_common.sh@1593 -- # return 0 00:05:12.996 15:01:04 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:05:12.996 15:01:04 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:05:12.996 15:01:04 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:12.996 15:01:04 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:12.996 15:01:04 -- spdk/autotest.sh@162 -- # timing_enter lib 00:05:12.996 15:01:04 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:12.996 15:01:04 -- common/autotest_common.sh@10 -- # set +x 
00:05:12.996 15:01:04 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:05:12.996 15:01:04 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:12.996 15:01:04 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:12.996 15:01:04 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:12.996 15:01:04 -- common/autotest_common.sh@10 -- # set +x 00:05:12.996 ************************************ 00:05:12.996 START TEST env 00:05:12.996 ************************************ 00:05:12.996 15:01:04 env -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:12.996 * Looking for test storage... 00:05:12.996 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:05:12.996 15:01:05 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:12.996 15:01:05 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:12.996 15:01:05 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:12.996 15:01:05 env -- common/autotest_common.sh@10 -- # set +x 00:05:12.996 ************************************ 00:05:12.996 START TEST env_memory 00:05:12.996 ************************************ 00:05:12.996 15:01:05 env.env_memory -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:12.996 00:05:12.996 00:05:12.996 CUnit - A unit testing framework for C - Version 2.1-3 00:05:12.996 http://cunit.sourceforge.net/ 00:05:12.996 00:05:12.996 00:05:12.996 Suite: memory 00:05:12.996 Test: alloc and free memory map ...[2024-07-25 15:01:05.185042] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:13.258 passed 00:05:13.258 Test: mem map translation ...[2024-07-25 15:01:05.210814] 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:13.258 [2024-07-25 15:01:05.210850] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:13.258 [2024-07-25 15:01:05.210896] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:13.258 [2024-07-25 15:01:05.210903] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:13.258 passed 00:05:13.258 Test: mem map registration ...[2024-07-25 15:01:05.266348] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:13.258 [2024-07-25 15:01:05.266385] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:13.258 passed 00:05:13.258 Test: mem map adjacent registrations ...passed 00:05:13.258 00:05:13.258 Run Summary: Type Total Ran Passed Failed Inactive 00:05:13.258 suites 1 1 n/a 0 0 00:05:13.258 tests 4 4 4 0 0 00:05:13.258 asserts 152 152 152 0 n/a 00:05:13.258 00:05:13.258 Elapsed time = 0.194 seconds 00:05:13.258 00:05:13.258 real 0m0.210s 00:05:13.258 user 0m0.196s 00:05:13.258 sys 0m0.012s 00:05:13.258 15:01:05 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:13.258 15:01:05 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:13.258 ************************************ 00:05:13.258 END TEST env_memory 00:05:13.258 ************************************ 
00:05:13.258 15:01:05 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:13.258 15:01:05 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:13.258 15:01:05 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:13.258 15:01:05 env -- common/autotest_common.sh@10 -- # set +x 00:05:13.258 ************************************ 00:05:13.258 START TEST env_vtophys 00:05:13.258 ************************************ 00:05:13.258 15:01:05 env.env_vtophys -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:13.258 EAL: lib.eal log level changed from notice to debug 00:05:13.258 EAL: Detected lcore 0 as core 0 on socket 0 00:05:13.258 EAL: Detected lcore 1 as core 1 on socket 0 00:05:13.258 EAL: Detected lcore 2 as core 2 on socket 0 00:05:13.258 EAL: Detected lcore 3 as core 3 on socket 0 00:05:13.258 EAL: Detected lcore 4 as core 4 on socket 0 00:05:13.258 EAL: Detected lcore 5 as core 5 on socket 0 00:05:13.258 EAL: Detected lcore 6 as core 6 on socket 0 00:05:13.258 EAL: Detected lcore 7 as core 7 on socket 0 00:05:13.258 EAL: Detected lcore 8 as core 8 on socket 0 00:05:13.258 EAL: Detected lcore 9 as core 9 on socket 0 00:05:13.258 EAL: Detected lcore 10 as core 10 on socket 0 00:05:13.258 EAL: Detected lcore 11 as core 11 on socket 0 00:05:13.258 EAL: Detected lcore 12 as core 12 on socket 0 00:05:13.258 EAL: Detected lcore 13 as core 13 on socket 0 00:05:13.258 EAL: Detected lcore 14 as core 14 on socket 0 00:05:13.258 EAL: Detected lcore 15 as core 15 on socket 0 00:05:13.258 EAL: Detected lcore 16 as core 16 on socket 0 00:05:13.258 EAL: Detected lcore 17 as core 17 on socket 0 00:05:13.258 EAL: Detected lcore 18 as core 18 on socket 0 00:05:13.258 EAL: Detected lcore 19 as core 19 on socket 0 00:05:13.258 EAL: Detected lcore 20 as core 20 on socket 0 00:05:13.258 EAL: Detected lcore 21 as core 21 on 
socket 0 00:05:13.258 EAL: Detected lcore 22 as core 22 on socket 0 00:05:13.258 EAL: Detected lcore 23 as core 23 on socket 0 00:05:13.258 EAL: Detected lcore 24 as core 24 on socket 0 00:05:13.258 EAL: Detected lcore 25 as core 25 on socket 0 00:05:13.258 EAL: Detected lcore 26 as core 26 on socket 0 00:05:13.258 EAL: Detected lcore 27 as core 27 on socket 0 00:05:13.258 EAL: Detected lcore 28 as core 28 on socket 0 00:05:13.258 EAL: Detected lcore 29 as core 29 on socket 0 00:05:13.258 EAL: Detected lcore 30 as core 30 on socket 0 00:05:13.258 EAL: Detected lcore 31 as core 31 on socket 0 00:05:13.258 EAL: Detected lcore 32 as core 32 on socket 0 00:05:13.258 EAL: Detected lcore 33 as core 33 on socket 0 00:05:13.258 EAL: Detected lcore 34 as core 34 on socket 0 00:05:13.258 EAL: Detected lcore 35 as core 35 on socket 0 00:05:13.258 EAL: Detected lcore 36 as core 0 on socket 1 00:05:13.258 EAL: Detected lcore 37 as core 1 on socket 1 00:05:13.258 EAL: Detected lcore 38 as core 2 on socket 1 00:05:13.258 EAL: Detected lcore 39 as core 3 on socket 1 00:05:13.258 EAL: Detected lcore 40 as core 4 on socket 1 00:05:13.258 EAL: Detected lcore 41 as core 5 on socket 1 00:05:13.258 EAL: Detected lcore 42 as core 6 on socket 1 00:05:13.258 EAL: Detected lcore 43 as core 7 on socket 1 00:05:13.258 EAL: Detected lcore 44 as core 8 on socket 1 00:05:13.258 EAL: Detected lcore 45 as core 9 on socket 1 00:05:13.258 EAL: Detected lcore 46 as core 10 on socket 1 00:05:13.258 EAL: Detected lcore 47 as core 11 on socket 1 00:05:13.258 EAL: Detected lcore 48 as core 12 on socket 1 00:05:13.258 EAL: Detected lcore 49 as core 13 on socket 1 00:05:13.258 EAL: Detected lcore 50 as core 14 on socket 1 00:05:13.259 EAL: Detected lcore 51 as core 15 on socket 1 00:05:13.259 EAL: Detected lcore 52 as core 16 on socket 1 00:05:13.259 EAL: Detected lcore 53 as core 17 on socket 1 00:05:13.259 EAL: Detected lcore 54 as core 18 on socket 1 00:05:13.259 EAL: Detected lcore 55 as core 19 on 
socket 1 00:05:13.259 EAL: Detected lcore 56 as core 20 on socket 1 00:05:13.259 EAL: Detected lcore 57 as core 21 on socket 1 00:05:13.259 EAL: Detected lcore 58 as core 22 on socket 1 00:05:13.259 EAL: Detected lcore 59 as core 23 on socket 1 00:05:13.259 EAL: Detected lcore 60 as core 24 on socket 1 00:05:13.259 EAL: Detected lcore 61 as core 25 on socket 1 00:05:13.259 EAL: Detected lcore 62 as core 26 on socket 1 00:05:13.521 EAL: Detected lcore 63 as core 27 on socket 1 00:05:13.521 EAL: Detected lcore 64 as core 28 on socket 1 00:05:13.521 EAL: Detected lcore 65 as core 29 on socket 1 00:05:13.521 EAL: Detected lcore 66 as core 30 on socket 1 00:05:13.521 EAL: Detected lcore 67 as core 31 on socket 1 00:05:13.521 EAL: Detected lcore 68 as core 32 on socket 1 00:05:13.521 EAL: Detected lcore 69 as core 33 on socket 1 00:05:13.521 EAL: Detected lcore 70 as core 34 on socket 1 00:05:13.521 EAL: Detected lcore 71 as core 35 on socket 1 00:05:13.521 EAL: Detected lcore 72 as core 0 on socket 0 00:05:13.521 EAL: Detected lcore 73 as core 1 on socket 0 00:05:13.521 EAL: Detected lcore 74 as core 2 on socket 0 00:05:13.521 EAL: Detected lcore 75 as core 3 on socket 0 00:05:13.521 EAL: Detected lcore 76 as core 4 on socket 0 00:05:13.521 EAL: Detected lcore 77 as core 5 on socket 0 00:05:13.521 EAL: Detected lcore 78 as core 6 on socket 0 00:05:13.521 EAL: Detected lcore 79 as core 7 on socket 0 00:05:13.521 EAL: Detected lcore 80 as core 8 on socket 0 00:05:13.521 EAL: Detected lcore 81 as core 9 on socket 0 00:05:13.521 EAL: Detected lcore 82 as core 10 on socket 0 00:05:13.521 EAL: Detected lcore 83 as core 11 on socket 0 00:05:13.521 EAL: Detected lcore 84 as core 12 on socket 0 00:05:13.521 EAL: Detected lcore 85 as core 13 on socket 0 00:05:13.521 EAL: Detected lcore 86 as core 14 on socket 0 00:05:13.521 EAL: Detected lcore 87 as core 15 on socket 0 00:05:13.521 EAL: Detected lcore 88 as core 16 on socket 0 00:05:13.521 EAL: Detected lcore 89 as core 17 on 
socket 0 00:05:13.521 EAL: Detected lcore 90 as core 18 on socket 0 00:05:13.521 EAL: Detected lcore 91 as core 19 on socket 0 00:05:13.521 EAL: Detected lcore 92 as core 20 on socket 0 00:05:13.521 EAL: Detected lcore 93 as core 21 on socket 0 00:05:13.521 EAL: Detected lcore 94 as core 22 on socket 0 00:05:13.521 EAL: Detected lcore 95 as core 23 on socket 0 00:05:13.521 EAL: Detected lcore 96 as core 24 on socket 0 00:05:13.521 EAL: Detected lcore 97 as core 25 on socket 0 00:05:13.521 EAL: Detected lcore 98 as core 26 on socket 0 00:05:13.521 EAL: Detected lcore 99 as core 27 on socket 0 00:05:13.521 EAL: Detected lcore 100 as core 28 on socket 0 00:05:13.521 EAL: Detected lcore 101 as core 29 on socket 0 00:05:13.521 EAL: Detected lcore 102 as core 30 on socket 0 00:05:13.521 EAL: Detected lcore 103 as core 31 on socket 0 00:05:13.521 EAL: Detected lcore 104 as core 32 on socket 0 00:05:13.521 EAL: Detected lcore 105 as core 33 on socket 0 00:05:13.521 EAL: Detected lcore 106 as core 34 on socket 0 00:05:13.521 EAL: Detected lcore 107 as core 35 on socket 0 00:05:13.521 EAL: Detected lcore 108 as core 0 on socket 1 00:05:13.521 EAL: Detected lcore 109 as core 1 on socket 1 00:05:13.521 EAL: Detected lcore 110 as core 2 on socket 1 00:05:13.521 EAL: Detected lcore 111 as core 3 on socket 1 00:05:13.521 EAL: Detected lcore 112 as core 4 on socket 1 00:05:13.521 EAL: Detected lcore 113 as core 5 on socket 1 00:05:13.521 EAL: Detected lcore 114 as core 6 on socket 1 00:05:13.521 EAL: Detected lcore 115 as core 7 on socket 1 00:05:13.521 EAL: Detected lcore 116 as core 8 on socket 1 00:05:13.521 EAL: Detected lcore 117 as core 9 on socket 1 00:05:13.521 EAL: Detected lcore 118 as core 10 on socket 1 00:05:13.521 EAL: Detected lcore 119 as core 11 on socket 1 00:05:13.521 EAL: Detected lcore 120 as core 12 on socket 1 00:05:13.521 EAL: Detected lcore 121 as core 13 on socket 1 00:05:13.521 EAL: Detected lcore 122 as core 14 on socket 1 00:05:13.521 EAL: Detected 
lcore 123 as core 15 on socket 1 00:05:13.521 EAL: Detected lcore 124 as core 16 on socket 1 00:05:13.521 EAL: Detected lcore 125 as core 17 on socket 1 00:05:13.521 EAL: Detected lcore 126 as core 18 on socket 1 00:05:13.521 EAL: Detected lcore 127 as core 19 on socket 1 00:05:13.522 EAL: Skipped lcore 128 as core 20 on socket 1 00:05:13.522 EAL: Skipped lcore 129 as core 21 on socket 1 00:05:13.522 EAL: Skipped lcore 130 as core 22 on socket 1 00:05:13.522 EAL: Skipped lcore 131 as core 23 on socket 1 00:05:13.522 EAL: Skipped lcore 132 as core 24 on socket 1 00:05:13.522 EAL: Skipped lcore 133 as core 25 on socket 1 00:05:13.522 EAL: Skipped lcore 134 as core 26 on socket 1 00:05:13.522 EAL: Skipped lcore 135 as core 27 on socket 1 00:05:13.522 EAL: Skipped lcore 136 as core 28 on socket 1 00:05:13.522 EAL: Skipped lcore 137 as core 29 on socket 1 00:05:13.522 EAL: Skipped lcore 138 as core 30 on socket 1 00:05:13.522 EAL: Skipped lcore 139 as core 31 on socket 1 00:05:13.522 EAL: Skipped lcore 140 as core 32 on socket 1 00:05:13.522 EAL: Skipped lcore 141 as core 33 on socket 1 00:05:13.522 EAL: Skipped lcore 142 as core 34 on socket 1 00:05:13.522 EAL: Skipped lcore 143 as core 35 on socket 1 00:05:13.522 EAL: Maximum logical cores by configuration: 128 00:05:13.522 EAL: Detected CPU lcores: 128 00:05:13.522 EAL: Detected NUMA nodes: 2 00:05:13.522 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:13.522 EAL: Detected shared linkage of DPDK 00:05:13.522 EAL: No shared files mode enabled, IPC will be disabled 00:05:13.522 EAL: Bus pci wants IOVA as 'DC' 00:05:13.522 EAL: Buses did not request a specific IOVA mode. 00:05:13.522 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:13.522 EAL: Selected IOVA mode 'VA' 00:05:13.522 EAL: No free 2048 kB hugepages reported on node 1 00:05:13.522 EAL: Probing VFIO support... 
00:05:13.522 EAL: IOMMU type 1 (Type 1) is supported 00:05:13.522 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:13.522 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:13.522 EAL: VFIO support initialized 00:05:13.522 EAL: Ask a virtual area of 0x2e000 bytes 00:05:13.522 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:13.522 EAL: Setting up physically contiguous memory... 00:05:13.522 EAL: Setting maximum number of open files to 524288 00:05:13.522 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:13.522 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:13.522 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:13.522 EAL: Ask a virtual area of 0x61000 bytes 00:05:13.522 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:13.522 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:13.522 EAL: Ask a virtual area of 0x400000000 bytes 00:05:13.522 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:13.522 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:13.522 EAL: Ask a virtual area of 0x61000 bytes 00:05:13.522 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:13.522 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:13.522 EAL: Ask a virtual area of 0x400000000 bytes 00:05:13.522 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:13.522 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:13.522 EAL: Ask a virtual area of 0x61000 bytes 00:05:13.522 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:13.522 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:13.522 EAL: Ask a virtual area of 0x400000000 bytes 00:05:13.522 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:13.522 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:13.522 EAL: Ask a virtual area of 0x61000 bytes 00:05:13.522 EAL: 
Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:13.522 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:13.522 EAL: Ask a virtual area of 0x400000000 bytes 00:05:13.522 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:13.522 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:13.522 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:13.522 EAL: Ask a virtual area of 0x61000 bytes 00:05:13.522 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:13.522 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:13.522 EAL: Ask a virtual area of 0x400000000 bytes 00:05:13.522 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:13.522 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:13.522 EAL: Ask a virtual area of 0x61000 bytes 00:05:13.522 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:13.522 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:13.522 EAL: Ask a virtual area of 0x400000000 bytes 00:05:13.522 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:13.522 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:13.522 EAL: Ask a virtual area of 0x61000 bytes 00:05:13.522 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:13.522 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:13.522 EAL: Ask a virtual area of 0x400000000 bytes 00:05:13.522 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:13.522 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:13.522 EAL: Ask a virtual area of 0x61000 bytes 00:05:13.522 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:13.522 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:13.522 EAL: Ask a virtual area of 0x400000000 bytes 00:05:13.522 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 
00:05:13.522 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:13.522 EAL: Hugepages will be freed exactly as allocated. 00:05:13.522 EAL: No shared files mode enabled, IPC is disabled 00:05:13.522 EAL: No shared files mode enabled, IPC is disabled 00:05:13.522 EAL: TSC frequency is ~2400000 KHz 00:05:13.522 EAL: Main lcore 0 is ready (tid=7ff50cdb6a00;cpuset=[0]) 00:05:13.522 EAL: Trying to obtain current memory policy. 00:05:13.522 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:13.522 EAL: Restoring previous memory policy: 0 00:05:13.522 EAL: request: mp_malloc_sync 00:05:13.522 EAL: No shared files mode enabled, IPC is disabled 00:05:13.522 EAL: Heap on socket 0 was expanded by 2MB 00:05:13.522 EAL: No shared files mode enabled, IPC is disabled 00:05:13.522 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:13.522 EAL: Mem event callback 'spdk:(nil)' registered 00:05:13.522 00:05:13.522 00:05:13.522 CUnit - A unit testing framework for C - Version 2.1-3 00:05:13.522 http://cunit.sourceforge.net/ 00:05:13.522 00:05:13.522 00:05:13.522 Suite: components_suite 00:05:13.522 Test: vtophys_malloc_test ...passed 00:05:13.522 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:13.522 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:13.522 EAL: Restoring previous memory policy: 4 00:05:13.522 EAL: Calling mem event callback 'spdk:(nil)' 00:05:13.522 EAL: request: mp_malloc_sync 00:05:13.522 EAL: No shared files mode enabled, IPC is disabled 00:05:13.522 EAL: Heap on socket 0 was expanded by 4MB 00:05:13.522 EAL: Calling mem event callback 'spdk:(nil)' 00:05:13.522 EAL: request: mp_malloc_sync 00:05:13.522 EAL: No shared files mode enabled, IPC is disabled 00:05:13.522 EAL: Heap on socket 0 was shrunk by 4MB 00:05:13.522 EAL: Trying to obtain current memory policy. 
00:05:13.522 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:13.522 EAL: Restoring previous memory policy: 4 00:05:13.522 EAL: Calling mem event callback 'spdk:(nil)' 00:05:13.522 EAL: request: mp_malloc_sync 00:05:13.522 EAL: No shared files mode enabled, IPC is disabled 00:05:13.522 EAL: Heap on socket 0 was expanded by 6MB 00:05:13.522 EAL: Calling mem event callback 'spdk:(nil)' 00:05:13.522 EAL: request: mp_malloc_sync 00:05:13.522 EAL: No shared files mode enabled, IPC is disabled 00:05:13.522 EAL: Heap on socket 0 was shrunk by 6MB 00:05:13.522 EAL: Trying to obtain current memory policy. 00:05:13.522 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:13.522 EAL: Restoring previous memory policy: 4 00:05:13.522 EAL: Calling mem event callback 'spdk:(nil)' 00:05:13.522 EAL: request: mp_malloc_sync 00:05:13.522 EAL: No shared files mode enabled, IPC is disabled 00:05:13.522 EAL: Heap on socket 0 was expanded by 10MB 00:05:13.522 EAL: Calling mem event callback 'spdk:(nil)' 00:05:13.522 EAL: request: mp_malloc_sync 00:05:13.522 EAL: No shared files mode enabled, IPC is disabled 00:05:13.522 EAL: Heap on socket 0 was shrunk by 10MB 00:05:13.522 EAL: Trying to obtain current memory policy. 00:05:13.522 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:13.522 EAL: Restoring previous memory policy: 4 00:05:13.522 EAL: Calling mem event callback 'spdk:(nil)' 00:05:13.522 EAL: request: mp_malloc_sync 00:05:13.522 EAL: No shared files mode enabled, IPC is disabled 00:05:13.522 EAL: Heap on socket 0 was expanded by 18MB 00:05:13.522 EAL: Calling mem event callback 'spdk:(nil)' 00:05:13.522 EAL: request: mp_malloc_sync 00:05:13.522 EAL: No shared files mode enabled, IPC is disabled 00:05:13.522 EAL: Heap on socket 0 was shrunk by 18MB 00:05:13.522 EAL: Trying to obtain current memory policy. 
00:05:13.522 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:13.522 EAL: Restoring previous memory policy: 4 00:05:13.522 EAL: Calling mem event callback 'spdk:(nil)' 00:05:13.522 EAL: request: mp_malloc_sync 00:05:13.522 EAL: No shared files mode enabled, IPC is disabled 00:05:13.522 EAL: Heap on socket 0 was expanded by 34MB 00:05:13.522 EAL: Calling mem event callback 'spdk:(nil)' 00:05:13.522 EAL: request: mp_malloc_sync 00:05:13.522 EAL: No shared files mode enabled, IPC is disabled 00:05:13.522 EAL: Heap on socket 0 was shrunk by 34MB 00:05:13.522 EAL: Trying to obtain current memory policy. 00:05:13.522 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:13.522 EAL: Restoring previous memory policy: 4 00:05:13.522 EAL: Calling mem event callback 'spdk:(nil)' 00:05:13.522 EAL: request: mp_malloc_sync 00:05:13.522 EAL: No shared files mode enabled, IPC is disabled 00:05:13.522 EAL: Heap on socket 0 was expanded by 66MB 00:05:13.522 EAL: Calling mem event callback 'spdk:(nil)' 00:05:13.522 EAL: request: mp_malloc_sync 00:05:13.522 EAL: No shared files mode enabled, IPC is disabled 00:05:13.523 EAL: Heap on socket 0 was shrunk by 66MB 00:05:13.523 EAL: Trying to obtain current memory policy. 00:05:13.523 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:13.523 EAL: Restoring previous memory policy: 4 00:05:13.523 EAL: Calling mem event callback 'spdk:(nil)' 00:05:13.523 EAL: request: mp_malloc_sync 00:05:13.523 EAL: No shared files mode enabled, IPC is disabled 00:05:13.523 EAL: Heap on socket 0 was expanded by 130MB 00:05:13.523 EAL: Calling mem event callback 'spdk:(nil)' 00:05:13.523 EAL: request: mp_malloc_sync 00:05:13.523 EAL: No shared files mode enabled, IPC is disabled 00:05:13.523 EAL: Heap on socket 0 was shrunk by 130MB 00:05:13.523 EAL: Trying to obtain current memory policy. 
00:05:13.523 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:13.523 EAL: Restoring previous memory policy: 4 00:05:13.523 EAL: Calling mem event callback 'spdk:(nil)' 00:05:13.523 EAL: request: mp_malloc_sync 00:05:13.523 EAL: No shared files mode enabled, IPC is disabled 00:05:13.523 EAL: Heap on socket 0 was expanded by 258MB 00:05:13.523 EAL: Calling mem event callback 'spdk:(nil)' 00:05:13.523 EAL: request: mp_malloc_sync 00:05:13.523 EAL: No shared files mode enabled, IPC is disabled 00:05:13.523 EAL: Heap on socket 0 was shrunk by 258MB 00:05:13.523 EAL: Trying to obtain current memory policy. 00:05:13.523 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:13.784 EAL: Restoring previous memory policy: 4 00:05:13.784 EAL: Calling mem event callback 'spdk:(nil)' 00:05:13.784 EAL: request: mp_malloc_sync 00:05:13.784 EAL: No shared files mode enabled, IPC is disabled 00:05:13.784 EAL: Heap on socket 0 was expanded by 514MB 00:05:13.784 EAL: Calling mem event callback 'spdk:(nil)' 00:05:13.784 EAL: request: mp_malloc_sync 00:05:13.784 EAL: No shared files mode enabled, IPC is disabled 00:05:13.784 EAL: Heap on socket 0 was shrunk by 514MB 00:05:13.784 EAL: Trying to obtain current memory policy. 
00:05:13.784 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:14.045 EAL: Restoring previous memory policy: 4 00:05:14.045 EAL: Calling mem event callback 'spdk:(nil)' 00:05:14.045 EAL: request: mp_malloc_sync 00:05:14.045 EAL: No shared files mode enabled, IPC is disabled 00:05:14.045 EAL: Heap on socket 0 was expanded by 1026MB 00:05:14.045 EAL: Calling mem event callback 'spdk:(nil)' 00:05:14.045 EAL: request: mp_malloc_sync 00:05:14.045 EAL: No shared files mode enabled, IPC is disabled 00:05:14.045 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:14.045 passed 00:05:14.045 00:05:14.045 Run Summary: Type Total Ran Passed Failed Inactive 00:05:14.045 suites 1 1 n/a 0 0 00:05:14.045 tests 2 2 2 0 0 00:05:14.045 asserts 497 497 497 0 n/a 00:05:14.045 00:05:14.045 Elapsed time = 0.641 seconds 00:05:14.045 EAL: Calling mem event callback 'spdk:(nil)' 00:05:14.045 EAL: request: mp_malloc_sync 00:05:14.045 EAL: No shared files mode enabled, IPC is disabled 00:05:14.045 EAL: Heap on socket 0 was shrunk by 2MB 00:05:14.045 EAL: No shared files mode enabled, IPC is disabled 00:05:14.045 EAL: No shared files mode enabled, IPC is disabled 00:05:14.045 EAL: No shared files mode enabled, IPC is disabled 00:05:14.045 00:05:14.045 real 0m0.759s 00:05:14.045 user 0m0.409s 00:05:14.045 sys 0m0.322s 00:05:14.045 15:01:06 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:14.045 15:01:06 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:14.045 ************************************ 00:05:14.045 END TEST env_vtophys 00:05:14.045 ************************************ 00:05:14.045 15:01:06 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:14.045 15:01:06 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:14.045 15:01:06 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:14.045 15:01:06 env -- common/autotest_common.sh@10 -- # set +x 00:05:14.306 
************************************ 00:05:14.306 START TEST env_pci 00:05:14.306 ************************************ 00:05:14.306 15:01:06 env.env_pci -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:14.306 00:05:14.306 00:05:14.306 CUnit - A unit testing framework for C - Version 2.1-3 00:05:14.306 http://cunit.sourceforge.net/ 00:05:14.306 00:05:14.306 00:05:14.306 Suite: pci 00:05:14.306 Test: pci_hook ...[2024-07-25 15:01:06.278394] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 32209 has claimed it 00:05:14.306 EAL: Cannot find device (10000:00:01.0) 00:05:14.306 EAL: Failed to attach device on primary process 00:05:14.306 passed 00:05:14.306 00:05:14.306 Run Summary: Type Total Ran Passed Failed Inactive 00:05:14.306 suites 1 1 n/a 0 0 00:05:14.306 tests 1 1 1 0 0 00:05:14.306 asserts 25 25 25 0 n/a 00:05:14.306 00:05:14.306 Elapsed time = 0.029 seconds 00:05:14.306 00:05:14.306 real 0m0.049s 00:05:14.306 user 0m0.017s 00:05:14.306 sys 0m0.032s 00:05:14.306 15:01:06 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:14.306 15:01:06 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:14.306 ************************************ 00:05:14.306 END TEST env_pci 00:05:14.306 ************************************ 00:05:14.306 15:01:06 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:14.306 15:01:06 env -- env/env.sh@15 -- # uname 00:05:14.306 15:01:06 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:14.306 15:01:06 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:14.306 15:01:06 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:14.306 15:01:06 env -- 
common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:05:14.306 15:01:06 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:14.306 15:01:06 env -- common/autotest_common.sh@10 -- # set +x 00:05:14.306 ************************************ 00:05:14.306 START TEST env_dpdk_post_init 00:05:14.306 ************************************ 00:05:14.306 15:01:06 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:14.306 EAL: Detected CPU lcores: 128 00:05:14.306 EAL: Detected NUMA nodes: 2 00:05:14.306 EAL: Detected shared linkage of DPDK 00:05:14.306 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:14.306 EAL: Selected IOVA mode 'VA' 00:05:14.306 EAL: No free 2048 kB hugepages reported on node 1 00:05:14.306 EAL: VFIO support initialized 00:05:14.306 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:14.567 EAL: Using IOMMU type 1 (Type 1) 00:05:14.567 EAL: Ignore mapping IO port bar(1) 00:05:14.567 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.0 (socket 0) 00:05:14.827 EAL: Ignore mapping IO port bar(1) 00:05:14.827 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.1 (socket 0) 00:05:15.087 EAL: Ignore mapping IO port bar(1) 00:05:15.087 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.2 (socket 0) 00:05:15.347 EAL: Ignore mapping IO port bar(1) 00:05:15.347 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.3 (socket 0) 00:05:15.347 EAL: Ignore mapping IO port bar(1) 00:05:15.607 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.4 (socket 0) 00:05:15.608 EAL: Ignore mapping IO port bar(1) 00:05:15.869 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.5 (socket 0) 00:05:15.869 EAL: Ignore mapping IO port bar(1) 00:05:16.129 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.6 (socket 0) 
00:05:16.129 EAL: Ignore mapping IO port bar(1) 00:05:16.129 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.7 (socket 0) 00:05:16.391 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:65:00.0 (socket 0) 00:05:16.651 EAL: Ignore mapping IO port bar(1) 00:05:16.651 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.0 (socket 1) 00:05:16.911 EAL: Ignore mapping IO port bar(1) 00:05:16.911 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.1 (socket 1) 00:05:16.911 EAL: Ignore mapping IO port bar(1) 00:05:17.172 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.2 (socket 1) 00:05:17.172 EAL: Ignore mapping IO port bar(1) 00:05:17.433 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.3 (socket 1) 00:05:17.433 EAL: Ignore mapping IO port bar(1) 00:05:17.692 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.4 (socket 1) 00:05:17.692 EAL: Ignore mapping IO port bar(1) 00:05:17.692 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.5 (socket 1) 00:05:17.952 EAL: Ignore mapping IO port bar(1) 00:05:17.952 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.6 (socket 1) 00:05:18.213 EAL: Ignore mapping IO port bar(1) 00:05:18.213 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.7 (socket 1) 00:05:18.213 EAL: Releasing PCI mapped resource for 0000:65:00.0 00:05:18.213 EAL: Calling pci_unmap_resource for 0000:65:00.0 at 0x202001020000 00:05:18.473 Starting DPDK initialization... 00:05:18.473 Starting SPDK post initialization... 00:05:18.473 SPDK NVMe probe 00:05:18.473 Attaching to 0000:65:00.0 00:05:18.473 Attached to 0000:65:00.0 00:05:18.473 Cleaning up... 
00:05:20.385 00:05:20.385 real 0m5.709s 00:05:20.385 user 0m0.178s 00:05:20.385 sys 0m0.077s 00:05:20.385 15:01:12 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:20.385 15:01:12 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:20.385 ************************************ 00:05:20.385 END TEST env_dpdk_post_init 00:05:20.385 ************************************ 00:05:20.385 15:01:12 env -- env/env.sh@26 -- # uname 00:05:20.385 15:01:12 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:20.385 15:01:12 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:20.385 15:01:12 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:20.385 15:01:12 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:20.385 15:01:12 env -- common/autotest_common.sh@10 -- # set +x 00:05:20.385 ************************************ 00:05:20.385 START TEST env_mem_callbacks 00:05:20.385 ************************************ 00:05:20.385 15:01:12 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:20.385 EAL: Detected CPU lcores: 128 00:05:20.385 EAL: Detected NUMA nodes: 2 00:05:20.385 EAL: Detected shared linkage of DPDK 00:05:20.385 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:20.385 EAL: Selected IOVA mode 'VA' 00:05:20.385 EAL: No free 2048 kB hugepages reported on node 1 00:05:20.385 EAL: VFIO support initialized 00:05:20.385 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:20.385 00:05:20.385 00:05:20.385 CUnit - A unit testing framework for C - Version 2.1-3 00:05:20.385 http://cunit.sourceforge.net/ 00:05:20.385 00:05:20.385 00:05:20.385 Suite: memory 00:05:20.385 Test: test ... 
00:05:20.385 register 0x200000200000 2097152 00:05:20.385 malloc 3145728 00:05:20.385 register 0x200000400000 4194304 00:05:20.385 buf 0x200000500000 len 3145728 PASSED 00:05:20.385 malloc 64 00:05:20.385 buf 0x2000004fff40 len 64 PASSED 00:05:20.385 malloc 4194304 00:05:20.385 register 0x200000800000 6291456 00:05:20.385 buf 0x200000a00000 len 4194304 PASSED 00:05:20.385 free 0x200000500000 3145728 00:05:20.385 free 0x2000004fff40 64 00:05:20.385 unregister 0x200000400000 4194304 PASSED 00:05:20.385 free 0x200000a00000 4194304 00:05:20.385 unregister 0x200000800000 6291456 PASSED 00:05:20.385 malloc 8388608 00:05:20.385 register 0x200000400000 10485760 00:05:20.385 buf 0x200000600000 len 8388608 PASSED 00:05:20.385 free 0x200000600000 8388608 00:05:20.385 unregister 0x200000400000 10485760 PASSED 00:05:20.385 passed 00:05:20.385 00:05:20.385 Run Summary: Type Total Ran Passed Failed Inactive 00:05:20.385 suites 1 1 n/a 0 0 00:05:20.385 tests 1 1 1 0 0 00:05:20.385 asserts 15 15 15 0 n/a 00:05:20.385 00:05:20.385 Elapsed time = 0.005 seconds 00:05:20.385 00:05:20.385 real 0m0.061s 00:05:20.385 user 0m0.018s 00:05:20.385 sys 0m0.043s 00:05:20.385 15:01:12 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:20.385 15:01:12 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:20.385 ************************************ 00:05:20.385 END TEST env_mem_callbacks 00:05:20.385 ************************************ 00:05:20.385 00:05:20.385 real 0m7.301s 00:05:20.385 user 0m1.007s 00:05:20.385 sys 0m0.842s 00:05:20.385 15:01:12 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:20.385 15:01:12 env -- common/autotest_common.sh@10 -- # set +x 00:05:20.385 ************************************ 00:05:20.385 END TEST env 00:05:20.385 ************************************ 00:05:20.385 15:01:12 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:20.385 15:01:12 
-- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:20.385 15:01:12 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:20.385 15:01:12 -- common/autotest_common.sh@10 -- # set +x 00:05:20.385 ************************************ 00:05:20.385 START TEST rpc 00:05:20.385 ************************************ 00:05:20.385 15:01:12 rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:20.385 * Looking for test storage... 00:05:20.385 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:20.385 15:01:12 rpc -- rpc/rpc.sh@65 -- # spdk_pid=33651 00:05:20.385 15:01:12 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:20.385 15:01:12 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:20.385 15:01:12 rpc -- rpc/rpc.sh@67 -- # waitforlisten 33651 00:05:20.385 15:01:12 rpc -- common/autotest_common.sh@831 -- # '[' -z 33651 ']' 00:05:20.385 15:01:12 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:20.385 15:01:12 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:20.385 15:01:12 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:20.385 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:20.385 15:01:12 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:20.385 15:01:12 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:20.385 [2024-07-25 15:01:12.530741] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:05:20.385 [2024-07-25 15:01:12.530803] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid33651 ] 00:05:20.385 EAL: No free 2048 kB hugepages reported on node 1 00:05:20.646 [2024-07-25 15:01:12.596180] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.646 [2024-07-25 15:01:12.669401] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:20.646 [2024-07-25 15:01:12.669442] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 33651' to capture a snapshot of events at runtime. 00:05:20.646 [2024-07-25 15:01:12.669450] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:20.646 [2024-07-25 15:01:12.669456] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:20.646 [2024-07-25 15:01:12.669462] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid33651 for offline analysis/debug. 
00:05:20.646 [2024-07-25 15:01:12.669486] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.217 15:01:13 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:21.217 15:01:13 rpc -- common/autotest_common.sh@864 -- # return 0 00:05:21.217 15:01:13 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:21.217 15:01:13 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:21.217 15:01:13 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:21.217 15:01:13 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:21.217 15:01:13 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:21.217 15:01:13 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:21.217 15:01:13 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:21.217 ************************************ 00:05:21.217 START TEST rpc_integrity 00:05:21.217 ************************************ 00:05:21.217 15:01:13 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:05:21.217 15:01:13 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:21.217 15:01:13 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:21.217 15:01:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:21.217 15:01:13 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:21.217 15:01:13 rpc.rpc_integrity -- 
rpc/rpc.sh@12 -- # bdevs='[]' 00:05:21.217 15:01:13 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:21.217 15:01:13 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:21.217 15:01:13 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:21.217 15:01:13 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:21.217 15:01:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:21.477 15:01:13 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:21.477 15:01:13 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:21.477 15:01:13 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:21.477 15:01:13 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:21.477 15:01:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:21.478 15:01:13 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:21.478 15:01:13 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:21.478 { 00:05:21.478 "name": "Malloc0", 00:05:21.478 "aliases": [ 00:05:21.478 "d85154c9-d018-4f95-8b28-95c5d7f5b24e" 00:05:21.478 ], 00:05:21.478 "product_name": "Malloc disk", 00:05:21.478 "block_size": 512, 00:05:21.478 "num_blocks": 16384, 00:05:21.478 "uuid": "d85154c9-d018-4f95-8b28-95c5d7f5b24e", 00:05:21.478 "assigned_rate_limits": { 00:05:21.478 "rw_ios_per_sec": 0, 00:05:21.478 "rw_mbytes_per_sec": 0, 00:05:21.478 "r_mbytes_per_sec": 0, 00:05:21.478 "w_mbytes_per_sec": 0 00:05:21.478 }, 00:05:21.478 "claimed": false, 00:05:21.478 "zoned": false, 00:05:21.478 "supported_io_types": { 00:05:21.478 "read": true, 00:05:21.478 "write": true, 00:05:21.478 "unmap": true, 00:05:21.478 "flush": true, 00:05:21.478 "reset": true, 00:05:21.478 "nvme_admin": false, 00:05:21.478 "nvme_io": false, 00:05:21.478 "nvme_io_md": false, 00:05:21.478 "write_zeroes": true, 00:05:21.478 "zcopy": true, 00:05:21.478 "get_zone_info": false, 00:05:21.478 
"zone_management": false, 00:05:21.478 "zone_append": false, 00:05:21.478 "compare": false, 00:05:21.478 "compare_and_write": false, 00:05:21.478 "abort": true, 00:05:21.478 "seek_hole": false, 00:05:21.478 "seek_data": false, 00:05:21.478 "copy": true, 00:05:21.478 "nvme_iov_md": false 00:05:21.478 }, 00:05:21.478 "memory_domains": [ 00:05:21.478 { 00:05:21.478 "dma_device_id": "system", 00:05:21.478 "dma_device_type": 1 00:05:21.478 }, 00:05:21.478 { 00:05:21.478 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:21.478 "dma_device_type": 2 00:05:21.478 } 00:05:21.478 ], 00:05:21.478 "driver_specific": {} 00:05:21.478 } 00:05:21.478 ]' 00:05:21.478 15:01:13 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:21.478 15:01:13 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:21.478 15:01:13 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:21.478 15:01:13 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:21.478 15:01:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:21.478 [2024-07-25 15:01:13.488083] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:21.478 [2024-07-25 15:01:13.488121] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:21.478 [2024-07-25 15:01:13.488134] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x8f2d80 00:05:21.478 [2024-07-25 15:01:13.488142] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:21.478 [2024-07-25 15:01:13.489489] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:21.478 [2024-07-25 15:01:13.489511] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:21.478 Passthru0 00:05:21.478 15:01:13 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:21.478 15:01:13 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd 
bdev_get_bdevs 00:05:21.478 15:01:13 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:21.478 15:01:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:21.478 15:01:13 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:21.478 15:01:13 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:21.478 { 00:05:21.478 "name": "Malloc0", 00:05:21.478 "aliases": [ 00:05:21.478 "d85154c9-d018-4f95-8b28-95c5d7f5b24e" 00:05:21.478 ], 00:05:21.478 "product_name": "Malloc disk", 00:05:21.478 "block_size": 512, 00:05:21.478 "num_blocks": 16384, 00:05:21.478 "uuid": "d85154c9-d018-4f95-8b28-95c5d7f5b24e", 00:05:21.478 "assigned_rate_limits": { 00:05:21.478 "rw_ios_per_sec": 0, 00:05:21.478 "rw_mbytes_per_sec": 0, 00:05:21.478 "r_mbytes_per_sec": 0, 00:05:21.478 "w_mbytes_per_sec": 0 00:05:21.478 }, 00:05:21.478 "claimed": true, 00:05:21.478 "claim_type": "exclusive_write", 00:05:21.478 "zoned": false, 00:05:21.478 "supported_io_types": { 00:05:21.478 "read": true, 00:05:21.478 "write": true, 00:05:21.478 "unmap": true, 00:05:21.478 "flush": true, 00:05:21.478 "reset": true, 00:05:21.478 "nvme_admin": false, 00:05:21.478 "nvme_io": false, 00:05:21.478 "nvme_io_md": false, 00:05:21.478 "write_zeroes": true, 00:05:21.478 "zcopy": true, 00:05:21.478 "get_zone_info": false, 00:05:21.478 "zone_management": false, 00:05:21.478 "zone_append": false, 00:05:21.478 "compare": false, 00:05:21.478 "compare_and_write": false, 00:05:21.478 "abort": true, 00:05:21.478 "seek_hole": false, 00:05:21.478 "seek_data": false, 00:05:21.478 "copy": true, 00:05:21.478 "nvme_iov_md": false 00:05:21.478 }, 00:05:21.478 "memory_domains": [ 00:05:21.478 { 00:05:21.478 "dma_device_id": "system", 00:05:21.478 "dma_device_type": 1 00:05:21.478 }, 00:05:21.478 { 00:05:21.478 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:21.478 "dma_device_type": 2 00:05:21.478 } 00:05:21.478 ], 00:05:21.478 "driver_specific": {} 00:05:21.478 }, 00:05:21.478 { 
00:05:21.478 "name": "Passthru0", 00:05:21.478 "aliases": [ 00:05:21.478 "adcfcab4-2459-5c81-81ce-a9249d711259" 00:05:21.478 ], 00:05:21.478 "product_name": "passthru", 00:05:21.478 "block_size": 512, 00:05:21.478 "num_blocks": 16384, 00:05:21.478 "uuid": "adcfcab4-2459-5c81-81ce-a9249d711259", 00:05:21.478 "assigned_rate_limits": { 00:05:21.478 "rw_ios_per_sec": 0, 00:05:21.478 "rw_mbytes_per_sec": 0, 00:05:21.478 "r_mbytes_per_sec": 0, 00:05:21.478 "w_mbytes_per_sec": 0 00:05:21.478 }, 00:05:21.478 "claimed": false, 00:05:21.478 "zoned": false, 00:05:21.478 "supported_io_types": { 00:05:21.478 "read": true, 00:05:21.478 "write": true, 00:05:21.478 "unmap": true, 00:05:21.478 "flush": true, 00:05:21.478 "reset": true, 00:05:21.478 "nvme_admin": false, 00:05:21.478 "nvme_io": false, 00:05:21.478 "nvme_io_md": false, 00:05:21.478 "write_zeroes": true, 00:05:21.478 "zcopy": true, 00:05:21.478 "get_zone_info": false, 00:05:21.478 "zone_management": false, 00:05:21.478 "zone_append": false, 00:05:21.478 "compare": false, 00:05:21.478 "compare_and_write": false, 00:05:21.478 "abort": true, 00:05:21.478 "seek_hole": false, 00:05:21.478 "seek_data": false, 00:05:21.478 "copy": true, 00:05:21.478 "nvme_iov_md": false 00:05:21.478 }, 00:05:21.478 "memory_domains": [ 00:05:21.478 { 00:05:21.478 "dma_device_id": "system", 00:05:21.478 "dma_device_type": 1 00:05:21.478 }, 00:05:21.478 { 00:05:21.478 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:21.478 "dma_device_type": 2 00:05:21.478 } 00:05:21.478 ], 00:05:21.478 "driver_specific": { 00:05:21.478 "passthru": { 00:05:21.478 "name": "Passthru0", 00:05:21.478 "base_bdev_name": "Malloc0" 00:05:21.478 } 00:05:21.478 } 00:05:21.478 } 00:05:21.478 ]' 00:05:21.478 15:01:13 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:21.478 15:01:13 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:21.478 15:01:13 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:21.478 15:01:13 
rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:21.478 15:01:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:21.478 15:01:13 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:21.478 15:01:13 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:21.478 15:01:13 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:21.478 15:01:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:21.478 15:01:13 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:21.478 15:01:13 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:21.478 15:01:13 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:21.478 15:01:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:21.478 15:01:13 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:21.478 15:01:13 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:21.478 15:01:13 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:21.478 15:01:13 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:21.478 00:05:21.478 real 0m0.303s 00:05:21.478 user 0m0.189s 00:05:21.478 sys 0m0.047s 00:05:21.478 15:01:13 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:21.478 15:01:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:21.478 ************************************ 00:05:21.478 END TEST rpc_integrity 00:05:21.478 ************************************ 00:05:21.769 15:01:13 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:21.769 15:01:13 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:21.769 15:01:13 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:21.769 15:01:13 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:21.769 ************************************ 00:05:21.769 START TEST rpc_plugins 
00:05:21.769 ************************************ 00:05:21.769 15:01:13 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:05:21.769 15:01:13 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:21.769 15:01:13 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:21.769 15:01:13 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:21.769 15:01:13 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:21.769 15:01:13 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:21.769 15:01:13 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:21.769 15:01:13 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:21.769 15:01:13 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:21.769 15:01:13 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:21.769 15:01:13 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:21.769 { 00:05:21.769 "name": "Malloc1", 00:05:21.769 "aliases": [ 00:05:21.769 "b3baca74-f366-40ae-ab1a-a750c4a02cc0" 00:05:21.769 ], 00:05:21.769 "product_name": "Malloc disk", 00:05:21.769 "block_size": 4096, 00:05:21.769 "num_blocks": 256, 00:05:21.769 "uuid": "b3baca74-f366-40ae-ab1a-a750c4a02cc0", 00:05:21.769 "assigned_rate_limits": { 00:05:21.769 "rw_ios_per_sec": 0, 00:05:21.769 "rw_mbytes_per_sec": 0, 00:05:21.769 "r_mbytes_per_sec": 0, 00:05:21.769 "w_mbytes_per_sec": 0 00:05:21.769 }, 00:05:21.769 "claimed": false, 00:05:21.769 "zoned": false, 00:05:21.769 "supported_io_types": { 00:05:21.769 "read": true, 00:05:21.769 "write": true, 00:05:21.769 "unmap": true, 00:05:21.769 "flush": true, 00:05:21.769 "reset": true, 00:05:21.769 "nvme_admin": false, 00:05:21.769 "nvme_io": false, 00:05:21.769 "nvme_io_md": false, 00:05:21.769 "write_zeroes": true, 00:05:21.769 "zcopy": true, 00:05:21.769 "get_zone_info": false, 00:05:21.769 "zone_management": false, 00:05:21.769 
"zone_append": false, 00:05:21.769 "compare": false, 00:05:21.769 "compare_and_write": false, 00:05:21.769 "abort": true, 00:05:21.769 "seek_hole": false, 00:05:21.769 "seek_data": false, 00:05:21.769 "copy": true, 00:05:21.769 "nvme_iov_md": false 00:05:21.769 }, 00:05:21.769 "memory_domains": [ 00:05:21.769 { 00:05:21.769 "dma_device_id": "system", 00:05:21.769 "dma_device_type": 1 00:05:21.769 }, 00:05:21.769 { 00:05:21.769 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:21.769 "dma_device_type": 2 00:05:21.769 } 00:05:21.769 ], 00:05:21.769 "driver_specific": {} 00:05:21.769 } 00:05:21.769 ]' 00:05:21.769 15:01:13 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:21.769 15:01:13 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:21.770 15:01:13 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:21.770 15:01:13 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:21.770 15:01:13 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:21.770 15:01:13 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:21.770 15:01:13 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:21.770 15:01:13 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:21.770 15:01:13 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:21.770 15:01:13 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:21.770 15:01:13 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:21.770 15:01:13 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:21.770 15:01:13 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:21.770 00:05:21.770 real 0m0.151s 00:05:21.770 user 0m0.096s 00:05:21.770 sys 0m0.019s 00:05:21.770 15:01:13 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:21.770 15:01:13 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:21.770 ************************************ 
00:05:21.770 END TEST rpc_plugins 00:05:21.770 ************************************ 00:05:21.770 15:01:13 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:21.770 15:01:13 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:21.770 15:01:13 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:21.770 15:01:13 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:21.770 ************************************ 00:05:21.770 START TEST rpc_trace_cmd_test 00:05:21.770 ************************************ 00:05:21.770 15:01:13 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:05:21.770 15:01:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:21.770 15:01:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:21.770 15:01:13 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:21.770 15:01:13 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:22.030 15:01:13 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:22.030 15:01:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:22.030 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid33651", 00:05:22.030 "tpoint_group_mask": "0x8", 00:05:22.030 "iscsi_conn": { 00:05:22.030 "mask": "0x2", 00:05:22.030 "tpoint_mask": "0x0" 00:05:22.030 }, 00:05:22.030 "scsi": { 00:05:22.030 "mask": "0x4", 00:05:22.030 "tpoint_mask": "0x0" 00:05:22.030 }, 00:05:22.030 "bdev": { 00:05:22.030 "mask": "0x8", 00:05:22.030 "tpoint_mask": "0xffffffffffffffff" 00:05:22.030 }, 00:05:22.030 "nvmf_rdma": { 00:05:22.030 "mask": "0x10", 00:05:22.030 "tpoint_mask": "0x0" 00:05:22.030 }, 00:05:22.030 "nvmf_tcp": { 00:05:22.030 "mask": "0x20", 00:05:22.030 "tpoint_mask": "0x0" 00:05:22.030 }, 00:05:22.030 "ftl": { 00:05:22.030 "mask": "0x40", 00:05:22.030 "tpoint_mask": "0x0" 00:05:22.030 }, 00:05:22.030 "blobfs": { 00:05:22.030 "mask": "0x80", 00:05:22.030 
"tpoint_mask": "0x0" 00:05:22.030 }, 00:05:22.030 "dsa": { 00:05:22.030 "mask": "0x200", 00:05:22.030 "tpoint_mask": "0x0" 00:05:22.030 }, 00:05:22.030 "thread": { 00:05:22.030 "mask": "0x400", 00:05:22.030 "tpoint_mask": "0x0" 00:05:22.030 }, 00:05:22.030 "nvme_pcie": { 00:05:22.030 "mask": "0x800", 00:05:22.030 "tpoint_mask": "0x0" 00:05:22.030 }, 00:05:22.030 "iaa": { 00:05:22.031 "mask": "0x1000", 00:05:22.031 "tpoint_mask": "0x0" 00:05:22.031 }, 00:05:22.031 "nvme_tcp": { 00:05:22.031 "mask": "0x2000", 00:05:22.031 "tpoint_mask": "0x0" 00:05:22.031 }, 00:05:22.031 "bdev_nvme": { 00:05:22.031 "mask": "0x4000", 00:05:22.031 "tpoint_mask": "0x0" 00:05:22.031 }, 00:05:22.031 "sock": { 00:05:22.031 "mask": "0x8000", 00:05:22.031 "tpoint_mask": "0x0" 00:05:22.031 } 00:05:22.031 }' 00:05:22.031 15:01:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:22.031 15:01:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:05:22.031 15:01:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:22.031 15:01:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:22.031 15:01:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:22.031 15:01:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:22.031 15:01:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:22.031 15:01:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:22.031 15:01:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:22.031 15:01:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:22.031 00:05:22.031 real 0m0.246s 00:05:22.031 user 0m0.206s 00:05:22.031 sys 0m0.031s 00:05:22.031 15:01:14 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:22.031 15:01:14 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:22.031 
************************************ 00:05:22.031 END TEST rpc_trace_cmd_test 00:05:22.031 ************************************ 00:05:22.292 15:01:14 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:22.292 15:01:14 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:22.292 15:01:14 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:22.292 15:01:14 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:22.292 15:01:14 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:22.292 15:01:14 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:22.292 ************************************ 00:05:22.292 START TEST rpc_daemon_integrity 00:05:22.292 ************************************ 00:05:22.292 15:01:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:05:22.292 15:01:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:22.292 15:01:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:22.292 15:01:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:22.292 15:01:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:22.292 15:01:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:22.292 15:01:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:22.292 15:01:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:22.292 15:01:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:22.292 15:01:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:22.292 15:01:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:22.292 15:01:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:22.292 15:01:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:22.292 15:01:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd 
bdev_get_bdevs 00:05:22.292 15:01:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:22.292 15:01:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:22.292 15:01:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:22.292 15:01:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:22.292 { 00:05:22.292 "name": "Malloc2", 00:05:22.292 "aliases": [ 00:05:22.292 "39b07645-5a1f-4dc7-bf61-f380259107db" 00:05:22.292 ], 00:05:22.292 "product_name": "Malloc disk", 00:05:22.292 "block_size": 512, 00:05:22.292 "num_blocks": 16384, 00:05:22.292 "uuid": "39b07645-5a1f-4dc7-bf61-f380259107db", 00:05:22.292 "assigned_rate_limits": { 00:05:22.292 "rw_ios_per_sec": 0, 00:05:22.292 "rw_mbytes_per_sec": 0, 00:05:22.292 "r_mbytes_per_sec": 0, 00:05:22.292 "w_mbytes_per_sec": 0 00:05:22.292 }, 00:05:22.292 "claimed": false, 00:05:22.292 "zoned": false, 00:05:22.292 "supported_io_types": { 00:05:22.292 "read": true, 00:05:22.292 "write": true, 00:05:22.292 "unmap": true, 00:05:22.292 "flush": true, 00:05:22.292 "reset": true, 00:05:22.292 "nvme_admin": false, 00:05:22.292 "nvme_io": false, 00:05:22.292 "nvme_io_md": false, 00:05:22.292 "write_zeroes": true, 00:05:22.292 "zcopy": true, 00:05:22.292 "get_zone_info": false, 00:05:22.292 "zone_management": false, 00:05:22.292 "zone_append": false, 00:05:22.292 "compare": false, 00:05:22.292 "compare_and_write": false, 00:05:22.292 "abort": true, 00:05:22.292 "seek_hole": false, 00:05:22.292 "seek_data": false, 00:05:22.292 "copy": true, 00:05:22.292 "nvme_iov_md": false 00:05:22.292 }, 00:05:22.292 "memory_domains": [ 00:05:22.292 { 00:05:22.292 "dma_device_id": "system", 00:05:22.292 "dma_device_type": 1 00:05:22.292 }, 00:05:22.292 { 00:05:22.292 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:22.292 "dma_device_type": 2 00:05:22.292 } 00:05:22.292 ], 00:05:22.292 "driver_specific": {} 00:05:22.292 } 00:05:22.292 ]' 00:05:22.292 
15:01:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:22.292 15:01:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:22.292 15:01:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:22.292 15:01:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:22.293 15:01:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:22.293 [2024-07-25 15:01:14.414600] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:22.293 [2024-07-25 15:01:14.414630] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:22.293 [2024-07-25 15:01:14.414642] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x8f3a90 00:05:22.293 [2024-07-25 15:01:14.414649] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:22.293 [2024-07-25 15:01:14.415862] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:22.293 [2024-07-25 15:01:14.415882] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:22.293 Passthru0 00:05:22.293 15:01:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:22.293 15:01:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:22.293 15:01:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:22.293 15:01:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:22.293 15:01:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:22.293 15:01:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:22.293 { 00:05:22.293 "name": "Malloc2", 00:05:22.293 "aliases": [ 00:05:22.293 "39b07645-5a1f-4dc7-bf61-f380259107db" 00:05:22.293 ], 00:05:22.293 "product_name": "Malloc disk", 00:05:22.293 "block_size": 512, 00:05:22.293 
"num_blocks": 16384, 00:05:22.293 "uuid": "39b07645-5a1f-4dc7-bf61-f380259107db", 00:05:22.293 "assigned_rate_limits": { 00:05:22.293 "rw_ios_per_sec": 0, 00:05:22.293 "rw_mbytes_per_sec": 0, 00:05:22.293 "r_mbytes_per_sec": 0, 00:05:22.293 "w_mbytes_per_sec": 0 00:05:22.293 }, 00:05:22.293 "claimed": true, 00:05:22.293 "claim_type": "exclusive_write", 00:05:22.293 "zoned": false, 00:05:22.293 "supported_io_types": { 00:05:22.293 "read": true, 00:05:22.293 "write": true, 00:05:22.293 "unmap": true, 00:05:22.293 "flush": true, 00:05:22.293 "reset": true, 00:05:22.293 "nvme_admin": false, 00:05:22.293 "nvme_io": false, 00:05:22.293 "nvme_io_md": false, 00:05:22.293 "write_zeroes": true, 00:05:22.293 "zcopy": true, 00:05:22.293 "get_zone_info": false, 00:05:22.293 "zone_management": false, 00:05:22.293 "zone_append": false, 00:05:22.293 "compare": false, 00:05:22.293 "compare_and_write": false, 00:05:22.293 "abort": true, 00:05:22.293 "seek_hole": false, 00:05:22.293 "seek_data": false, 00:05:22.293 "copy": true, 00:05:22.293 "nvme_iov_md": false 00:05:22.293 }, 00:05:22.293 "memory_domains": [ 00:05:22.293 { 00:05:22.293 "dma_device_id": "system", 00:05:22.293 "dma_device_type": 1 00:05:22.293 }, 00:05:22.293 { 00:05:22.293 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:22.293 "dma_device_type": 2 00:05:22.293 } 00:05:22.293 ], 00:05:22.293 "driver_specific": {} 00:05:22.293 }, 00:05:22.293 { 00:05:22.293 "name": "Passthru0", 00:05:22.293 "aliases": [ 00:05:22.293 "5e8b6d7a-d2d5-5593-ae76-d81e47db1510" 00:05:22.293 ], 00:05:22.293 "product_name": "passthru", 00:05:22.293 "block_size": 512, 00:05:22.293 "num_blocks": 16384, 00:05:22.293 "uuid": "5e8b6d7a-d2d5-5593-ae76-d81e47db1510", 00:05:22.293 "assigned_rate_limits": { 00:05:22.293 "rw_ios_per_sec": 0, 00:05:22.293 "rw_mbytes_per_sec": 0, 00:05:22.293 "r_mbytes_per_sec": 0, 00:05:22.293 "w_mbytes_per_sec": 0 00:05:22.293 }, 00:05:22.293 "claimed": false, 00:05:22.293 "zoned": false, 00:05:22.293 
"supported_io_types": { 00:05:22.293 "read": true, 00:05:22.293 "write": true, 00:05:22.293 "unmap": true, 00:05:22.293 "flush": true, 00:05:22.293 "reset": true, 00:05:22.293 "nvme_admin": false, 00:05:22.293 "nvme_io": false, 00:05:22.293 "nvme_io_md": false, 00:05:22.293 "write_zeroes": true, 00:05:22.293 "zcopy": true, 00:05:22.293 "get_zone_info": false, 00:05:22.293 "zone_management": false, 00:05:22.293 "zone_append": false, 00:05:22.293 "compare": false, 00:05:22.293 "compare_and_write": false, 00:05:22.293 "abort": true, 00:05:22.293 "seek_hole": false, 00:05:22.293 "seek_data": false, 00:05:22.293 "copy": true, 00:05:22.293 "nvme_iov_md": false 00:05:22.293 }, 00:05:22.293 "memory_domains": [ 00:05:22.293 { 00:05:22.293 "dma_device_id": "system", 00:05:22.293 "dma_device_type": 1 00:05:22.293 }, 00:05:22.293 { 00:05:22.293 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:22.293 "dma_device_type": 2 00:05:22.293 } 00:05:22.293 ], 00:05:22.293 "driver_specific": { 00:05:22.293 "passthru": { 00:05:22.293 "name": "Passthru0", 00:05:22.293 "base_bdev_name": "Malloc2" 00:05:22.293 } 00:05:22.293 } 00:05:22.293 } 00:05:22.293 ]' 00:05:22.293 15:01:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:22.555 15:01:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:22.555 15:01:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:22.555 15:01:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:22.555 15:01:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:22.555 15:01:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:22.555 15:01:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:22.555 15:01:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:22.555 15:01:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # 
set +x 00:05:22.555 15:01:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:22.555 15:01:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:22.555 15:01:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:22.555 15:01:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:22.555 15:01:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:22.555 15:01:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:22.555 15:01:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:22.555 15:01:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:22.555 00:05:22.555 real 0m0.298s 00:05:22.555 user 0m0.190s 00:05:22.555 sys 0m0.039s 00:05:22.555 15:01:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:22.555 15:01:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:22.555 ************************************ 00:05:22.555 END TEST rpc_daemon_integrity 00:05:22.555 ************************************ 00:05:22.555 15:01:14 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:22.555 15:01:14 rpc -- rpc/rpc.sh@84 -- # killprocess 33651 00:05:22.555 15:01:14 rpc -- common/autotest_common.sh@950 -- # '[' -z 33651 ']' 00:05:22.555 15:01:14 rpc -- common/autotest_common.sh@954 -- # kill -0 33651 00:05:22.555 15:01:14 rpc -- common/autotest_common.sh@955 -- # uname 00:05:22.555 15:01:14 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:22.555 15:01:14 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 33651 00:05:22.555 15:01:14 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:22.555 15:01:14 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:22.555 15:01:14 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 33651' 00:05:22.555 
killing process with pid 33651 00:05:22.555 15:01:14 rpc -- common/autotest_common.sh@969 -- # kill 33651 00:05:22.555 15:01:14 rpc -- common/autotest_common.sh@974 -- # wait 33651 00:05:22.819 00:05:22.819 real 0m2.498s 00:05:22.819 user 0m3.295s 00:05:22.819 sys 0m0.706s 00:05:22.819 15:01:14 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:22.819 15:01:14 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:22.819 ************************************ 00:05:22.819 END TEST rpc 00:05:22.819 ************************************ 00:05:22.819 15:01:14 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:22.819 15:01:14 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:22.819 15:01:14 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:22.819 15:01:14 -- common/autotest_common.sh@10 -- # set +x 00:05:22.819 ************************************ 00:05:22.819 START TEST skip_rpc 00:05:22.819 ************************************ 00:05:22.819 15:01:14 skip_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:23.140 * Looking for test storage... 
00:05:23.140 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:23.140 15:01:15 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:23.140 15:01:15 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:23.140 15:01:15 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:23.140 15:01:15 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:23.140 15:01:15 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:23.140 15:01:15 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:23.140 ************************************ 00:05:23.140 START TEST skip_rpc 00:05:23.140 ************************************ 00:05:23.140 15:01:15 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:05:23.140 15:01:15 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=34189 00:05:23.140 15:01:15 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:23.140 15:01:15 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:23.140 15:01:15 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:23.140 [2024-07-25 15:01:15.139947] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:05:23.140 [2024-07-25 15:01:15.140004] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid34189 ] 00:05:23.140 EAL: No free 2048 kB hugepages reported on node 1 00:05:23.140 [2024-07-25 15:01:15.201829] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.140 [2024-07-25 15:01:15.271817] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.427 15:01:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:28.427 15:01:20 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:05:28.427 15:01:20 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:28.427 15:01:20 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:28.427 15:01:20 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:28.427 15:01:20 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:28.427 15:01:20 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:28.427 15:01:20 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:05:28.427 15:01:20 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:28.427 15:01:20 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:28.427 15:01:20 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:28.427 15:01:20 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:05:28.427 15:01:20 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:28.427 15:01:20 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:28.427 15:01:20 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 
0 )) 00:05:28.427 15:01:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:28.427 15:01:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 34189 00:05:28.427 15:01:20 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 34189 ']' 00:05:28.427 15:01:20 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 34189 00:05:28.427 15:01:20 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:05:28.427 15:01:20 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:28.427 15:01:20 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 34189 00:05:28.427 15:01:20 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:28.427 15:01:20 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:28.427 15:01:20 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 34189' 00:05:28.427 killing process with pid 34189 00:05:28.427 15:01:20 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 34189 00:05:28.427 15:01:20 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 34189 00:05:28.427 00:05:28.427 real 0m5.279s 00:05:28.427 user 0m5.080s 00:05:28.427 sys 0m0.229s 00:05:28.427 15:01:20 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:28.427 15:01:20 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:28.427 ************************************ 00:05:28.427 END TEST skip_rpc 00:05:28.427 ************************************ 00:05:28.427 15:01:20 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:28.427 15:01:20 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:28.427 15:01:20 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:28.427 15:01:20 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:28.427 
************************************ 00:05:28.427 START TEST skip_rpc_with_json 00:05:28.427 ************************************ 00:05:28.427 15:01:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:05:28.427 15:01:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:28.427 15:01:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=35360 00:05:28.427 15:01:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:28.427 15:01:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 35360 00:05:28.427 15:01:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:28.427 15:01:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 35360 ']' 00:05:28.427 15:01:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:28.427 15:01:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:28.427 15:01:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:28.427 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:28.427 15:01:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:28.427 15:01:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:28.427 [2024-07-25 15:01:20.497599] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:05:28.427 [2024-07-25 15:01:20.497655] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid35360 ] 00:05:28.427 EAL: No free 2048 kB hugepages reported on node 1 00:05:28.427 [2024-07-25 15:01:20.557942] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.687 [2024-07-25 15:01:20.627090] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.258 15:01:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:29.258 15:01:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:05:29.258 15:01:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:29.258 15:01:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:29.258 15:01:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:29.258 [2024-07-25 15:01:21.259914] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:29.258 request: 00:05:29.258 { 00:05:29.258 "trtype": "tcp", 00:05:29.258 "method": "nvmf_get_transports", 00:05:29.258 "req_id": 1 00:05:29.258 } 00:05:29.258 Got JSON-RPC error response 00:05:29.258 response: 00:05:29.258 { 00:05:29.258 "code": -19, 00:05:29.258 "message": "No such device" 00:05:29.258 } 00:05:29.258 15:01:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:29.258 15:01:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:29.258 15:01:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:29.258 15:01:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:29.258 [2024-07-25 15:01:21.272034] tcp.c: 
677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:29.258 15:01:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:29.258 15:01:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:29.258 15:01:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:29.258 15:01:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:29.258 15:01:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:29.258 15:01:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:29.258 { 00:05:29.258 "subsystems": [ 00:05:29.258 { 00:05:29.258 "subsystem": "vfio_user_target", 00:05:29.258 "config": null 00:05:29.258 }, 00:05:29.258 { 00:05:29.258 "subsystem": "keyring", 00:05:29.258 "config": [] 00:05:29.258 }, 00:05:29.258 { 00:05:29.258 "subsystem": "iobuf", 00:05:29.258 "config": [ 00:05:29.258 { 00:05:29.258 "method": "iobuf_set_options", 00:05:29.258 "params": { 00:05:29.258 "small_pool_count": 8192, 00:05:29.258 "large_pool_count": 1024, 00:05:29.258 "small_bufsize": 8192, 00:05:29.258 "large_bufsize": 135168 00:05:29.258 } 00:05:29.258 } 00:05:29.258 ] 00:05:29.258 }, 00:05:29.258 { 00:05:29.258 "subsystem": "sock", 00:05:29.258 "config": [ 00:05:29.258 { 00:05:29.258 "method": "sock_set_default_impl", 00:05:29.258 "params": { 00:05:29.258 "impl_name": "posix" 00:05:29.258 } 00:05:29.258 }, 00:05:29.258 { 00:05:29.258 "method": "sock_impl_set_options", 00:05:29.258 "params": { 00:05:29.258 "impl_name": "ssl", 00:05:29.258 "recv_buf_size": 4096, 00:05:29.258 "send_buf_size": 4096, 00:05:29.258 "enable_recv_pipe": true, 00:05:29.258 "enable_quickack": false, 00:05:29.258 "enable_placement_id": 0, 00:05:29.258 "enable_zerocopy_send_server": true, 00:05:29.258 "enable_zerocopy_send_client": false, 00:05:29.258 "zerocopy_threshold": 0, 
00:05:29.258  "tls_version": 0,
00:05:29.258  "enable_ktls": false
00:05:29.258  }
00:05:29.258  },
00:05:29.258  {
00:05:29.258  "method": "sock_impl_set_options",
00:05:29.258  "params": {
00:05:29.258  "impl_name": "posix",
00:05:29.258  "recv_buf_size": 2097152,
00:05:29.258  "send_buf_size": 2097152,
00:05:29.258  "enable_recv_pipe": true,
00:05:29.258  "enable_quickack": false,
00:05:29.258  "enable_placement_id": 0,
00:05:29.258  "enable_zerocopy_send_server": true,
00:05:29.258  "enable_zerocopy_send_client": false,
00:05:29.258  "zerocopy_threshold": 0,
00:05:29.258  "tls_version": 0,
00:05:29.258  "enable_ktls": false
00:05:29.258  }
00:05:29.258  }
00:05:29.258  ]
00:05:29.258  },
00:05:29.258  {
00:05:29.258  "subsystem": "vmd",
00:05:29.258  "config": []
00:05:29.258  },
00:05:29.258  {
00:05:29.258  "subsystem": "accel",
00:05:29.258  "config": [
00:05:29.258  {
00:05:29.258  "method": "accel_set_options",
00:05:29.258  "params": {
00:05:29.258  "small_cache_size": 128,
00:05:29.258  "large_cache_size": 16,
00:05:29.258  "task_count": 2048,
00:05:29.258  "sequence_count": 2048,
00:05:29.258  "buf_count": 2048
00:05:29.258  }
00:05:29.258  }
00:05:29.258  ]
00:05:29.258  },
00:05:29.258  {
00:05:29.258  "subsystem": "bdev",
00:05:29.258  "config": [
00:05:29.258  {
00:05:29.258  "method": "bdev_set_options",
00:05:29.258  "params": {
00:05:29.258  "bdev_io_pool_size": 65535,
00:05:29.258  "bdev_io_cache_size": 256,
00:05:29.258  "bdev_auto_examine": true,
00:05:29.258  "iobuf_small_cache_size": 128,
00:05:29.258  "iobuf_large_cache_size": 16
00:05:29.258  }
00:05:29.258  },
00:05:29.258  {
00:05:29.258  "method": "bdev_raid_set_options",
00:05:29.258  "params": {
00:05:29.258  "process_window_size_kb": 1024,
00:05:29.258  "process_max_bandwidth_mb_sec": 0
00:05:29.258  }
00:05:29.258  },
00:05:29.258  {
00:05:29.258  "method": "bdev_iscsi_set_options",
00:05:29.258  "params": {
00:05:29.258  "timeout_sec": 30
00:05:29.258  }
00:05:29.258  },
00:05:29.258  {
00:05:29.258  "method": "bdev_nvme_set_options",
00:05:29.258  "params": {
00:05:29.258  "action_on_timeout": "none",
00:05:29.258  "timeout_us": 0,
00:05:29.258  "timeout_admin_us": 0,
00:05:29.258  "keep_alive_timeout_ms": 10000,
00:05:29.258  "arbitration_burst": 0,
00:05:29.258  "low_priority_weight": 0,
00:05:29.258  "medium_priority_weight": 0,
00:05:29.258  "high_priority_weight": 0,
00:05:29.258  "nvme_adminq_poll_period_us": 10000,
00:05:29.258  "nvme_ioq_poll_period_us": 0,
00:05:29.258  "io_queue_requests": 0,
00:05:29.258  "delay_cmd_submit": true,
00:05:29.258  "transport_retry_count": 4,
00:05:29.258  "bdev_retry_count": 3,
00:05:29.258  "transport_ack_timeout": 0,
00:05:29.258  "ctrlr_loss_timeout_sec": 0,
00:05:29.258  "reconnect_delay_sec": 0,
00:05:29.258  "fast_io_fail_timeout_sec": 0,
00:05:29.258  "disable_auto_failback": false,
00:05:29.258  "generate_uuids": false,
00:05:29.258  "transport_tos": 0,
00:05:29.258  "nvme_error_stat": false,
00:05:29.258  "rdma_srq_size": 0,
00:05:29.258  "io_path_stat": false,
00:05:29.258  "allow_accel_sequence": false,
00:05:29.258  "rdma_max_cq_size": 0,
00:05:29.258  "rdma_cm_event_timeout_ms": 0,
00:05:29.258  "dhchap_digests": [
00:05:29.258  "sha256",
00:05:29.258  "sha384",
00:05:29.258  "sha512"
00:05:29.258  ],
00:05:29.258  "dhchap_dhgroups": [
00:05:29.258  "null",
00:05:29.258  "ffdhe2048",
00:05:29.258  "ffdhe3072",
00:05:29.258  "ffdhe4096",
00:05:29.258  "ffdhe6144",
00:05:29.258  "ffdhe8192"
00:05:29.258  ]
00:05:29.258  }
00:05:29.258  },
00:05:29.258  {
00:05:29.258  "method": "bdev_nvme_set_hotplug",
00:05:29.258  "params": {
00:05:29.258  "period_us": 100000,
00:05:29.258  "enable": false
00:05:29.258  }
00:05:29.258  },
00:05:29.258  {
00:05:29.258  "method": "bdev_wait_for_examine"
00:05:29.258  }
00:05:29.259  ]
00:05:29.259  },
00:05:29.259  {
00:05:29.259  "subsystem": "scsi",
00:05:29.259  "config": null
00:05:29.259  },
00:05:29.259  {
00:05:29.259  "subsystem": "scheduler",
00:05:29.259  "config": [
00:05:29.259  {
00:05:29.259  "method": "framework_set_scheduler",
00:05:29.259  "params": {
00:05:29.259  "name": "static"
00:05:29.259  }
00:05:29.259  }
00:05:29.259  ]
00:05:29.259  },
00:05:29.259  {
00:05:29.259  "subsystem": "vhost_scsi",
00:05:29.259  "config": []
00:05:29.259  },
00:05:29.259  {
00:05:29.259  "subsystem": "vhost_blk",
00:05:29.259  "config": []
00:05:29.259  },
00:05:29.259  {
00:05:29.259  "subsystem": "ublk",
00:05:29.259  "config": []
00:05:29.259  },
00:05:29.259  {
00:05:29.259  "subsystem": "nbd",
00:05:29.259  "config": []
00:05:29.259  },
00:05:29.259  {
00:05:29.259  "subsystem": "nvmf",
00:05:29.259  "config": [
00:05:29.259  {
00:05:29.259  "method": "nvmf_set_config",
00:05:29.259  "params": {
00:05:29.259  "discovery_filter": "match_any",
00:05:29.259  "admin_cmd_passthru": {
00:05:29.259  "identify_ctrlr": false
00:05:29.259  }
00:05:29.259  }
00:05:29.259  },
00:05:29.259  {
00:05:29.259  "method": "nvmf_set_max_subsystems",
00:05:29.259  "params": {
00:05:29.259  "max_subsystems": 1024
00:05:29.259  }
00:05:29.259  },
00:05:29.259  {
00:05:29.259  "method": "nvmf_set_crdt",
00:05:29.259  "params": {
00:05:29.259  "crdt1": 0,
00:05:29.259  "crdt2": 0,
00:05:29.259  "crdt3": 0
00:05:29.259  }
00:05:29.259  },
00:05:29.259  {
00:05:29.259  "method": "nvmf_create_transport",
00:05:29.259  "params": {
00:05:29.259  "trtype": "TCP",
00:05:29.259  "max_queue_depth": 128,
00:05:29.259  "max_io_qpairs_per_ctrlr": 127,
00:05:29.259  "in_capsule_data_size": 4096,
00:05:29.259  "max_io_size": 131072,
00:05:29.259  "io_unit_size": 131072,
00:05:29.259  "max_aq_depth": 128,
00:05:29.259  "num_shared_buffers": 511,
00:05:29.259  "buf_cache_size": 4294967295,
00:05:29.259  "dif_insert_or_strip": false,
00:05:29.259  "zcopy": false,
00:05:29.259  "c2h_success": true,
00:05:29.259  "sock_priority": 0,
00:05:29.259  "abort_timeout_sec": 1,
00:05:29.259  "ack_timeout": 0,
00:05:29.259  "data_wr_pool_size": 0
00:05:29.259  }
00:05:29.259  }
00:05:29.259  ]
00:05:29.259  },
00:05:29.259  {
00:05:29.259  "subsystem": "iscsi",
00:05:29.259  "config": [
00:05:29.259  {
00:05:29.259  "method": "iscsi_set_options",
00:05:29.259  "params": {
00:05:29.259  "node_base": "iqn.2016-06.io.spdk",
00:05:29.259  "max_sessions": 128,
00:05:29.259  "max_connections_per_session": 2,
00:05:29.259  "max_queue_depth": 64,
00:05:29.259  "default_time2wait": 2,
00:05:29.259  "default_time2retain": 20,
00:05:29.259  "first_burst_length": 8192,
00:05:29.259  "immediate_data": true,
00:05:29.259  "allow_duplicated_isid": false,
00:05:29.259  "error_recovery_level": 0,
00:05:29.259  "nop_timeout": 60,
00:05:29.259  "nop_in_interval": 30,
00:05:29.259  "disable_chap": false,
00:05:29.259  "require_chap": false,
00:05:29.259  "mutual_chap": false,
00:05:29.259  "chap_group": 0,
00:05:29.259  "max_large_datain_per_connection": 64,
00:05:29.259  "max_r2t_per_connection": 4,
00:05:29.259  "pdu_pool_size": 36864,
00:05:29.259  "immediate_data_pool_size": 16384,
00:05:29.259  "data_out_pool_size": 2048
00:05:29.259  }
00:05:29.259  }
00:05:29.259  ]
00:05:29.259  }
00:05:29.259  ]
00:05:29.259 }
00:05:29.259 15:01:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT
00:05:29.259 15:01:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 35360
00:05:29.259 15:01:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 35360 ']'
00:05:29.259 15:01:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 35360
00:05:29.259 15:01:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname
00:05:29.259 15:01:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:05:29.259 15:01:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 35360
00:05:29.520 15:01:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:05:29.520 15:01:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:05:29.520 15:01:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 35360'
00:05:29.520 killing process with pid 35360
00:05:29.520 15:01:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 35360
00:05:29.520 15:01:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 35360
00:05:29.520 15:01:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=35556
00:05:29.520 15:01:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5
00:05:29.520 15:01:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json
00:05:34.807 15:01:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 35556
00:05:34.807 15:01:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 35556 ']'
00:05:34.807 15:01:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 35556
00:05:34.807 15:01:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname
00:05:34.807 15:01:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:05:34.807 15:01:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 35556
00:05:34.807 15:01:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:05:34.807 15:01:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:05:34.807 15:01:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 35556'
00:05:34.807 killing process with pid 35556
00:05:34.807 15:01:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 35556
00:05:34.807 15:01:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 35556
00:05:34.807 15:01:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt
00:05:34.807 15:01:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt
00:05:34.807
00:05:34.807 real 0m6.534s
00:05:34.807 user 0m6.406s
00:05:34.807 sys 0m0.525s
00:05:34.807 15:01:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:34.807 15:01:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:05:34.807 ************************************
00:05:34.807 END TEST skip_rpc_with_json
00:05:34.807 ************************************
00:05:35.068 15:01:27 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay
00:05:35.068 15:01:27 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:35.068 15:01:27 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:35.068 15:01:27 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:35.068 ************************************
00:05:35.068 START TEST skip_rpc_with_delay
00:05:35.068 ************************************
00:05:35.068 15:01:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay
00:05:35.068 15:01:27 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
00:05:35.068 15:01:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0
00:05:35.068 15:01:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
00:05:35.068 15:01:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:05:35.068 15:01:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:05:35.068 15:01:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:05:35.068 15:01:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:05:35.068 15:01:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:05:35.068 15:01:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:05:35.068 15:01:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:05:35.068 15:01:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]]
00:05:35.068 15:01:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
00:05:35.068 [2024-07-25 15:01:27.112009] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started.
00:05:35.068 [2024-07-25 15:01:27.112096] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2
00:05:35.068 15:01:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1
00:05:35.068 15:01:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:05:35.068 15:01:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:05:35.068 15:01:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:05:35.068
00:05:35.068 real 0m0.077s
00:05:35.068 user 0m0.051s
00:05:35.068 sys 0m0.026s
00:05:35.068 15:01:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:35.068 15:01:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x
00:05:35.068 ************************************
00:05:35.068 END TEST skip_rpc_with_delay
00:05:35.068 ************************************
00:05:35.069 15:01:27 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname
00:05:35.069 15:01:27 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']'
00:05:35.069 15:01:27 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init
00:05:35.069 15:01:27 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:35.069 15:01:27 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:35.069 15:01:27 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:35.069 ************************************
00:05:35.069 START TEST exit_on_failed_rpc_init
00:05:35.069 ************************************
00:05:35.069 15:01:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init
00:05:35.069 15:01:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=36758
00:05:35.069 15:01:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 36758
00:05:35.069 15:01:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:05:35.069 15:01:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 36758 ']'
00:05:35.069 15:01:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:35.069 15:01:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100
00:05:35.069 15:01:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:35.069 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:35.069 15:01:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable
00:05:35.069 15:01:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x
00:05:35.330 [2024-07-25 15:01:27.269041] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
00:05:35.330 [2024-07-25 15:01:27.269102] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid36758 ]
00:05:35.330 EAL: No free 2048 kB hugepages reported on node 1
00:05:35.330 [2024-07-25 15:01:27.336087] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:35.330 [2024-07-25 15:01:27.410845] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:35.902 15:01:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:05:35.902 15:01:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0
00:05:35.902 15:01:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:05:35.902 15:01:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2
00:05:35.902 15:01:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0
00:05:35.902 15:01:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2
00:05:35.902 15:01:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:05:35.902 15:01:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:05:35.902 15:01:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:05:35.902 15:01:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:05:35.902 15:01:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:05:35.902 15:01:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:05:35.902 15:01:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:05:35.902 15:01:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]]
00:05:35.902 15:01:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2
00:05:36.163 [2024-07-25 15:01:28.115252] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
00:05:36.163 [2024-07-25 15:01:28.115300] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid36954 ]
00:05:36.163 EAL: No free 2048 kB hugepages reported on node 1
00:05:36.163 [2024-07-25 15:01:28.191500] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:36.163 [2024-07-25 15:01:28.255356] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:05:36.163 [2024-07-25 15:01:28.255418] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another.
00:05:36.163 [2024-07-25 15:01:28.255430] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock
00:05:36.163 [2024-07-25 15:01:28.255436] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:05:36.163 15:01:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234
00:05:36.163 15:01:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:05:36.163 15:01:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106
00:05:36.163 15:01:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in
00:05:36.163 15:01:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1
00:05:36.163 15:01:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:05:36.163 15:01:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:05:36.163 15:01:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 36758
00:05:36.163 15:01:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 36758 ']'
00:05:36.163 15:01:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 36758
00:05:36.163 15:01:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname
00:05:36.163 15:01:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:05:36.163 15:01:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 36758
00:05:36.423 15:01:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:05:36.423 15:01:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:05:36.423 15:01:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 36758'
00:05:36.423 killing process with pid 36758
00:05:36.423 15:01:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 36758
00:05:36.423 15:01:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 36758
00:05:36.423
00:05:36.423 real 0m1.368s
00:05:36.423 user 0m1.603s
00:05:36.423 sys 0m0.382s
00:05:36.423 15:01:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:36.424 15:01:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x
00:05:36.424 ************************************
00:05:36.424 END TEST exit_on_failed_rpc_init
00:05:36.424 ************************************
00:05:36.424 15:01:28 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json
00:05:36.424
00:05:36.424 real 0m13.669s
00:05:36.424 user 0m13.294s
00:05:36.424 sys 0m1.442s
00:05:36.424 15:01:28 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:36.424 15:01:28 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:36.424 ************************************
00:05:36.424 END TEST skip_rpc
00:05:36.424 ************************************
00:05:36.685 15:01:28 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh
00:05:36.685 15:01:28 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:36.685 15:01:28 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:36.685 15:01:28 -- common/autotest_common.sh@10 -- # set +x
00:05:36.685 ************************************
00:05:36.685 START TEST rpc_client
00:05:36.685 ************************************
00:05:36.685 15:01:28 rpc_client -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh
00:05:36.685 * Looking for test storage...
00:05:36.685 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client
00:05:36.685 15:01:28 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test
00:05:36.685 OK
00:05:36.685 15:01:28 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT
00:05:36.685
00:05:36.685 real 0m0.100s
00:05:36.685 user 0m0.034s
00:05:36.685 sys 0m0.072s
00:05:36.685 15:01:28 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:36.685 15:01:28 rpc_client -- common/autotest_common.sh@10 -- # set +x
00:05:36.685 ************************************
00:05:36.685 END TEST rpc_client
00:05:36.685 ************************************
00:05:36.685 15:01:28 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh
00:05:36.685 15:01:28 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:36.685 15:01:28 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:36.685 15:01:28 -- common/autotest_common.sh@10 -- # set +x
00:05:36.685 ************************************
00:05:36.685 START TEST json_config
00:05:36.685 ************************************
00:05:36.685 15:01:28 json_config -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh
00:05:36.948 15:01:28 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:05:36.948 15:01:28 json_config -- nvmf/common.sh@7 -- # uname -s
00:05:36.948 15:01:28 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:05:36.948 15:01:28 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:05:36.948 15:01:28 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:05:36.948 15:01:28 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:05:36.948 15:01:28 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:05:36.948 15:01:28 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:05:36.948 15:01:28 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:05:36.948 15:01:28 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:05:36.948 15:01:28 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:05:36.948 15:01:28 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:05:36.948 15:01:28 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:05:36.948 15:01:28 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be
00:05:36.948 15:01:28 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:05:36.948 15:01:28 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:05:36.948 15:01:28 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback
00:05:36.948 15:01:28 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:05:36.948 15:01:28 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:05:36.948 15:01:28 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:05:36.948 15:01:28 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:05:36.948 15:01:28 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:05:36.948 15:01:28 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:36.948 15:01:28 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:36.948 15:01:28 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:36.948 15:01:28 json_config -- paths/export.sh@5 -- # export PATH
00:05:36.948 15:01:28 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:36.948 15:01:28 json_config -- nvmf/common.sh@47 -- # : 0
00:05:36.948 15:01:28 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:05:36.948 15:01:28 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:05:36.948 15:01:28 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:05:36.948 15:01:28 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:05:36.948 15:01:28 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:05:36.948 15:01:28 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:05:36.948 15:01:28 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:05:36.948 15:01:28 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0
00:05:36.948 15:01:28 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh
00:05:36.948 15:01:28 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]]
00:05:36.948 15:01:28 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]]
00:05:36.948 15:01:28 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]]
00:05:36.948 15:01:28 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 ))
00:05:36.948 15:01:28 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='')
00:05:36.948 15:01:28 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid
00:05:36.948 15:01:28 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock')
00:05:36.948 15:01:28 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket
00:05:36.948 15:01:28 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024')
00:05:36.948 15:01:28 json_config -- json_config/json_config.sh@33 -- # declare -A app_params
00:05:36.948 15:01:28 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json')
00:05:36.948 15:01:28 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path
00:05:36.948 15:01:28 json_config -- json_config/json_config.sh@40 -- # last_event_id=0
00:05:36.948 15:01:28 json_config -- json_config/json_config.sh@359 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR
00:05:36.948 15:01:28 json_config -- json_config/json_config.sh@360 -- # echo 'INFO: JSON configuration test init'
00:05:36.948 INFO: JSON configuration test init
00:05:36.948 15:01:28 json_config -- json_config/json_config.sh@361 -- # json_config_test_init
00:05:36.948 15:01:28 json_config -- json_config/json_config.sh@266 -- # timing_enter json_config_test_init
00:05:36.948 15:01:28 json_config -- common/autotest_common.sh@724 -- # xtrace_disable
00:05:36.948 15:01:28 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:36.948 15:01:28 json_config -- json_config/json_config.sh@267 -- # timing_enter json_config_setup_target
00:05:36.948 15:01:28 json_config -- common/autotest_common.sh@724 -- # xtrace_disable
00:05:36.948 15:01:28 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:36.948 15:01:28 json_config -- json_config/json_config.sh@269 -- # json_config_test_start_app target --wait-for-rpc
00:05:36.948 15:01:28 json_config -- json_config/common.sh@9 -- # local app=target
00:05:36.948 15:01:28 json_config -- json_config/common.sh@10 -- # shift
00:05:36.948 15:01:28 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]]
00:05:36.948 15:01:28 json_config -- json_config/common.sh@13 -- # [[ -z '' ]]
00:05:36.948 15:01:28 json_config -- json_config/common.sh@15 -- # local app_extra_params=
00:05:36.948 15:01:28 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:05:36.948 15:01:28 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:05:36.948 15:01:28 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=37243
00:05:36.948 15:01:28 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...'
00:05:36.948 Waiting for target to run...
00:05:36.948 15:01:28 json_config -- json_config/common.sh@25 -- # waitforlisten 37243 /var/tmp/spdk_tgt.sock
00:05:36.948 15:01:28 json_config -- common/autotest_common.sh@831 -- # '[' -z 37243 ']'
00:05:36.948 15:01:28 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock
00:05:36.948 15:01:28 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc
00:05:36.948 15:01:28 json_config -- common/autotest_common.sh@836 -- # local max_retries=100
00:05:36.948 15:01:28 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...'
00:05:36.948 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...
00:05:36.948 15:01:28 json_config -- common/autotest_common.sh@840 -- # xtrace_disable
00:05:36.948 15:01:28 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:36.949 [2024-07-25 15:01:29.048470] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
00:05:36.949 [2024-07-25 15:01:29.048541] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid37243 ]
00:05:36.949 EAL: No free 2048 kB hugepages reported on node 1
00:05:37.217 [2024-07-25 15:01:29.406446] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:37.478 [2024-07-25 15:01:29.457943] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:37.739 15:01:29 json_config -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:05:37.739 15:01:29 json_config -- common/autotest_common.sh@864 -- # return 0
00:05:37.739 15:01:29 json_config -- json_config/common.sh@26 -- # echo ''
00:05:37.739
00:05:37.739 15:01:29 json_config -- json_config/json_config.sh@273 -- # create_accel_config
00:05:37.739 15:01:29 json_config -- json_config/json_config.sh@97 -- # timing_enter create_accel_config
00:05:37.739 15:01:29 json_config -- common/autotest_common.sh@724 -- # xtrace_disable
00:05:37.739 15:01:29 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:37.739 15:01:29 json_config -- json_config/json_config.sh@99 -- # [[ 0 -eq 1 ]]
00:05:37.739 15:01:29 json_config -- json_config/json_config.sh@105 -- # timing_exit create_accel_config
00:05:37.739 15:01:29 json_config -- common/autotest_common.sh@730 -- # xtrace_disable
00:05:37.739 15:01:29 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:37.739 15:01:29 json_config -- json_config/json_config.sh@277 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems
00:05:37.739 15:01:29 json_config -- json_config/json_config.sh@278 -- # tgt_rpc load_config
00:05:37.739 15:01:29 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config
00:05:38.310 15:01:30 json_config -- json_config/json_config.sh@280 -- # tgt_check_notification_types
00:05:38.311 15:01:30 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types
00:05:38.311 15:01:30 json_config -- common/autotest_common.sh@724 -- # xtrace_disable
00:05:38.311 15:01:30 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:38.311 15:01:30 json_config -- json_config/json_config.sh@45 -- # local ret=0
00:05:38.311 15:01:30 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister')
00:05:38.311 15:01:30 json_config -- json_config/json_config.sh@46 -- # local enabled_types
00:05:38.311 15:01:30 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types
00:05:38.311 15:01:30 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types
00:05:38.311 15:01:30 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]'
00:05:38.571 15:01:30 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister')
00:05:38.571 15:01:30 json_config -- json_config/json_config.sh@48 -- # local get_types
00:05:38.571 15:01:30 json_config -- json_config/json_config.sh@50 -- # local type_diff
00:05:38.571 15:01:30 json_config -- json_config/json_config.sh@51 -- # echo bdev_register bdev_unregister bdev_register bdev_unregister
00:05:38.571 15:01:30 json_config -- json_config/json_config.sh@51 -- # tr ' ' '\n'
00:05:38.571 15:01:30 json_config -- json_config/json_config.sh@51 -- # sort
00:05:38.572 15:01:30 json_config -- json_config/json_config.sh@51 -- # uniq -u
00:05:38.572 15:01:30 json_config -- json_config/json_config.sh@51 -- # type_diff=
00:05:38.572 15:01:30 json_config -- json_config/json_config.sh@53 -- # [[ -n '' ]]
00:05:38.572 15:01:30 json_config -- json_config/json_config.sh@58 -- # timing_exit tgt_check_notification_types
00:05:38.572 15:01:30 json_config -- common/autotest_common.sh@730 -- # xtrace_disable
00:05:38.572 15:01:30 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:38.572 15:01:30 json_config -- json_config/json_config.sh@59 -- # return 0
00:05:38.572 15:01:30 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]]
00:05:38.572 15:01:30 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]]
00:05:38.572 15:01:30 json_config -- json_config/json_config.sh@290 -- # [[ 0 -eq 1 ]]
00:05:38.572 15:01:30 json_config -- json_config/json_config.sh@294 -- # [[ 1 -eq 1 ]]
00:05:38.572 15:01:30 json_config -- json_config/json_config.sh@295 -- # create_nvmf_subsystem_config
00:05:38.572 15:01:30 json_config -- json_config/json_config.sh@234 -- # timing_enter create_nvmf_subsystem_config
00:05:38.572 15:01:30 json_config -- common/autotest_common.sh@724 -- # xtrace_disable
00:05:38.572 15:01:30 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:38.572 15:01:30 json_config -- json_config/json_config.sh@236 -- # NVMF_FIRST_TARGET_IP=127.0.0.1
00:05:38.572 15:01:30 json_config -- json_config/json_config.sh@237 -- # [[ tcp == \r\d\m\a ]]
00:05:38.572 15:01:30 json_config -- json_config/json_config.sh@241 -- # [[ -z 127.0.0.1 ]]
00:05:38.572 15:01:30 json_config -- json_config/json_config.sh@246 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0
00:05:38.572 15:01:30 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0
00:05:38.833 MallocForNvmf0
00:05:38.833 15:01:30 json_config -- json_config/json_config.sh@247 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1
00:05:38.833 15:01:30 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1
00:05:38.833 MallocForNvmf1
00:05:38.833 15:01:30
json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:38.833 15:01:30 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:39.093 [2024-07-25 15:01:31.112231] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:39.094 15:01:31 json_config -- json_config/json_config.sh@250 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:39.094 15:01:31 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:39.354 15:01:31 json_config -- json_config/json_config.sh@251 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:39.354 15:01:31 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:39.354 15:01:31 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:39.354 15:01:31 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:39.613 15:01:31 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:39.613 15:01:31 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:39.613 [2024-07-25 15:01:31.766328] 
tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:39.613 15:01:31 json_config -- json_config/json_config.sh@255 -- # timing_exit create_nvmf_subsystem_config 00:05:39.613 15:01:31 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:39.613 15:01:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:39.874 15:01:31 json_config -- json_config/json_config.sh@297 -- # timing_exit json_config_setup_target 00:05:39.874 15:01:31 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:39.874 15:01:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:39.874 15:01:31 json_config -- json_config/json_config.sh@299 -- # [[ 0 -eq 1 ]] 00:05:39.874 15:01:31 json_config -- json_config/json_config.sh@304 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:39.874 15:01:31 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:39.874 MallocBdevForConfigChangeCheck 00:05:39.874 15:01:32 json_config -- json_config/json_config.sh@306 -- # timing_exit json_config_test_init 00:05:39.874 15:01:32 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:39.874 15:01:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:40.135 15:01:32 json_config -- json_config/json_config.sh@363 -- # tgt_rpc save_config 00:05:40.135 15:01:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:40.395 15:01:32 json_config -- json_config/json_config.sh@365 -- # echo 'INFO: shutting down applications...' 00:05:40.395 INFO: shutting down applications... 
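The `tgt_check_notification_types` step logged earlier in this run compares the expected notification types against what `notify_get_types` reports by merging both lists, splitting one entry per line, sorting, and keeping only unmatched entries with `uniq -u`. A minimal standalone sketch of that comparison, with both type lists hard-coded rather than fetched over RPC:

```shell
#!/bin/sh
# Standalone sketch of the notification-type check from json_config.sh:
# entries present in both lists appear twice and are dropped by `uniq -u`,
# so an empty result means the two sets agree exactly.
enabled_types='bdev_register bdev_unregister'   # types the test expects
get_types='bdev_register bdev_unregister'       # types reported by notify_get_types
type_diff=$(echo "$enabled_types" "$get_types" | tr ' ' '\n' | sort | uniq -u)
if [ -z "$type_diff" ]; then
    echo 'notification types match'
else
    echo "unexpected difference: $type_diff"
fi
```

With identical lists, as in this run, `type_diff` is empty and the check returns 0.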
00:05:40.395 15:01:32 json_config -- json_config/json_config.sh@366 -- # [[ 0 -eq 1 ]] 00:05:40.395 15:01:32 json_config -- json_config/json_config.sh@372 -- # json_config_clear target 00:05:40.395 15:01:32 json_config -- json_config/json_config.sh@336 -- # [[ -n 22 ]] 00:05:40.395 15:01:32 json_config -- json_config/json_config.sh@337 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:40.656 Calling clear_iscsi_subsystem 00:05:40.656 Calling clear_nvmf_subsystem 00:05:40.656 Calling clear_nbd_subsystem 00:05:40.656 Calling clear_ublk_subsystem 00:05:40.656 Calling clear_vhost_blk_subsystem 00:05:40.656 Calling clear_vhost_scsi_subsystem 00:05:40.656 Calling clear_bdev_subsystem 00:05:40.656 15:01:32 json_config -- json_config/json_config.sh@341 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:40.656 15:01:32 json_config -- json_config/json_config.sh@347 -- # count=100 00:05:40.656 15:01:32 json_config -- json_config/json_config.sh@348 -- # '[' 100 -gt 0 ']' 00:05:40.656 15:01:32 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:40.656 15:01:32 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:40.656 15:01:32 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:40.916 15:01:33 json_config -- json_config/json_config.sh@349 -- # break 00:05:40.916 15:01:33 json_config -- json_config/json_config.sh@354 -- # '[' 100 -eq 0 ']' 00:05:40.916 15:01:33 json_config -- json_config/json_config.sh@373 -- # json_config_test_shutdown_app target 00:05:40.916 15:01:33 json_config -- 
json_config/common.sh@31 -- # local app=target 00:05:40.916 15:01:33 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:40.916 15:01:33 json_config -- json_config/common.sh@35 -- # [[ -n 37243 ]] 00:05:40.916 15:01:33 json_config -- json_config/common.sh@38 -- # kill -SIGINT 37243 00:05:40.916 15:01:33 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:40.916 15:01:33 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:40.916 15:01:33 json_config -- json_config/common.sh@41 -- # kill -0 37243 00:05:40.916 15:01:33 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:41.488 15:01:33 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:41.488 15:01:33 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:41.488 15:01:33 json_config -- json_config/common.sh@41 -- # kill -0 37243 00:05:41.488 15:01:33 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:41.488 15:01:33 json_config -- json_config/common.sh@43 -- # break 00:05:41.488 15:01:33 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:41.488 15:01:33 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:41.488 SPDK target shutdown done 00:05:41.488 15:01:33 json_config -- json_config/json_config.sh@375 -- # echo 'INFO: relaunching applications...' 00:05:41.488 INFO: relaunching applications... 
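The shutdown sequence above (signal the target, then probe it with `kill -0` up to 30 times with a half-second sleep) can be sketched in isolation. A background `sleep` stands in for the `spdk_tgt` process; SIGTERM replaces the log's SIGINT because background jobs in a non-interactive shell ignore SIGINT, and an explicit `wait` reaps the child, which the real script does not need to do:

```shell
#!/bin/sh
# Sketch of json_config_test_shutdown_app: signal the target, then poll
# with `kill -0` until it is gone or the retry budget (30 * 0.5s) runs out.
sleep 60 &                         # stand-in for the spdk_tgt process
pid=$!
kill -TERM "$pid"                  # the real script sends SIGINT
wait "$pid" 2>/dev/null || true    # reap so kill -0 stops succeeding
i=0
while [ "$i" -lt 30 ]; do
    if ! kill -0 "$pid" 2>/dev/null; then
        echo 'SPDK target shutdown done'
        break
    fi
    i=$((i + 1))
    sleep 0.5
done
```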
00:05:41.488 15:01:33 json_config -- json_config/json_config.sh@376 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:41.488 15:01:33 json_config -- json_config/common.sh@9 -- # local app=target 00:05:41.488 15:01:33 json_config -- json_config/common.sh@10 -- # shift 00:05:41.488 15:01:33 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:41.488 15:01:33 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:41.488 15:01:33 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:41.488 15:01:33 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:41.488 15:01:33 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:41.488 15:01:33 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=38207 00:05:41.488 15:01:33 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:41.488 Waiting for target to run... 00:05:41.488 15:01:33 json_config -- json_config/common.sh@25 -- # waitforlisten 38207 /var/tmp/spdk_tgt.sock 00:05:41.488 15:01:33 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:41.488 15:01:33 json_config -- common/autotest_common.sh@831 -- # '[' -z 38207 ']' 00:05:41.488 15:01:33 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:41.488 15:01:33 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:41.488 15:01:33 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:41.488 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:05:41.488 15:01:33 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:41.488 15:01:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:41.488 [2024-07-25 15:01:33.664448] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:05:41.488 [2024-07-25 15:01:33.664517] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid38207 ] 00:05:41.750 EAL: No free 2048 kB hugepages reported on node 1 00:05:42.009 [2024-07-25 15:01:34.079056] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.009 [2024-07-25 15:01:34.141137] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.580 [2024-07-25 15:01:34.634166] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:42.580 [2024-07-25 15:01:34.666541] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:42.580 15:01:34 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:42.580 15:01:34 json_config -- common/autotest_common.sh@864 -- # return 0 00:05:42.580 15:01:34 json_config -- json_config/common.sh@26 -- # echo '' 00:05:42.580 00:05:42.580 15:01:34 json_config -- json_config/json_config.sh@377 -- # [[ 0 -eq 1 ]] 00:05:42.580 15:01:34 json_config -- json_config/json_config.sh@381 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:42.580 INFO: Checking if target configuration is the same... 
00:05:42.580 15:01:34 json_config -- json_config/json_config.sh@382 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:42.580 15:01:34 json_config -- json_config/json_config.sh@382 -- # tgt_rpc save_config 00:05:42.580 15:01:34 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:42.580 + '[' 2 -ne 2 ']' 00:05:42.580 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:42.580 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:42.580 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:42.580 +++ basename /dev/fd/62 00:05:42.580 ++ mktemp /tmp/62.XXX 00:05:42.580 + tmp_file_1=/tmp/62.Rbn 00:05:42.580 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:42.580 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:42.580 + tmp_file_2=/tmp/spdk_tgt_config.json.Tng 00:05:42.580 + ret=0 00:05:42.580 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:42.840 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:43.101 + diff -u /tmp/62.Rbn /tmp/spdk_tgt_config.json.Tng 00:05:43.101 + echo 'INFO: JSON config files are the same' 00:05:43.101 INFO: JSON config files are the same 00:05:43.101 + rm /tmp/62.Rbn /tmp/spdk_tgt_config.json.Tng 00:05:43.101 + exit 0 00:05:43.101 15:01:35 json_config -- json_config/json_config.sh@383 -- # [[ 0 -eq 1 ]] 00:05:43.101 15:01:35 json_config -- json_config/json_config.sh@388 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:43.101 INFO: changing configuration and checking if this can be detected... 
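The `json_diff.sh` run above dumps both configurations to `mktemp` files, normalizes each with `config_filter.py -method sort`, and compares them with `diff -u`; exit 0 means the target configuration is unchanged. A self-contained sketch of that flow, with hard-coded configs and plain `sort` standing in for the filter script:

```shell
#!/bin/sh
# Sketch of the json_diff.sh comparison: write both configs to temp files,
# normalize each (plain `sort` stands in for config_filter.py -method sort),
# then diff; identical configs exit 0, any difference exits 1.
config_a='{"method": "bdev_malloc_create", "params": {"name": "MallocForNvmf0"}}'
config_b='{"method": "bdev_malloc_create", "params": {"name": "MallocForNvmf0"}}'
tmp_file_1=$(mktemp /tmp/62.XXXXXX)
tmp_file_2=$(mktemp /tmp/spdk_tgt_config.json.XXXXXX)
printf '%s\n' "$config_a" | sort > "$tmp_file_1"
printf '%s\n' "$config_b" | sort > "$tmp_file_2"
if diff -u "$tmp_file_1" "$tmp_file_2" > /dev/null; then
    echo 'INFO: JSON config files are the same'
else
    echo 'INFO: configuration change detected.'
fi
rm -f "$tmp_file_1" "$tmp_file_2"
```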
00:05:43.101 15:01:35 json_config -- json_config/json_config.sh@390 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:43.101 15:01:35 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:43.101 15:01:35 json_config -- json_config/json_config.sh@391 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:43.101 15:01:35 json_config -- json_config/json_config.sh@391 -- # tgt_rpc save_config 00:05:43.101 15:01:35 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:43.101 + '[' 2 -ne 2 ']' 00:05:43.101 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:43.101 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:05:43.101 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:43.101 +++ basename /dev/fd/62 00:05:43.101 ++ mktemp /tmp/62.XXX 00:05:43.101 + tmp_file_1=/tmp/62.HH0 00:05:43.101 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:43.101 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:43.101 + tmp_file_2=/tmp/spdk_tgt_config.json.p8s 00:05:43.101 + ret=0 00:05:43.101 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:43.384 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:43.384 + diff -u /tmp/62.HH0 /tmp/spdk_tgt_config.json.p8s 00:05:43.644 + ret=1 00:05:43.644 + echo '=== Start of file: /tmp/62.HH0 ===' 00:05:43.644 + cat /tmp/62.HH0 00:05:43.644 + echo '=== End of file: /tmp/62.HH0 ===' 00:05:43.644 + echo '' 00:05:43.644 + echo '=== Start of file: /tmp/spdk_tgt_config.json.p8s ===' 00:05:43.644 + cat /tmp/spdk_tgt_config.json.p8s 00:05:43.644 + echo '=== End of file: /tmp/spdk_tgt_config.json.p8s ===' 00:05:43.644 + echo '' 00:05:43.644 + rm /tmp/62.HH0 /tmp/spdk_tgt_config.json.p8s 00:05:43.644 + exit 1 00:05:43.644 15:01:35 json_config -- json_config/json_config.sh@395 -- # echo 'INFO: configuration change detected.' 00:05:43.644 INFO: configuration change detected. 
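The change-detection branch above takes the opposite path: after `bdev_malloc_delete MallocBdevForConfigChangeCheck` the freshly saved config no longer matches, `diff` exits nonzero, and `ret=1` triggers the file dump and the "configuration change detected" message. A short sketch using bdev name lists as stand-ins for the two JSON configs:

```shell
#!/bin/sh
# Sketch of the change-detection branch: one config is missing the check
# bdev, so diff exits nonzero and ret is set to 1.
tmp1=$(mktemp /tmp/62.XXXXXX)
tmp2=$(mktemp /tmp/spdk_tgt_config.json.XXXXXX)
printf '%s\n' MallocForNvmf0 MallocForNvmf1 MallocBdevForConfigChangeCheck > "$tmp1"
printf '%s\n' MallocForNvmf0 MallocForNvmf1 > "$tmp2"   # check bdev deleted
ret=0
diff -u "$tmp1" "$tmp2" > /dev/null || ret=1
[ "$ret" -eq 1 ] && echo 'INFO: configuration change detected.'
rm -f "$tmp1" "$tmp2"
```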
00:05:43.644 15:01:35 json_config -- json_config/json_config.sh@398 -- # json_config_test_fini 00:05:43.644 15:01:35 json_config -- json_config/json_config.sh@310 -- # timing_enter json_config_test_fini 00:05:43.644 15:01:35 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:43.644 15:01:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:43.644 15:01:35 json_config -- json_config/json_config.sh@311 -- # local ret=0 00:05:43.644 15:01:35 json_config -- json_config/json_config.sh@313 -- # [[ -n '' ]] 00:05:43.644 15:01:35 json_config -- json_config/json_config.sh@321 -- # [[ -n 38207 ]] 00:05:43.644 15:01:35 json_config -- json_config/json_config.sh@324 -- # cleanup_bdev_subsystem_config 00:05:43.644 15:01:35 json_config -- json_config/json_config.sh@188 -- # timing_enter cleanup_bdev_subsystem_config 00:05:43.644 15:01:35 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:43.644 15:01:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:43.644 15:01:35 json_config -- json_config/json_config.sh@190 -- # [[ 0 -eq 1 ]] 00:05:43.644 15:01:35 json_config -- json_config/json_config.sh@197 -- # uname -s 00:05:43.644 15:01:35 json_config -- json_config/json_config.sh@197 -- # [[ Linux = Linux ]] 00:05:43.644 15:01:35 json_config -- json_config/json_config.sh@198 -- # rm -f /sample_aio 00:05:43.644 15:01:35 json_config -- json_config/json_config.sh@201 -- # [[ 0 -eq 1 ]] 00:05:43.644 15:01:35 json_config -- json_config/json_config.sh@205 -- # timing_exit cleanup_bdev_subsystem_config 00:05:43.644 15:01:35 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:43.644 15:01:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:43.644 15:01:35 json_config -- json_config/json_config.sh@327 -- # killprocess 38207 00:05:43.644 15:01:35 json_config -- common/autotest_common.sh@950 -- # '[' -z 38207 ']' 00:05:43.644 15:01:35 json_config -- common/autotest_common.sh@954 -- # kill -0 38207 
00:05:43.644 15:01:35 json_config -- common/autotest_common.sh@955 -- # uname 00:05:43.644 15:01:35 json_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:43.644 15:01:35 json_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 38207 00:05:43.644 15:01:35 json_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:43.644 15:01:35 json_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:43.644 15:01:35 json_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 38207' 00:05:43.644 killing process with pid 38207 00:05:43.644 15:01:35 json_config -- common/autotest_common.sh@969 -- # kill 38207 00:05:43.644 15:01:35 json_config -- common/autotest_common.sh@974 -- # wait 38207 00:05:43.904 15:01:35 json_config -- json_config/json_config.sh@330 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:43.904 15:01:35 json_config -- json_config/json_config.sh@331 -- # timing_exit json_config_test_fini 00:05:43.904 15:01:35 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:43.904 15:01:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:43.904 15:01:36 json_config -- json_config/json_config.sh@332 -- # return 0 00:05:43.904 15:01:36 json_config -- json_config/json_config.sh@400 -- # echo 'INFO: Success' 00:05:43.904 INFO: Success 00:05:43.904 00:05:43.904 real 0m7.163s 00:05:43.904 user 0m8.492s 00:05:43.904 sys 0m1.927s 00:05:43.904 15:01:36 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:43.904 15:01:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:43.904 ************************************ 00:05:43.904 END TEST json_config 00:05:43.904 ************************************ 00:05:43.904 15:01:36 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:43.904 15:01:36 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:43.904 15:01:36 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:43.904 15:01:36 -- common/autotest_common.sh@10 -- # set +x 00:05:44.166 ************************************ 00:05:44.166 START TEST json_config_extra_key 00:05:44.166 ************************************ 00:05:44.166 15:01:36 json_config_extra_key -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:44.166 15:01:36 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:44.166 15:01:36 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:44.166 15:01:36 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:44.166 15:01:36 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:44.166 15:01:36 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:44.166 15:01:36 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:44.166 15:01:36 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:44.166 15:01:36 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:44.166 15:01:36 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:44.166 15:01:36 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:44.166 15:01:36 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:44.166 15:01:36 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:44.166 15:01:36 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:44.166 15:01:36 
json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:44.166 15:01:36 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:44.166 15:01:36 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:44.166 15:01:36 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:44.166 15:01:36 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:44.166 15:01:36 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:44.166 15:01:36 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:44.166 15:01:36 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:44.166 15:01:36 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:44.166 15:01:36 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:44.166 15:01:36 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:44.166 15:01:36 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:44.166 15:01:36 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:44.166 15:01:36 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:44.166 15:01:36 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:05:44.166 15:01:36 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:44.166 15:01:36 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:44.166 15:01:36 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:44.166 15:01:36 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:44.166 15:01:36 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:44.166 15:01:36 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:44.166 15:01:36 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:44.166 15:01:36 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:44.166 15:01:36 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:44.166 15:01:36 
json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:44.166 15:01:36 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:44.166 15:01:36 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:44.166 15:01:36 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:44.166 15:01:36 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:44.166 15:01:36 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:44.166 15:01:36 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:44.166 15:01:36 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:44.166 15:01:36 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:44.166 15:01:36 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:44.166 INFO: launching applications... 
00:05:44.166 15:01:36 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:44.166 15:01:36 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:44.166 15:01:36 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:44.166 15:01:36 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:44.166 15:01:36 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:44.166 15:01:36 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:44.166 15:01:36 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:44.166 15:01:36 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:44.166 15:01:36 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=38986 00:05:44.166 15:01:36 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:44.166 Waiting for target to run... 
00:05:44.166 15:01:36 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 38986 /var/tmp/spdk_tgt.sock 00:05:44.166 15:01:36 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 38986 ']' 00:05:44.167 15:01:36 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:44.167 15:01:36 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:44.167 15:01:36 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:44.167 15:01:36 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:44.167 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:44.167 15:01:36 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:44.167 15:01:36 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:44.167 [2024-07-25 15:01:36.266936] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:05:44.167 [2024-07-25 15:01:36.267006] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid38986 ] 00:05:44.167 EAL: No free 2048 kB hugepages reported on node 1 00:05:44.428 [2024-07-25 15:01:36.610608] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.688 [2024-07-25 15:01:36.662697] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.949 15:01:37 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:44.949 15:01:37 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:05:44.949 15:01:37 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:44.949 00:05:44.949 15:01:37 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:44.949 INFO: shutting down applications... 
00:05:44.949 15:01:37 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:44.949 15:01:37 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:44.949 15:01:37 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:44.949 15:01:37 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 38986 ]] 00:05:44.949 15:01:37 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 38986 00:05:44.949 15:01:37 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:44.949 15:01:37 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:44.949 15:01:37 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 38986 00:05:44.949 15:01:37 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:45.521 15:01:37 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:45.521 15:01:37 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:45.521 15:01:37 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 38986 00:05:45.521 15:01:37 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:45.521 15:01:37 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:45.521 15:01:37 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:45.521 15:01:37 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:45.521 SPDK target shutdown done 00:05:45.521 15:01:37 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:45.521 Success 00:05:45.521 00:05:45.521 real 0m1.452s 00:05:45.521 user 0m1.028s 00:05:45.521 sys 0m0.457s 00:05:45.521 15:01:37 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:45.521 15:01:37 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:45.521 ************************************ 
00:05:45.521 END TEST json_config_extra_key 00:05:45.521 ************************************ 00:05:45.521 15:01:37 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:45.521 15:01:37 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:45.521 15:01:37 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:45.521 15:01:37 -- common/autotest_common.sh@10 -- # set +x 00:05:45.521 ************************************ 00:05:45.521 START TEST alias_rpc 00:05:45.521 ************************************ 00:05:45.521 15:01:37 alias_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:45.782 * Looking for test storage... 00:05:45.782 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:45.782 15:01:37 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:45.782 15:01:37 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=39345 00:05:45.782 15:01:37 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 39345 00:05:45.782 15:01:37 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:45.782 15:01:37 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 39345 ']' 00:05:45.782 15:01:37 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:45.782 15:01:37 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:45.782 15:01:37 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:45.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
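The shutdown sequence traced above (json_config/common.sh@38-45) is: send SIGINT to the target, then probe with `kill -0` every half second, up to 30 tries, until the PID disappears. A minimal sketch of that pattern (the function name and the signal parameter are illustrative additions):

```shell
#!/usr/bin/env bash
# Send a signal, then poll for process exit as json_config/common.sh does.
# `kill -0` probes whether the PID still exists without delivering a signal.
shutdown_app() {
    local pid=$1
    local sig=${2:-SIGINT}   # the traced run uses SIGINT
    kill -s "$sig" "$pid" 2>/dev/null
    local i
    for (( i = 0; i < 30; i++ )); do
        kill -0 "$pid" 2>/dev/null || return 0   # gone: clean shutdown
        sleep 0.5
    done
    return 1   # still alive after ~15 s; the caller may escalate
}
```

Bounding the poll at 30 iterations is what lets the harness fail fast instead of hanging when a target wedges on shutdown.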
00:05:45.782 15:01:37 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:45.782 15:01:37 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.782 [2024-07-25 15:01:37.800129] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:05:45.782 [2024-07-25 15:01:37.800215] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid39345 ] 00:05:45.782 EAL: No free 2048 kB hugepages reported on node 1 00:05:45.782 [2024-07-25 15:01:37.866939] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.782 [2024-07-25 15:01:37.940255] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.724 15:01:38 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:46.724 15:01:38 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:46.724 15:01:38 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:46.725 15:01:38 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 39345 00:05:46.725 15:01:38 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 39345 ']' 00:05:46.725 15:01:38 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 39345 00:05:46.725 15:01:38 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:05:46.725 15:01:38 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:46.725 15:01:38 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 39345 00:05:46.725 15:01:38 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:46.725 15:01:38 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:46.725 15:01:38 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 39345' 00:05:46.725 killing 
process with pid 39345 00:05:46.725 15:01:38 alias_rpc -- common/autotest_common.sh@969 -- # kill 39345 00:05:46.725 15:01:38 alias_rpc -- common/autotest_common.sh@974 -- # wait 39345 00:05:46.985 00:05:46.986 real 0m1.399s 00:05:46.986 user 0m1.544s 00:05:46.986 sys 0m0.385s 00:05:46.986 15:01:39 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:46.986 15:01:39 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.986 ************************************ 00:05:46.986 END TEST alias_rpc 00:05:46.986 ************************************ 00:05:46.986 15:01:39 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:05:46.986 15:01:39 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:46.986 15:01:39 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:46.986 15:01:39 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:46.986 15:01:39 -- common/autotest_common.sh@10 -- # set +x 00:05:46.986 ************************************ 00:05:46.986 START TEST spdkcli_tcp 00:05:46.986 ************************************ 00:05:46.986 15:01:39 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:47.247 * Looking for test storage... 
00:05:47.247 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:47.247 15:01:39 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:47.247 15:01:39 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:47.247 15:01:39 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:47.247 15:01:39 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:47.247 15:01:39 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:47.247 15:01:39 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:47.247 15:01:39 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:47.247 15:01:39 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:47.247 15:01:39 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:47.247 15:01:39 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=39617 00:05:47.247 15:01:39 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 39617 00:05:47.247 15:01:39 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:47.247 15:01:39 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 39617 ']' 00:05:47.247 15:01:39 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:47.247 15:01:39 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:47.247 15:01:39 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:47.247 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
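tcp.sh@21 above installs `trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT` so the target and the socat bridge are torn down however the test exits. The idiom, shown with a hypothetical scratch-file resource in place of the real `err_cleanup`:

```shell
#!/usr/bin/env bash
# Cleanup-on-exit idiom as in tcp.sh@21: the handler runs whether the
# script finishes normally, fails, or is interrupted.
scratch=$(mktemp)                 # hypothetical resource to tear down
cleanup() {
    rm -f "$scratch"
}
trap 'cleanup' SIGINT SIGTERM EXIT
echo "$scratch"                   # work that uses the resource
```

Registering the same handler for SIGINT, SIGTERM, and EXIT is what makes cleanup reliable under Jenkins job aborts as well as ordinary completion.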
00:05:47.247 15:01:39 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:47.247 15:01:39 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:47.247 [2024-07-25 15:01:39.279166] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:05:47.247 [2024-07-25 15:01:39.279243] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid39617 ] 00:05:47.247 EAL: No free 2048 kB hugepages reported on node 1 00:05:47.247 [2024-07-25 15:01:39.346029] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:47.247 [2024-07-25 15:01:39.423154] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:47.247 [2024-07-25 15:01:39.423156] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.190 15:01:40 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:48.190 15:01:40 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:05:48.190 15:01:40 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=39774 00:05:48.190 15:01:40 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:48.190 15:01:40 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:48.190 [ 00:05:48.190 "bdev_malloc_delete", 00:05:48.190 "bdev_malloc_create", 00:05:48.190 "bdev_null_resize", 00:05:48.190 "bdev_null_delete", 00:05:48.190 "bdev_null_create", 00:05:48.190 "bdev_nvme_cuse_unregister", 00:05:48.190 "bdev_nvme_cuse_register", 00:05:48.190 "bdev_opal_new_user", 00:05:48.190 "bdev_opal_set_lock_state", 00:05:48.190 "bdev_opal_delete", 00:05:48.190 "bdev_opal_get_info", 00:05:48.190 "bdev_opal_create", 00:05:48.190 "bdev_nvme_opal_revert", 00:05:48.190 
"bdev_nvme_opal_init", 00:05:48.190 "bdev_nvme_send_cmd", 00:05:48.190 "bdev_nvme_get_path_iostat", 00:05:48.190 "bdev_nvme_get_mdns_discovery_info", 00:05:48.190 "bdev_nvme_stop_mdns_discovery", 00:05:48.190 "bdev_nvme_start_mdns_discovery", 00:05:48.190 "bdev_nvme_set_multipath_policy", 00:05:48.190 "bdev_nvme_set_preferred_path", 00:05:48.190 "bdev_nvme_get_io_paths", 00:05:48.190 "bdev_nvme_remove_error_injection", 00:05:48.190 "bdev_nvme_add_error_injection", 00:05:48.190 "bdev_nvme_get_discovery_info", 00:05:48.190 "bdev_nvme_stop_discovery", 00:05:48.190 "bdev_nvme_start_discovery", 00:05:48.190 "bdev_nvme_get_controller_health_info", 00:05:48.190 "bdev_nvme_disable_controller", 00:05:48.190 "bdev_nvme_enable_controller", 00:05:48.190 "bdev_nvme_reset_controller", 00:05:48.190 "bdev_nvme_get_transport_statistics", 00:05:48.190 "bdev_nvme_apply_firmware", 00:05:48.190 "bdev_nvme_detach_controller", 00:05:48.190 "bdev_nvme_get_controllers", 00:05:48.190 "bdev_nvme_attach_controller", 00:05:48.190 "bdev_nvme_set_hotplug", 00:05:48.190 "bdev_nvme_set_options", 00:05:48.190 "bdev_passthru_delete", 00:05:48.190 "bdev_passthru_create", 00:05:48.190 "bdev_lvol_set_parent_bdev", 00:05:48.190 "bdev_lvol_set_parent", 00:05:48.190 "bdev_lvol_check_shallow_copy", 00:05:48.190 "bdev_lvol_start_shallow_copy", 00:05:48.190 "bdev_lvol_grow_lvstore", 00:05:48.190 "bdev_lvol_get_lvols", 00:05:48.190 "bdev_lvol_get_lvstores", 00:05:48.190 "bdev_lvol_delete", 00:05:48.190 "bdev_lvol_set_read_only", 00:05:48.190 "bdev_lvol_resize", 00:05:48.190 "bdev_lvol_decouple_parent", 00:05:48.190 "bdev_lvol_inflate", 00:05:48.190 "bdev_lvol_rename", 00:05:48.190 "bdev_lvol_clone_bdev", 00:05:48.190 "bdev_lvol_clone", 00:05:48.190 "bdev_lvol_snapshot", 00:05:48.190 "bdev_lvol_create", 00:05:48.190 "bdev_lvol_delete_lvstore", 00:05:48.190 "bdev_lvol_rename_lvstore", 00:05:48.190 "bdev_lvol_create_lvstore", 00:05:48.190 "bdev_raid_set_options", 00:05:48.190 "bdev_raid_remove_base_bdev", 
00:05:48.190 "bdev_raid_add_base_bdev", 00:05:48.190 "bdev_raid_delete", 00:05:48.190 "bdev_raid_create", 00:05:48.190 "bdev_raid_get_bdevs", 00:05:48.190 "bdev_error_inject_error", 00:05:48.190 "bdev_error_delete", 00:05:48.190 "bdev_error_create", 00:05:48.190 "bdev_split_delete", 00:05:48.190 "bdev_split_create", 00:05:48.190 "bdev_delay_delete", 00:05:48.190 "bdev_delay_create", 00:05:48.190 "bdev_delay_update_latency", 00:05:48.190 "bdev_zone_block_delete", 00:05:48.190 "bdev_zone_block_create", 00:05:48.190 "blobfs_create", 00:05:48.190 "blobfs_detect", 00:05:48.190 "blobfs_set_cache_size", 00:05:48.190 "bdev_aio_delete", 00:05:48.190 "bdev_aio_rescan", 00:05:48.191 "bdev_aio_create", 00:05:48.191 "bdev_ftl_set_property", 00:05:48.191 "bdev_ftl_get_properties", 00:05:48.191 "bdev_ftl_get_stats", 00:05:48.191 "bdev_ftl_unmap", 00:05:48.191 "bdev_ftl_unload", 00:05:48.191 "bdev_ftl_delete", 00:05:48.191 "bdev_ftl_load", 00:05:48.191 "bdev_ftl_create", 00:05:48.191 "bdev_virtio_attach_controller", 00:05:48.191 "bdev_virtio_scsi_get_devices", 00:05:48.191 "bdev_virtio_detach_controller", 00:05:48.191 "bdev_virtio_blk_set_hotplug", 00:05:48.191 "bdev_iscsi_delete", 00:05:48.191 "bdev_iscsi_create", 00:05:48.191 "bdev_iscsi_set_options", 00:05:48.191 "accel_error_inject_error", 00:05:48.191 "ioat_scan_accel_module", 00:05:48.191 "dsa_scan_accel_module", 00:05:48.191 "iaa_scan_accel_module", 00:05:48.191 "vfu_virtio_create_scsi_endpoint", 00:05:48.191 "vfu_virtio_scsi_remove_target", 00:05:48.191 "vfu_virtio_scsi_add_target", 00:05:48.191 "vfu_virtio_create_blk_endpoint", 00:05:48.191 "vfu_virtio_delete_endpoint", 00:05:48.191 "keyring_file_remove_key", 00:05:48.191 "keyring_file_add_key", 00:05:48.191 "keyring_linux_set_options", 00:05:48.191 "iscsi_get_histogram", 00:05:48.191 "iscsi_enable_histogram", 00:05:48.191 "iscsi_set_options", 00:05:48.191 "iscsi_get_auth_groups", 00:05:48.191 "iscsi_auth_group_remove_secret", 00:05:48.191 "iscsi_auth_group_add_secret", 
00:05:48.191 "iscsi_delete_auth_group", 00:05:48.191 "iscsi_create_auth_group", 00:05:48.191 "iscsi_set_discovery_auth", 00:05:48.191 "iscsi_get_options", 00:05:48.191 "iscsi_target_node_request_logout", 00:05:48.191 "iscsi_target_node_set_redirect", 00:05:48.191 "iscsi_target_node_set_auth", 00:05:48.191 "iscsi_target_node_add_lun", 00:05:48.191 "iscsi_get_stats", 00:05:48.191 "iscsi_get_connections", 00:05:48.191 "iscsi_portal_group_set_auth", 00:05:48.191 "iscsi_start_portal_group", 00:05:48.191 "iscsi_delete_portal_group", 00:05:48.191 "iscsi_create_portal_group", 00:05:48.191 "iscsi_get_portal_groups", 00:05:48.191 "iscsi_delete_target_node", 00:05:48.191 "iscsi_target_node_remove_pg_ig_maps", 00:05:48.191 "iscsi_target_node_add_pg_ig_maps", 00:05:48.191 "iscsi_create_target_node", 00:05:48.191 "iscsi_get_target_nodes", 00:05:48.191 "iscsi_delete_initiator_group", 00:05:48.191 "iscsi_initiator_group_remove_initiators", 00:05:48.191 "iscsi_initiator_group_add_initiators", 00:05:48.191 "iscsi_create_initiator_group", 00:05:48.191 "iscsi_get_initiator_groups", 00:05:48.191 "nvmf_set_crdt", 00:05:48.191 "nvmf_set_config", 00:05:48.191 "nvmf_set_max_subsystems", 00:05:48.191 "nvmf_stop_mdns_prr", 00:05:48.191 "nvmf_publish_mdns_prr", 00:05:48.191 "nvmf_subsystem_get_listeners", 00:05:48.191 "nvmf_subsystem_get_qpairs", 00:05:48.191 "nvmf_subsystem_get_controllers", 00:05:48.191 "nvmf_get_stats", 00:05:48.191 "nvmf_get_transports", 00:05:48.191 "nvmf_create_transport", 00:05:48.191 "nvmf_get_targets", 00:05:48.191 "nvmf_delete_target", 00:05:48.191 "nvmf_create_target", 00:05:48.191 "nvmf_subsystem_allow_any_host", 00:05:48.191 "nvmf_subsystem_remove_host", 00:05:48.191 "nvmf_subsystem_add_host", 00:05:48.191 "nvmf_ns_remove_host", 00:05:48.191 "nvmf_ns_add_host", 00:05:48.191 "nvmf_subsystem_remove_ns", 00:05:48.191 "nvmf_subsystem_add_ns", 00:05:48.191 "nvmf_subsystem_listener_set_ana_state", 00:05:48.191 "nvmf_discovery_get_referrals", 00:05:48.191 
"nvmf_discovery_remove_referral", 00:05:48.191 "nvmf_discovery_add_referral", 00:05:48.191 "nvmf_subsystem_remove_listener", 00:05:48.191 "nvmf_subsystem_add_listener", 00:05:48.191 "nvmf_delete_subsystem", 00:05:48.191 "nvmf_create_subsystem", 00:05:48.191 "nvmf_get_subsystems", 00:05:48.191 "env_dpdk_get_mem_stats", 00:05:48.191 "nbd_get_disks", 00:05:48.191 "nbd_stop_disk", 00:05:48.191 "nbd_start_disk", 00:05:48.191 "ublk_recover_disk", 00:05:48.191 "ublk_get_disks", 00:05:48.191 "ublk_stop_disk", 00:05:48.191 "ublk_start_disk", 00:05:48.191 "ublk_destroy_target", 00:05:48.191 "ublk_create_target", 00:05:48.191 "virtio_blk_create_transport", 00:05:48.191 "virtio_blk_get_transports", 00:05:48.191 "vhost_controller_set_coalescing", 00:05:48.191 "vhost_get_controllers", 00:05:48.191 "vhost_delete_controller", 00:05:48.191 "vhost_create_blk_controller", 00:05:48.191 "vhost_scsi_controller_remove_target", 00:05:48.191 "vhost_scsi_controller_add_target", 00:05:48.191 "vhost_start_scsi_controller", 00:05:48.191 "vhost_create_scsi_controller", 00:05:48.191 "thread_set_cpumask", 00:05:48.191 "framework_get_governor", 00:05:48.191 "framework_get_scheduler", 00:05:48.191 "framework_set_scheduler", 00:05:48.191 "framework_get_reactors", 00:05:48.191 "thread_get_io_channels", 00:05:48.191 "thread_get_pollers", 00:05:48.191 "thread_get_stats", 00:05:48.191 "framework_monitor_context_switch", 00:05:48.191 "spdk_kill_instance", 00:05:48.191 "log_enable_timestamps", 00:05:48.191 "log_get_flags", 00:05:48.191 "log_clear_flag", 00:05:48.191 "log_set_flag", 00:05:48.191 "log_get_level", 00:05:48.191 "log_set_level", 00:05:48.191 "log_get_print_level", 00:05:48.191 "log_set_print_level", 00:05:48.191 "framework_enable_cpumask_locks", 00:05:48.191 "framework_disable_cpumask_locks", 00:05:48.191 "framework_wait_init", 00:05:48.191 "framework_start_init", 00:05:48.191 "scsi_get_devices", 00:05:48.191 "bdev_get_histogram", 00:05:48.191 "bdev_enable_histogram", 00:05:48.191 
"bdev_set_qos_limit", 00:05:48.192 "bdev_set_qd_sampling_period", 00:05:48.192 "bdev_get_bdevs", 00:05:48.192 "bdev_reset_iostat", 00:05:48.192 "bdev_get_iostat", 00:05:48.192 "bdev_examine", 00:05:48.192 "bdev_wait_for_examine", 00:05:48.192 "bdev_set_options", 00:05:48.192 "notify_get_notifications", 00:05:48.192 "notify_get_types", 00:05:48.192 "accel_get_stats", 00:05:48.192 "accel_set_options", 00:05:48.192 "accel_set_driver", 00:05:48.192 "accel_crypto_key_destroy", 00:05:48.192 "accel_crypto_keys_get", 00:05:48.192 "accel_crypto_key_create", 00:05:48.192 "accel_assign_opc", 00:05:48.192 "accel_get_module_info", 00:05:48.192 "accel_get_opc_assignments", 00:05:48.192 "vmd_rescan", 00:05:48.192 "vmd_remove_device", 00:05:48.192 "vmd_enable", 00:05:48.192 "sock_get_default_impl", 00:05:48.192 "sock_set_default_impl", 00:05:48.192 "sock_impl_set_options", 00:05:48.192 "sock_impl_get_options", 00:05:48.192 "iobuf_get_stats", 00:05:48.192 "iobuf_set_options", 00:05:48.192 "keyring_get_keys", 00:05:48.192 "framework_get_pci_devices", 00:05:48.192 "framework_get_config", 00:05:48.192 "framework_get_subsystems", 00:05:48.192 "vfu_tgt_set_base_path", 00:05:48.192 "trace_get_info", 00:05:48.192 "trace_get_tpoint_group_mask", 00:05:48.192 "trace_disable_tpoint_group", 00:05:48.192 "trace_enable_tpoint_group", 00:05:48.192 "trace_clear_tpoint_mask", 00:05:48.192 "trace_set_tpoint_mask", 00:05:48.192 "spdk_get_version", 00:05:48.192 "rpc_get_methods" 00:05:48.192 ] 00:05:48.192 15:01:40 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:48.192 15:01:40 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:48.192 15:01:40 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:48.192 15:01:40 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:48.192 15:01:40 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 39617 00:05:48.192 15:01:40 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 39617 ']' 
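`killprocess` (autotest_common.sh@950-974, traced at the end of this run) first confirms the PID is alive with `kill -0`, then reads its command name via `ps --no-headers -o comm=` — the trace sees `reactor_0` for an SPDK target — before deciding how to kill it (the real helper escalates through sudo when the name matches). A trimmed, hypothetical sketch of that check:

```shell
#!/usr/bin/env bash
# Trimmed killprocess: verify the PID exists and inspect its command name
# before killing it (the full helper special-cases processes run via sudo).
killprocess() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 1           # not running
    local process_name
    process_name=$(ps --no-headers -o comm= -p "$pid")
    echo "killing process with pid $pid ($process_name)"
    kill "$pid"
}
```

Checking the command name guards against PID reuse: if the PID now belongs to an unrelated process, the helper can refuse to signal it.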
00:05:48.192 15:01:40 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 39617 00:05:48.192 15:01:40 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:05:48.192 15:01:40 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:48.192 15:01:40 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 39617 00:05:48.192 15:01:40 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:48.192 15:01:40 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:48.192 15:01:40 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 39617' 00:05:48.192 killing process with pid 39617 00:05:48.192 15:01:40 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 39617 00:05:48.192 15:01:40 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 39617 00:05:48.453 00:05:48.453 real 0m1.412s 00:05:48.453 user 0m2.586s 00:05:48.453 sys 0m0.416s 00:05:48.453 15:01:40 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:48.453 15:01:40 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:48.453 ************************************ 00:05:48.453 END TEST spdkcli_tcp 00:05:48.453 ************************************ 00:05:48.453 15:01:40 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:48.453 15:01:40 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:48.453 15:01:40 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:48.453 15:01:40 -- common/autotest_common.sh@10 -- # set +x 00:05:48.453 ************************************ 00:05:48.453 START TEST dpdk_mem_utility 00:05:48.453 ************************************ 00:05:48.453 15:01:40 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:48.714 * Looking 
for test storage... 00:05:48.714 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:48.714 15:01:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:48.714 15:01:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:48.714 15:01:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=39905 00:05:48.714 15:01:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 39905 00:05:48.714 15:01:40 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 39905 ']' 00:05:48.714 15:01:40 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:48.714 15:01:40 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:48.714 15:01:40 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:48.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:48.714 15:01:40 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:48.714 15:01:40 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:48.714 [2024-07-25 15:01:40.736351] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:05:48.714 [2024-07-25 15:01:40.736408] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid39905 ] 00:05:48.714 EAL: No free 2048 kB hugepages reported on node 1 00:05:48.714 [2024-07-25 15:01:40.795392] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.714 [2024-07-25 15:01:40.861749] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.656 15:01:41 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:49.656 15:01:41 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:05:49.656 15:01:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:49.656 15:01:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:49.656 15:01:41 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:49.656 15:01:41 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:49.656 { 00:05:49.656 "filename": "/tmp/spdk_mem_dump.txt" 00:05:49.656 } 00:05:49.657 15:01:41 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:49.657 15:01:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:49.657 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:49.657 1 heaps totaling size 814.000000 MiB 00:05:49.657 size: 814.000000 MiB heap id: 0 00:05:49.657 end heaps---------- 00:05:49.657 8 mempools totaling size 598.116089 MiB 00:05:49.657 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:49.657 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:49.657 size: 84.521057 MiB name: bdev_io_39905 00:05:49.657 size: 51.011292 MiB name: evtpool_39905 
00:05:49.657 size: 50.003479 MiB name: msgpool_39905 00:05:49.657 size: 21.763794 MiB name: PDU_Pool 00:05:49.657 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:49.657 size: 0.026123 MiB name: Session_Pool 00:05:49.657 end mempools------- 00:05:49.657 6 memzones totaling size 4.142822 MiB 00:05:49.657 size: 1.000366 MiB name: RG_ring_0_39905 00:05:49.657 size: 1.000366 MiB name: RG_ring_1_39905 00:05:49.657 size: 1.000366 MiB name: RG_ring_4_39905 00:05:49.657 size: 1.000366 MiB name: RG_ring_5_39905 00:05:49.657 size: 0.125366 MiB name: RG_ring_2_39905 00:05:49.657 size: 0.015991 MiB name: RG_ring_3_39905 00:05:49.657 end memzones------- 00:05:49.657 15:01:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:49.657 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:05:49.657 list of free elements. size: 12.519348 MiB 00:05:49.657 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:49.657 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:49.657 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:49.657 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:49.657 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:49.657 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:49.657 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:49.657 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:49.657 element at address: 0x200000200000 with size: 0.841614 MiB 00:05:49.657 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:05:49.657 element at address: 0x20000b200000 with size: 0.490723 MiB 00:05:49.657 element at address: 0x200000800000 with size: 0.487793 MiB 00:05:49.657 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:49.657 element at address: 0x200027e00000 with size: 0.410034 MiB 
00:05:49.657 element at address: 0x200003a00000 with size: 0.355530 MiB 00:05:49.657 list of standard malloc elements. size: 199.218079 MiB 00:05:49.657 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:49.657 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:49.657 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:49.657 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:49.657 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:49.657 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:49.657 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:49.657 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:49.657 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:49.657 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:05:49.657 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:05:49.657 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:05:49.657 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:49.657 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:49.657 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:49.657 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:49.657 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:49.657 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:49.657 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:49.657 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:49.657 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:49.657 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:49.657 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:49.657 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:49.657 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:49.657 element at address: 0x200003eff0c0 with 
size: 0.000183 MiB 00:05:49.657 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:49.657 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:49.657 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:49.657 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:49.657 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:49.657 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:49.657 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:05:49.657 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:49.657 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:49.657 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:49.657 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:05:49.657 element at address: 0x200027e69040 with size: 0.000183 MiB 00:05:49.657 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:05:49.657 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:49.657 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:49.657 list of memzone associated elements. 
size: 602.262573 MiB 00:05:49.657 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:49.657 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:49.657 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:49.657 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:49.657 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:49.657 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_39905_0 00:05:49.657 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:49.657 associated memzone info: size: 48.002930 MiB name: MP_evtpool_39905_0 00:05:49.657 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:49.657 associated memzone info: size: 48.002930 MiB name: MP_msgpool_39905_0 00:05:49.657 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:49.657 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:49.657 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:49.657 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:49.657 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:49.657 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_39905 00:05:49.657 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:49.657 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_39905 00:05:49.657 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:49.657 associated memzone info: size: 1.007996 MiB name: MP_evtpool_39905 00:05:49.657 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:49.657 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:49.657 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:49.657 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:49.657 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:49.657 associated 
memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:49.657 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:49.657 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:49.657 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:49.657 associated memzone info: size: 1.000366 MiB name: RG_ring_0_39905 00:05:49.657 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:49.657 associated memzone info: size: 1.000366 MiB name: RG_ring_1_39905 00:05:49.657 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:49.657 associated memzone info: size: 1.000366 MiB name: RG_ring_4_39905 00:05:49.657 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:49.657 associated memzone info: size: 1.000366 MiB name: RG_ring_5_39905 00:05:49.657 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:49.657 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_39905 00:05:49.657 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:49.657 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:49.657 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:49.657 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:49.657 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:49.657 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:49.657 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:49.657 associated memzone info: size: 0.125366 MiB name: RG_ring_2_39905 00:05:49.657 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:49.657 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:49.657 element at address: 0x200027e69100 with size: 0.023743 MiB 00:05:49.657 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:49.657 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:49.657 
associated memzone info: size: 0.015991 MiB name: RG_ring_3_39905 00:05:49.657 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:05:49.657 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:49.657 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:05:49.657 associated memzone info: size: 0.000183 MiB name: MP_msgpool_39905 00:05:49.657 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:49.657 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_39905 00:05:49.657 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:05:49.658 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:49.658 15:01:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:49.658 15:01:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 39905 00:05:49.658 15:01:41 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 39905 ']' 00:05:49.658 15:01:41 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 39905 00:05:49.658 15:01:41 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:05:49.658 15:01:41 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:49.658 15:01:41 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 39905 00:05:49.658 15:01:41 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:49.658 15:01:41 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:49.658 15:01:41 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 39905' 00:05:49.658 killing process with pid 39905 00:05:49.658 15:01:41 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 39905 00:05:49.658 15:01:41 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 39905 00:05:49.919 00:05:49.919 real 0m1.251s 00:05:49.919 user 0m1.309s 00:05:49.919 sys 
0m0.342s 00:05:49.919 15:01:41 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:49.919 15:01:41 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:49.919 ************************************ 00:05:49.919 END TEST dpdk_mem_utility 00:05:49.919 ************************************ 00:05:49.919 15:01:41 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:49.919 15:01:41 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:49.919 15:01:41 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:49.919 15:01:41 -- common/autotest_common.sh@10 -- # set +x 00:05:49.919 ************************************ 00:05:49.919 START TEST event 00:05:49.919 ************************************ 00:05:49.919 15:01:41 event -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:49.919 * Looking for test storage... 00:05:49.919 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:49.919 15:01:42 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:49.919 15:01:42 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:49.919 15:01:42 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:49.919 15:01:42 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:05:49.919 15:01:42 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:49.919 15:01:42 event -- common/autotest_common.sh@10 -- # set +x 00:05:49.919 ************************************ 00:05:49.919 START TEST event_perf 00:05:49.919 ************************************ 00:05:49.919 15:01:42 event.event_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 
00:05:49.919 Running I/O for 1 seconds...[2024-07-25 15:01:42.089747] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:05:49.919 [2024-07-25 15:01:42.089845] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid40237 ] 00:05:50.180 EAL: No free 2048 kB hugepages reported on node 1 00:05:50.180 [2024-07-25 15:01:42.170674] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:50.180 [2024-07-25 15:01:42.248250] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:50.180 [2024-07-25 15:01:42.248317] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:50.180 [2024-07-25 15:01:42.248492] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.180 Running I/O for 1 seconds...[2024-07-25 15:01:42.248492] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:51.121 00:05:51.121 lcore 0: 179248 00:05:51.121 lcore 1: 179242 00:05:51.121 lcore 2: 179242 00:05:51.121 lcore 3: 179245 00:05:51.121 done. 
00:05:51.121 00:05:51.121 real 0m1.236s 00:05:51.121 user 0m4.145s 00:05:51.121 sys 0m0.087s 00:05:51.121 15:01:43 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:51.121 15:01:43 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:51.121 ************************************ 00:05:51.121 END TEST event_perf 00:05:51.121 ************************************ 00:05:51.397 15:01:43 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:51.397 15:01:43 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:51.397 15:01:43 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:51.397 15:01:43 event -- common/autotest_common.sh@10 -- # set +x 00:05:51.397 ************************************ 00:05:51.397 START TEST event_reactor 00:05:51.397 ************************************ 00:05:51.397 15:01:43 event.event_reactor -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:51.397 [2024-07-25 15:01:43.399259] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:05:51.397 [2024-07-25 15:01:43.399359] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid40594 ] 00:05:51.397 EAL: No free 2048 kB hugepages reported on node 1 00:05:51.397 [2024-07-25 15:01:43.461813] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.397 [2024-07-25 15:01:43.525191] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.781 test_start 00:05:52.781 oneshot 00:05:52.781 tick 100 00:05:52.781 tick 100 00:05:52.781 tick 250 00:05:52.781 tick 100 00:05:52.781 tick 100 00:05:52.781 tick 100 00:05:52.781 tick 250 00:05:52.781 tick 500 00:05:52.781 tick 100 00:05:52.781 tick 100 00:05:52.781 tick 250 00:05:52.781 tick 100 00:05:52.781 tick 100 00:05:52.781 test_end 00:05:52.781 00:05:52.781 real 0m1.201s 00:05:52.781 user 0m1.126s 00:05:52.781 sys 0m0.070s 00:05:52.781 15:01:44 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:52.781 15:01:44 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:52.781 ************************************ 00:05:52.781 END TEST event_reactor 00:05:52.781 ************************************ 00:05:52.781 15:01:44 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:52.781 15:01:44 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:52.781 15:01:44 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:52.781 15:01:44 event -- common/autotest_common.sh@10 -- # set +x 00:05:52.781 ************************************ 00:05:52.781 START TEST event_reactor_perf 00:05:52.781 ************************************ 00:05:52.781 15:01:44 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:52.781 [2024-07-25 15:01:44.678154] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:05:52.781 [2024-07-25 15:01:44.678263] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid40942 ] 00:05:52.781 EAL: No free 2048 kB hugepages reported on node 1 00:05:52.781 [2024-07-25 15:01:44.740381] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.781 [2024-07-25 15:01:44.803172] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.723 test_start 00:05:53.723 test_end 00:05:53.723 Performance: 366523 events per second 00:05:53.723 00:05:53.723 real 0m1.201s 00:05:53.723 user 0m1.124s 00:05:53.723 sys 0m0.073s 00:05:53.723 15:01:45 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:53.723 15:01:45 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:53.723 ************************************ 00:05:53.723 END TEST event_reactor_perf 00:05:53.724 ************************************ 00:05:53.724 15:01:45 event -- event/event.sh@49 -- # uname -s 00:05:53.724 15:01:45 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:53.724 15:01:45 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:53.724 15:01:45 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:53.724 15:01:45 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:53.724 15:01:45 event -- common/autotest_common.sh@10 -- # set +x 00:05:53.985 ************************************ 00:05:53.985 START TEST event_scheduler 00:05:53.985 ************************************ 00:05:53.985 
15:01:45 event.event_scheduler -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:53.985 * Looking for test storage... 00:05:53.985 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:53.985 15:01:46 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:53.985 15:01:46 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=41196 00:05:53.985 15:01:46 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:53.985 15:01:46 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:53.985 15:01:46 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 41196 00:05:53.985 15:01:46 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 41196 ']' 00:05:53.985 15:01:46 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:53.985 15:01:46 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:53.985 15:01:46 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:53.985 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:53.985 15:01:46 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:53.985 15:01:46 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:53.985 [2024-07-25 15:01:46.088295] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:05:53.985 [2024-07-25 15:01:46.088362] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid41196 ] 00:05:53.985 EAL: No free 2048 kB hugepages reported on node 1 00:05:53.985 [2024-07-25 15:01:46.142118] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:54.245 [2024-07-25 15:01:46.206699] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.245 [2024-07-25 15:01:46.206822] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:54.245 [2024-07-25 15:01:46.206977] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:54.245 [2024-07-25 15:01:46.206978] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:54.814 15:01:46 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:54.814 15:01:46 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:05:54.814 15:01:46 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:54.814 15:01:46 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.814 15:01:46 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:54.814 [2024-07-25 15:01:46.877081] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:05:54.814 [2024-07-25 15:01:46.877098] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:05:54.814 [2024-07-25 15:01:46.877106] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:54.814 [2024-07-25 15:01:46.877109] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:54.814 [2024-07-25 15:01:46.877113] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting 
scheduler core busy to 95 00:05:54.814 15:01:46 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.814 15:01:46 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:54.814 15:01:46 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.814 15:01:46 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:54.814 [2024-07-25 15:01:46.935472] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:54.814 15:01:46 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.814 15:01:46 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:54.814 15:01:46 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:54.814 15:01:46 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:54.814 15:01:46 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:54.814 ************************************ 00:05:54.814 START TEST scheduler_create_thread 00:05:54.814 ************************************ 00:05:54.814 15:01:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:05:54.814 15:01:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:54.814 15:01:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.814 15:01:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:54.814 2 00:05:54.814 15:01:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.814 15:01:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd 
--plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:54.814 15:01:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.814 15:01:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:54.814 3 00:05:54.814 15:01:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.814 15:01:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:54.814 15:01:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.814 15:01:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:55.075 4 00:05:55.075 15:01:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:55.075 15:01:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:55.075 15:01:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:55.075 15:01:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:55.075 5 00:05:55.075 15:01:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:55.075 15:01:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:55.075 15:01:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:55.075 15:01:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:55.075 6 
00:05:55.075 15:01:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:55.075 15:01:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:55.075 15:01:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:55.075 15:01:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:55.075 7 00:05:55.075 15:01:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:55.075 15:01:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:55.075 15:01:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:55.075 15:01:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:55.075 8 00:05:55.075 15:01:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:55.075 15:01:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:55.075 15:01:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:55.075 15:01:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:55.075 9 00:05:55.075 15:01:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:55.075 15:01:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:55.075 15:01:47 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:55.075 15:01:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:55.336 10 00:05:55.336 15:01:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:55.336 15:01:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:55.336 15:01:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:55.336 15:01:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:56.720 15:01:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:56.720 15:01:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:56.720 15:01:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:56.720 15:01:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:56.720 15:01:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:57.662 15:01:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:57.662 15:01:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:57.662 15:01:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:57.662 15:01:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:58.264 15:01:50 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:58.264 15:01:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:58.264 15:01:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:58.264 15:01:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:58.264 15:01:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:59.239 15:01:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:59.239 00:05:59.239 real 0m4.223s 00:05:59.239 user 0m0.027s 00:05:59.239 sys 0m0.004s 00:05:59.239 15:01:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:59.239 15:01:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:59.239 ************************************ 00:05:59.239 END TEST scheduler_create_thread 00:05:59.239 ************************************ 00:05:59.239 15:01:51 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:59.239 15:01:51 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 41196 00:05:59.240 15:01:51 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 41196 ']' 00:05:59.240 15:01:51 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 41196 00:05:59.240 15:01:51 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:05:59.240 15:01:51 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:59.240 15:01:51 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 41196 00:05:59.240 15:01:51 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:05:59.240 15:01:51 
event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:05:59.240 15:01:51 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 41196' 00:05:59.240 killing process with pid 41196 00:05:59.240 15:01:51 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 41196 00:05:59.240 15:01:51 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 41196 00:05:59.500 [2024-07-25 15:01:51.477056] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:59.500 00:05:59.500 real 0m5.707s 00:05:59.500 user 0m12.757s 00:05:59.500 sys 0m0.355s 00:05:59.500 15:01:51 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:59.500 15:01:51 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:59.500 ************************************ 00:05:59.500 END TEST event_scheduler 00:05:59.500 ************************************ 00:05:59.500 15:01:51 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:59.761 15:01:51 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:59.762 15:01:51 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:59.762 15:01:51 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:59.762 15:01:51 event -- common/autotest_common.sh@10 -- # set +x 00:05:59.762 ************************************ 00:05:59.762 START TEST app_repeat 00:05:59.762 ************************************ 00:05:59.762 15:01:51 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:05:59.762 15:01:51 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:59.762 15:01:51 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:59.762 15:01:51 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:59.762 15:01:51 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:59.762 
15:01:51 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:59.762 15:01:51 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:59.762 15:01:51 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:59.762 15:01:51 event.app_repeat -- event/event.sh@19 -- # repeat_pid=42386 00:05:59.762 15:01:51 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:59.762 15:01:51 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:59.762 15:01:51 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 42386' 00:05:59.762 Process app_repeat pid: 42386 00:05:59.762 15:01:51 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:59.762 15:01:51 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:59.762 spdk_app_start Round 0 00:05:59.762 15:01:51 event.app_repeat -- event/event.sh@25 -- # waitforlisten 42386 /var/tmp/spdk-nbd.sock 00:05:59.762 15:01:51 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 42386 ']' 00:05:59.762 15:01:51 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:59.762 15:01:51 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:59.762 15:01:51 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:59.762 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:59.762 15:01:51 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:59.762 15:01:51 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:59.762 [2024-07-25 15:01:51.749286] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:05:59.762 [2024-07-25 15:01:51.749344] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid42386 ] 00:05:59.762 EAL: No free 2048 kB hugepages reported on node 1 00:05:59.762 [2024-07-25 15:01:51.809946] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:59.762 [2024-07-25 15:01:51.874477] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:59.762 [2024-07-25 15:01:51.874568] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.705 15:01:52 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:00.705 15:01:52 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:00.705 15:01:52 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:00.705 Malloc0 00:06:00.705 15:01:52 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:00.705 Malloc1 00:06:00.705 15:01:52 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:00.705 15:01:52 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.705 15:01:52 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:00.705 15:01:52 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:00.705 15:01:52 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:00.705 15:01:52 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:00.705 15:01:52 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks 
/var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:00.705 15:01:52 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.705 15:01:52 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:00.705 15:01:52 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:00.705 15:01:52 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:00.705 15:01:52 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:00.705 15:01:52 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:00.705 15:01:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:00.705 15:01:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:00.705 15:01:52 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:00.966 /dev/nbd0 00:06:00.966 15:01:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:00.966 15:01:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:00.966 15:01:53 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:00.966 15:01:53 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:00.966 15:01:53 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:00.966 15:01:53 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:00.966 15:01:53 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:00.966 15:01:53 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:00.966 15:01:53 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:00.966 15:01:53 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:00.966 15:01:53 event.app_repeat -- common/autotest_common.sh@885 -- # dd 
if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:00.966 1+0 records in 00:06:00.966 1+0 records out 00:06:00.966 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000303202 s, 13.5 MB/s 00:06:00.966 15:01:53 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:00.966 15:01:53 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:00.966 15:01:53 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:00.966 15:01:53 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:00.966 15:01:53 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:00.966 15:01:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:00.966 15:01:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:00.966 15:01:53 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:01.227 /dev/nbd1 00:06:01.227 15:01:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:01.227 15:01:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:01.227 15:01:53 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:01.227 15:01:53 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:01.227 15:01:53 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:01.227 15:01:53 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:01.227 15:01:53 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:01.227 15:01:53 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:01.227 15:01:53 event.app_repeat -- 
common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:01.227 15:01:53 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:01.227 15:01:53 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:01.227 1+0 records in 00:06:01.227 1+0 records out 00:06:01.227 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000241532 s, 17.0 MB/s 00:06:01.227 15:01:53 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:01.227 15:01:53 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:01.227 15:01:53 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:01.227 15:01:53 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:01.227 15:01:53 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:01.227 15:01:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:01.227 15:01:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:01.227 15:01:53 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:01.227 15:01:53 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:01.227 15:01:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:01.227 15:01:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:01.227 { 00:06:01.227 "nbd_device": "/dev/nbd0", 00:06:01.227 "bdev_name": "Malloc0" 00:06:01.227 }, 00:06:01.227 { 00:06:01.227 "nbd_device": "/dev/nbd1", 00:06:01.227 "bdev_name": "Malloc1" 00:06:01.227 } 00:06:01.227 ]' 00:06:01.227 15:01:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:01.227 { 
00:06:01.227 "nbd_device": "/dev/nbd0", 00:06:01.227 "bdev_name": "Malloc0" 00:06:01.227 }, 00:06:01.227 { 00:06:01.227 "nbd_device": "/dev/nbd1", 00:06:01.227 "bdev_name": "Malloc1" 00:06:01.227 } 00:06:01.227 ]' 00:06:01.227 15:01:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:01.487 15:01:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:01.487 /dev/nbd1' 00:06:01.487 15:01:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:01.487 /dev/nbd1' 00:06:01.487 15:01:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:01.487 15:01:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:01.487 15:01:53 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:01.487 15:01:53 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:01.487 15:01:53 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:01.487 15:01:53 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:01.488 15:01:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:01.488 15:01:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:01.488 15:01:53 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:01.488 15:01:53 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:01.488 15:01:53 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:01.488 15:01:53 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:01.488 256+0 records in 00:06:01.488 256+0 records out 00:06:01.488 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.012279 s, 85.4 MB/s 00:06:01.488 15:01:53 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 
00:06:01.488 15:01:53 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:01.488 256+0 records in 00:06:01.488 256+0 records out 00:06:01.488 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0158281 s, 66.2 MB/s 00:06:01.488 15:01:53 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:01.488 15:01:53 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:01.488 256+0 records in 00:06:01.488 256+0 records out 00:06:01.488 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0333294 s, 31.5 MB/s 00:06:01.488 15:01:53 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:01.488 15:01:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:01.488 15:01:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:01.488 15:01:53 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:01.488 15:01:53 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:01.488 15:01:53 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:01.488 15:01:53 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:01.488 15:01:53 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:01.488 15:01:53 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:01.488 15:01:53 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:01.488 15:01:53 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:01.488 15:01:53 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:01.488 15:01:53 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:01.488 15:01:53 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:01.488 15:01:53 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:01.488 15:01:53 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:01.488 15:01:53 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:01.488 15:01:53 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:01.488 15:01:53 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:01.748 15:01:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:01.748 15:01:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:01.748 15:01:53 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:01.749 15:01:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:01.749 15:01:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:01.749 15:01:53 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:01.749 15:01:53 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:01.749 15:01:53 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:01.749 15:01:53 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:01.749 15:01:53 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:01.749 15:01:53 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:01.749 15:01:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:01.749 15:01:53 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:01.749 15:01:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:01.749 15:01:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:01.749 15:01:53 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:01.749 15:01:53 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:01.749 15:01:53 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:01.749 15:01:53 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:01.749 15:01:53 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:01.749 15:01:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:02.009 15:01:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:02.009 15:01:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:02.009 15:01:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:02.009 15:01:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:02.009 15:01:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:02.009 15:01:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:02.009 15:01:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:02.009 15:01:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:02.009 15:01:54 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:02.009 15:01:54 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:02.009 15:01:54 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:02.009 15:01:54 event.app_repeat -- 
bdev/nbd_common.sh@109 -- # return 0 00:06:02.009 15:01:54 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:02.270 15:01:54 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:02.270 [2024-07-25 15:01:54.382573] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:02.270 [2024-07-25 15:01:54.446341] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:02.270 [2024-07-25 15:01:54.446344] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.531 [2024-07-25 15:01:54.477889] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:02.531 [2024-07-25 15:01:54.477928] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:05.078 15:01:57 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:05.078 15:01:57 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:05.078 spdk_app_start Round 1 00:06:05.078 15:01:57 event.app_repeat -- event/event.sh@25 -- # waitforlisten 42386 /var/tmp/spdk-nbd.sock 00:06:05.078 15:01:57 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 42386 ']' 00:06:05.078 15:01:57 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:05.078 15:01:57 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:05.078 15:01:57 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:05.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:05.078 15:01:57 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:05.078 15:01:57 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:05.339 15:01:57 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:05.339 15:01:57 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:05.339 15:01:57 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:05.600 Malloc0 00:06:05.600 15:01:57 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:05.600 Malloc1 00:06:05.600 15:01:57 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:05.600 15:01:57 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:05.600 15:01:57 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:05.600 15:01:57 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:05.600 15:01:57 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:05.600 15:01:57 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:05.600 15:01:57 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:05.600 15:01:57 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:05.600 15:01:57 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:05.600 15:01:57 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:05.600 15:01:57 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:05.600 15:01:57 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:06:05.601 15:01:57 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:05.601 15:01:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:05.601 15:01:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:05.601 15:01:57 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:05.862 /dev/nbd0 00:06:05.862 15:01:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:05.862 15:01:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:05.862 15:01:57 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:05.862 15:01:57 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:05.862 15:01:57 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:05.862 15:01:57 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:05.862 15:01:57 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:05.862 15:01:57 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:05.862 15:01:57 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:05.862 15:01:57 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:05.862 15:01:57 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:05.862 1+0 records in 00:06:05.862 1+0 records out 00:06:05.862 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000203291 s, 20.1 MB/s 00:06:05.862 15:01:57 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:05.862 15:01:57 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:05.862 15:01:57 event.app_repeat -- 
common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:05.862 15:01:57 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:05.862 15:01:57 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:05.862 15:01:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:05.862 15:01:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:05.862 15:01:57 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:06.124 /dev/nbd1 00:06:06.124 15:01:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:06.124 15:01:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:06.124 15:01:58 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:06.124 15:01:58 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:06.124 15:01:58 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:06.124 15:01:58 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:06.124 15:01:58 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:06.124 15:01:58 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:06.124 15:01:58 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:06.124 15:01:58 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:06.124 15:01:58 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:06.124 1+0 records in 00:06:06.124 1+0 records out 00:06:06.124 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000273311 s, 15.0 MB/s 00:06:06.124 15:01:58 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:06.124 15:01:58 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:06.124 15:01:58 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:06.124 15:01:58 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:06.124 15:01:58 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:06.124 15:01:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:06.124 15:01:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:06.124 15:01:58 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:06.124 15:01:58 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:06.124 15:01:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:06.124 15:01:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:06.124 { 00:06:06.124 "nbd_device": "/dev/nbd0", 00:06:06.124 "bdev_name": "Malloc0" 00:06:06.124 }, 00:06:06.124 { 00:06:06.124 "nbd_device": "/dev/nbd1", 00:06:06.124 "bdev_name": "Malloc1" 00:06:06.124 } 00:06:06.124 ]' 00:06:06.124 15:01:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:06.124 { 00:06:06.124 "nbd_device": "/dev/nbd0", 00:06:06.124 "bdev_name": "Malloc0" 00:06:06.124 }, 00:06:06.124 { 00:06:06.124 "nbd_device": "/dev/nbd1", 00:06:06.124 "bdev_name": "Malloc1" 00:06:06.124 } 00:06:06.124 ]' 00:06:06.124 15:01:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:06.124 15:01:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:06.124 /dev/nbd1' 00:06:06.124 15:01:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:06.124 /dev/nbd1' 00:06:06.124 
15:01:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:06.124 15:01:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:06.124 15:01:58 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:06.386 15:01:58 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:06.386 15:01:58 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:06.386 15:01:58 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:06.386 15:01:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:06.386 15:01:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:06.386 15:01:58 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:06.386 15:01:58 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:06.386 15:01:58 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:06.386 15:01:58 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:06.386 256+0 records in 00:06:06.386 256+0 records out 00:06:06.386 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00294281 s, 356 MB/s 00:06:06.386 15:01:58 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:06.386 15:01:58 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:06.386 256+0 records in 00:06:06.386 256+0 records out 00:06:06.386 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0163231 s, 64.2 MB/s 00:06:06.386 15:01:58 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:06.386 15:01:58 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:06.386 256+0 records in 00:06:06.386 256+0 records out 00:06:06.386 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0169222 s, 62.0 MB/s 00:06:06.386 15:01:58 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:06.386 15:01:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:06.386 15:01:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:06.386 15:01:58 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:06.386 15:01:58 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:06.386 15:01:58 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:06.386 15:01:58 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:06.386 15:01:58 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:06.386 15:01:58 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:06.386 15:01:58 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:06.386 15:01:58 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:06.386 15:01:58 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:06.386 15:01:58 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:06.386 15:01:58 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:06.386 15:01:58 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:06:06.386 15:01:58 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:06.386 15:01:58 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:06.386 15:01:58 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:06.386 15:01:58 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:06.386 15:01:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:06.386 15:01:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:06.386 15:01:58 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:06.386 15:01:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:06.386 15:01:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:06.386 15:01:58 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:06.386 15:01:58 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:06.387 15:01:58 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:06.387 15:01:58 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:06.387 15:01:58 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:06.648 15:01:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:06.648 15:01:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:06.648 15:01:58 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:06.648 15:01:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:06.648 15:01:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:06.648 15:01:58 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:06.648 15:01:58 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:06:06.648 15:01:58 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:06.648 15:01:58 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:06.648 15:01:58 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:06.648 15:01:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:06.909 15:01:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:06.909 15:01:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:06.909 15:01:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:06.909 15:01:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:06.909 15:01:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:06.909 15:01:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:06.909 15:01:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:06.909 15:01:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:06.909 15:01:58 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:06.909 15:01:58 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:06.909 15:01:58 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:06.909 15:01:58 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:06.909 15:01:58 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:07.169 15:01:59 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:07.170 [2024-07-25 15:01:59.247588] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:07.170 [2024-07-25 15:01:59.311155] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:07.170 [2024-07-25 15:01:59.311158] 
reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.170 [2024-07-25 15:01:59.343701] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:07.170 [2024-07-25 15:01:59.343738] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:10.471 15:02:02 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:10.471 15:02:02 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:10.471 spdk_app_start Round 2 00:06:10.471 15:02:02 event.app_repeat -- event/event.sh@25 -- # waitforlisten 42386 /var/tmp/spdk-nbd.sock 00:06:10.471 15:02:02 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 42386 ']' 00:06:10.471 15:02:02 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:10.471 15:02:02 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:10.471 15:02:02 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:10.471 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:10.471 15:02:02 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:10.471 15:02:02 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:10.471 15:02:02 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:10.471 15:02:02 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:10.471 15:02:02 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:10.471 Malloc0 00:06:10.471 15:02:02 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:10.471 Malloc1 00:06:10.471 15:02:02 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:10.471 15:02:02 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:10.471 15:02:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:10.471 15:02:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:10.471 15:02:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:10.471 15:02:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:10.471 15:02:02 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:10.471 15:02:02 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:10.471 15:02:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:10.471 15:02:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:10.471 15:02:02 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:10.471 15:02:02 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:06:10.471 15:02:02 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:10.471 15:02:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:10.471 15:02:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:10.471 15:02:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:10.732 /dev/nbd0 00:06:10.732 15:02:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:10.732 15:02:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:10.732 15:02:02 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:10.732 15:02:02 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:10.732 15:02:02 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:10.732 15:02:02 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:10.733 15:02:02 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:10.733 15:02:02 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:10.733 15:02:02 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:10.733 15:02:02 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:10.733 15:02:02 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:10.733 1+0 records in 00:06:10.733 1+0 records out 00:06:10.733 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000264539 s, 15.5 MB/s 00:06:10.733 15:02:02 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:10.733 15:02:02 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:10.733 15:02:02 event.app_repeat -- 
common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:10.733 15:02:02 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:10.733 15:02:02 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:10.733 15:02:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:10.733 15:02:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:10.733 15:02:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:10.995 /dev/nbd1 00:06:10.995 15:02:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:10.995 15:02:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:10.995 15:02:02 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:10.995 15:02:02 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:10.995 15:02:02 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:10.995 15:02:02 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:10.995 15:02:02 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:10.995 15:02:02 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:10.995 15:02:02 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:10.995 15:02:02 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:10.995 15:02:02 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:10.995 1+0 records in 00:06:10.995 1+0 records out 00:06:10.995 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000285038 s, 14.4 MB/s 00:06:10.995 15:02:02 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:10.995 15:02:02 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:10.995 15:02:02 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:10.995 15:02:02 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:10.995 15:02:02 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:10.995 15:02:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:10.995 15:02:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:10.995 15:02:02 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:10.995 15:02:02 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:10.995 15:02:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:10.995 15:02:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:10.995 { 00:06:10.995 "nbd_device": "/dev/nbd0", 00:06:10.995 "bdev_name": "Malloc0" 00:06:10.995 }, 00:06:10.995 { 00:06:10.995 "nbd_device": "/dev/nbd1", 00:06:10.995 "bdev_name": "Malloc1" 00:06:10.995 } 00:06:10.995 ]' 00:06:10.995 15:02:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:10.995 { 00:06:10.995 "nbd_device": "/dev/nbd0", 00:06:10.995 "bdev_name": "Malloc0" 00:06:10.995 }, 00:06:10.995 { 00:06:10.995 "nbd_device": "/dev/nbd1", 00:06:10.995 "bdev_name": "Malloc1" 00:06:10.995 } 00:06:10.995 ]' 00:06:10.995 15:02:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:10.995 15:02:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:10.995 /dev/nbd1' 00:06:10.995 15:02:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:10.995 /dev/nbd1' 00:06:10.995 
15:02:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:10.995 15:02:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:10.995 15:02:03 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:10.995 15:02:03 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:10.995 15:02:03 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:10.995 15:02:03 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:10.995 15:02:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:10.995 15:02:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:10.995 15:02:03 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:10.995 15:02:03 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:10.995 15:02:03 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:10.995 15:02:03 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:11.259 256+0 records in 00:06:11.259 256+0 records out 00:06:11.259 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0117661 s, 89.1 MB/s 00:06:11.259 15:02:03 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:11.259 15:02:03 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:11.259 256+0 records in 00:06:11.259 256+0 records out 00:06:11.259 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0162656 s, 64.5 MB/s 00:06:11.259 15:02:03 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:11.259 15:02:03 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:11.259 256+0 records in 00:06:11.259 256+0 records out 00:06:11.259 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0172069 s, 60.9 MB/s 00:06:11.259 15:02:03 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:11.259 15:02:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:11.259 15:02:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:11.259 15:02:03 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:11.259 15:02:03 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:11.259 15:02:03 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:11.259 15:02:03 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:11.259 15:02:03 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:11.259 15:02:03 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:11.259 15:02:03 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:11.259 15:02:03 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:11.259 15:02:03 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:11.259 15:02:03 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:11.259 15:02:03 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:11.259 15:02:03 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:06:11.259 15:02:03 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:11.259 15:02:03 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:11.259 15:02:03 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:11.259 15:02:03 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:11.259 15:02:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:11.259 15:02:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:11.259 15:02:03 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:11.259 15:02:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:11.259 15:02:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:11.259 15:02:03 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:11.259 15:02:03 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:11.259 15:02:03 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:11.259 15:02:03 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:11.259 15:02:03 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:11.519 15:02:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:11.519 15:02:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:11.519 15:02:03 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:11.519 15:02:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:11.519 15:02:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:11.520 15:02:03 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:11.520 15:02:03 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:06:11.520 15:02:03 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:11.520 15:02:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:11.520 15:02:03 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:11.520 15:02:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:11.783 15:02:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:11.783 15:02:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:11.783 15:02:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:11.783 15:02:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:11.783 15:02:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:11.783 15:02:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:11.783 15:02:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:11.783 15:02:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:11.783 15:02:03 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:11.783 15:02:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:11.783 15:02:03 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:11.783 15:02:03 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:11.783 15:02:03 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:12.044 15:02:03 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:12.044 [2024-07-25 15:02:04.111198] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:12.044 [2024-07-25 15:02:04.174446] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:12.044 [2024-07-25 15:02:04.174449] 
reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.044 [2024-07-25 15:02:04.206017] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:12.044 [2024-07-25 15:02:04.206052] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:15.347 15:02:06 event.app_repeat -- event/event.sh@38 -- # waitforlisten 42386 /var/tmp/spdk-nbd.sock 00:06:15.347 15:02:06 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 42386 ']' 00:06:15.347 15:02:06 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:15.347 15:02:06 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:15.347 15:02:06 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:15.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:15.347 15:02:06 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:15.347 15:02:06 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:15.347 15:02:07 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:15.347 15:02:07 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:15.347 15:02:07 event.app_repeat -- event/event.sh@39 -- # killprocess 42386 00:06:15.347 15:02:07 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 42386 ']' 00:06:15.347 15:02:07 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 42386 00:06:15.347 15:02:07 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:06:15.347 15:02:07 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:15.347 15:02:07 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 42386 00:06:15.347 15:02:07 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:15.347 15:02:07 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:15.347 15:02:07 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 42386' 00:06:15.347 killing process with pid 42386 00:06:15.347 15:02:07 event.app_repeat -- common/autotest_common.sh@969 -- # kill 42386 00:06:15.347 15:02:07 event.app_repeat -- common/autotest_common.sh@974 -- # wait 42386 00:06:15.347 spdk_app_start is called in Round 0. 00:06:15.347 Shutdown signal received, stop current app iteration 00:06:15.347 Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 reinitialization... 00:06:15.347 spdk_app_start is called in Round 1. 00:06:15.347 Shutdown signal received, stop current app iteration 00:06:15.347 Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 reinitialization... 00:06:15.347 spdk_app_start is called in Round 2. 
00:06:15.347 Shutdown signal received, stop current app iteration 00:06:15.347 Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 reinitialization... 00:06:15.347 spdk_app_start is called in Round 3. 00:06:15.347 Shutdown signal received, stop current app iteration 00:06:15.347 15:02:07 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:15.347 15:02:07 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:15.347 00:06:15.347 real 0m15.594s 00:06:15.347 user 0m33.608s 00:06:15.347 sys 0m2.129s 00:06:15.347 15:02:07 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:15.347 15:02:07 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:15.347 ************************************ 00:06:15.347 END TEST app_repeat 00:06:15.347 ************************************ 00:06:15.348 15:02:07 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:15.348 15:02:07 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:15.348 15:02:07 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:15.348 15:02:07 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:15.348 15:02:07 event -- common/autotest_common.sh@10 -- # set +x 00:06:15.348 ************************************ 00:06:15.348 START TEST cpu_locks 00:06:15.348 ************************************ 00:06:15.348 15:02:07 event.cpu_locks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:15.348 * Looking for test storage... 
00:06:15.348 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:15.348 15:02:07 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:15.348 15:02:07 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:15.348 15:02:07 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:15.348 15:02:07 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:15.348 15:02:07 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:15.348 15:02:07 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:15.348 15:02:07 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:15.348 ************************************ 00:06:15.348 START TEST default_locks 00:06:15.348 ************************************ 00:06:15.348 15:02:07 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:06:15.348 15:02:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=45649 00:06:15.348 15:02:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 45649 00:06:15.348 15:02:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:15.348 15:02:07 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 45649 ']' 00:06:15.348 15:02:07 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:15.348 15:02:07 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:15.348 15:02:07 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:15.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:15.348 15:02:07 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:15.348 15:02:07 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:15.609 [2024-07-25 15:02:07.575325] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:06:15.609 [2024-07-25 15:02:07.575386] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid45649 ] 00:06:15.609 EAL: No free 2048 kB hugepages reported on node 1 00:06:15.609 [2024-07-25 15:02:07.641528] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.609 [2024-07-25 15:02:07.713455] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.181 15:02:08 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:16.181 15:02:08 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:06:16.181 15:02:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 45649 00:06:16.181 15:02:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 45649 00:06:16.181 15:02:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:16.442 lslocks: write error 00:06:16.442 15:02:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 45649 00:06:16.442 15:02:08 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 45649 ']' 00:06:16.442 15:02:08 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 45649 00:06:16.442 15:02:08 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:06:16.442 15:02:08 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:16.442 15:02:08 event.cpu_locks.default_locks 
-- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 45649 00:06:16.442 15:02:08 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:16.442 15:02:08 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:16.442 15:02:08 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 45649' 00:06:16.442 killing process with pid 45649 00:06:16.442 15:02:08 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 45649 00:06:16.442 15:02:08 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 45649 00:06:16.703 15:02:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 45649 00:06:16.703 15:02:08 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:06:16.703 15:02:08 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 45649 00:06:16.703 15:02:08 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:16.703 15:02:08 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:16.703 15:02:08 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:16.703 15:02:08 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:16.703 15:02:08 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 45649 00:06:16.703 15:02:08 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 45649 ']' 00:06:16.703 15:02:08 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:16.703 15:02:08 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:16.703 15:02:08 event.cpu_locks.default_locks -- 
common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:16.703 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:16.703 15:02:08 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:16.703 15:02:08 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:16.703 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (45649) - No such process 00:06:16.703 ERROR: process (pid: 45649) is no longer running 00:06:16.703 15:02:08 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:16.703 15:02:08 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:06:16.703 15:02:08 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:06:16.703 15:02:08 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:16.703 15:02:08 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:16.703 15:02:08 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:16.703 15:02:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:16.703 15:02:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:16.703 15:02:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:16.703 15:02:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:16.703 00:06:16.703 real 0m1.269s 00:06:16.703 user 0m1.334s 00:06:16.703 sys 0m0.427s 00:06:16.703 15:02:08 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:16.703 15:02:08 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:16.703 ************************************ 00:06:16.703 END TEST default_locks 
00:06:16.703 ************************************ 00:06:16.703 15:02:08 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:16.703 15:02:08 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:16.703 15:02:08 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:16.703 15:02:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:16.703 ************************************ 00:06:16.703 START TEST default_locks_via_rpc 00:06:16.703 ************************************ 00:06:16.703 15:02:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:06:16.703 15:02:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=46009 00:06:16.703 15:02:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 46009 00:06:16.703 15:02:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:16.703 15:02:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 46009 ']' 00:06:16.703 15:02:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:16.703 15:02:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:16.703 15:02:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:16.703 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:16.703 15:02:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:16.703 15:02:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:16.968 [2024-07-25 15:02:08.922949] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:06:16.968 [2024-07-25 15:02:08.923000] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid46009 ] 00:06:16.968 EAL: No free 2048 kB hugepages reported on node 1 00:06:16.968 [2024-07-25 15:02:08.982387] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.968 [2024-07-25 15:02:09.046297] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.568 15:02:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:17.568 15:02:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:17.568 15:02:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:17.568 15:02:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:17.568 15:02:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:17.568 15:02:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:17.568 15:02:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:17.568 15:02:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:17.568 15:02:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:17.568 15:02:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 
00:06:17.568 15:02:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:17.568 15:02:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:17.568 15:02:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:17.568 15:02:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:17.568 15:02:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 46009 00:06:17.568 15:02:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 46009 00:06:17.568 15:02:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:18.140 15:02:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 46009 00:06:18.140 15:02:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 46009 ']' 00:06:18.140 15:02:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 46009 00:06:18.140 15:02:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:06:18.140 15:02:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:18.140 15:02:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 46009 00:06:18.140 15:02:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:18.140 15:02:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:18.140 15:02:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 46009' 00:06:18.140 killing process with pid 46009 00:06:18.140 15:02:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 
46009 00:06:18.140 15:02:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 46009 00:06:18.401 00:06:18.401 real 0m1.481s 00:06:18.401 user 0m1.601s 00:06:18.401 sys 0m0.478s 00:06:18.401 15:02:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:18.401 15:02:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:18.401 ************************************ 00:06:18.401 END TEST default_locks_via_rpc 00:06:18.401 ************************************ 00:06:18.401 15:02:10 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:18.401 15:02:10 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:18.401 15:02:10 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:18.401 15:02:10 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:18.401 ************************************ 00:06:18.401 START TEST non_locking_app_on_locked_coremask 00:06:18.401 ************************************ 00:06:18.401 15:02:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:06:18.401 15:02:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=46371 00:06:18.401 15:02:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 46371 /var/tmp/spdk.sock 00:06:18.401 15:02:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:18.401 15:02:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 46371 ']' 00:06:18.401 15:02:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:06:18.401 15:02:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:18.401 15:02:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:18.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:18.401 15:02:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:18.401 15:02:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:18.401 [2024-07-25 15:02:10.464706] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:06:18.401 [2024-07-25 15:02:10.464753] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid46371 ] 00:06:18.401 EAL: No free 2048 kB hugepages reported on node 1 00:06:18.401 [2024-07-25 15:02:10.524293] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.401 [2024-07-25 15:02:10.588750] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.345 15:02:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:19.345 15:02:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:19.345 15:02:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=46505 00:06:19.345 15:02:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 46505 /var/tmp/spdk2.sock 00:06:19.345 15:02:11 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@831 -- # '[' -z 46505 ']' 00:06:19.345 15:02:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:19.345 15:02:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:19.345 15:02:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:19.345 15:02:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:19.345 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:19.345 15:02:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:19.345 15:02:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:19.345 [2024-07-25 15:02:11.288212] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:06:19.345 [2024-07-25 15:02:11.288269] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid46505 ] 00:06:19.345 EAL: No free 2048 kB hugepages reported on node 1 00:06:19.345 [2024-07-25 15:02:11.378722] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:19.345 [2024-07-25 15:02:11.378758] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.345 [2024-07-25 15:02:11.512210] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.917 15:02:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:19.917 15:02:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:19.917 15:02:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 46371 00:06:19.917 15:02:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 46371 00:06:19.917 15:02:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:20.178 lslocks: write error 00:06:20.178 15:02:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 46371 00:06:20.178 15:02:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 46371 ']' 00:06:20.179 15:02:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 46371 00:06:20.179 15:02:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:20.179 15:02:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:20.179 15:02:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 46371 00:06:20.179 15:02:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:20.179 15:02:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:20.179 15:02:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- 
# echo 'killing process with pid 46371' 00:06:20.179 killing process with pid 46371 00:06:20.179 15:02:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 46371 00:06:20.179 15:02:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 46371 00:06:20.751 15:02:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 46505 00:06:20.751 15:02:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 46505 ']' 00:06:20.751 15:02:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 46505 00:06:20.751 15:02:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:20.751 15:02:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:20.751 15:02:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 46505 00:06:20.751 15:02:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:20.751 15:02:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:20.751 15:02:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 46505' 00:06:20.751 killing process with pid 46505 00:06:20.751 15:02:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 46505 00:06:20.751 15:02:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 46505 00:06:21.012 00:06:21.012 real 0m2.615s 00:06:21.012 user 0m2.867s 00:06:21.012 sys 0m0.752s 00:06:21.012 15:02:13 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:06:21.012 15:02:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:21.012 ************************************ 00:06:21.012 END TEST non_locking_app_on_locked_coremask 00:06:21.012 ************************************ 00:06:21.012 15:02:13 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:21.012 15:02:13 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:21.012 15:02:13 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:21.013 15:02:13 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:21.013 ************************************ 00:06:21.013 START TEST locking_app_on_unlocked_coremask 00:06:21.013 ************************************ 00:06:21.013 15:02:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:06:21.013 15:02:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=46974 00:06:21.013 15:02:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 46974 /var/tmp/spdk.sock 00:06:21.013 15:02:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:21.013 15:02:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 46974 ']' 00:06:21.013 15:02:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:21.013 15:02:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:21.013 15:02:13 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:21.013 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:21.013 15:02:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:21.013 15:02:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:21.013 [2024-07-25 15:02:13.166438] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:06:21.013 [2024-07-25 15:02:13.166499] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid46974 ] 00:06:21.013 EAL: No free 2048 kB hugepages reported on node 1 00:06:21.274 [2024-07-25 15:02:13.230211] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:21.274 [2024-07-25 15:02:13.230248] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.274 [2024-07-25 15:02:13.300668] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.844 15:02:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:21.844 15:02:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:21.844 15:02:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=47097 00:06:21.844 15:02:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 47097 /var/tmp/spdk2.sock 00:06:21.844 15:02:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 47097 ']' 00:06:21.844 15:02:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:21.844 15:02:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:21.844 15:02:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:21.844 15:02:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:21.844 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:21.844 15:02:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:21.844 15:02:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:21.844 [2024-07-25 15:02:13.995298] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:06:21.844 [2024-07-25 15:02:13.995355] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid47097 ] 00:06:21.844 EAL: No free 2048 kB hugepages reported on node 1 00:06:22.104 [2024-07-25 15:02:14.085273] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.104 [2024-07-25 15:02:14.218630] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.676 15:02:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:22.676 15:02:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:22.676 15:02:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 47097 00:06:22.676 15:02:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 47097 00:06:22.676 15:02:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:23.247 lslocks: write error 00:06:23.247 15:02:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 46974 00:06:23.247 15:02:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 46974 ']' 00:06:23.247 15:02:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 46974 00:06:23.247 15:02:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:23.247 15:02:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:23.247 15:02:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 46974 00:06:23.247 15:02:15 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:23.247 15:02:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:23.247 15:02:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 46974' 00:06:23.247 killing process with pid 46974 00:06:23.247 15:02:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 46974 00:06:23.247 15:02:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 46974 00:06:23.818 15:02:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 47097 00:06:23.818 15:02:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 47097 ']' 00:06:23.818 15:02:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 47097 00:06:23.818 15:02:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:23.818 15:02:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:23.818 15:02:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 47097 00:06:23.818 15:02:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:23.818 15:02:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:23.818 15:02:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 47097' 00:06:23.818 killing process with pid 47097 00:06:23.818 15:02:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # 
kill 47097 00:06:23.818 15:02:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 47097 00:06:24.078 00:06:24.078 real 0m2.962s 00:06:24.078 user 0m3.246s 00:06:24.078 sys 0m0.894s 00:06:24.078 15:02:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:24.078 15:02:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:24.078 ************************************ 00:06:24.078 END TEST locking_app_on_unlocked_coremask 00:06:24.078 ************************************ 00:06:24.078 15:02:16 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:24.078 15:02:16 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:24.078 15:02:16 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:24.078 15:02:16 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:24.079 ************************************ 00:06:24.079 START TEST locking_app_on_locked_coremask 00:06:24.079 ************************************ 00:06:24.079 15:02:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:06:24.079 15:02:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=47498 00:06:24.079 15:02:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 47498 /var/tmp/spdk.sock 00:06:24.079 15:02:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:24.079 15:02:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 47498 ']' 00:06:24.079 15:02:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:06:24.079 15:02:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:24.079 15:02:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:24.079 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:24.079 15:02:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:24.079 15:02:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:24.079 [2024-07-25 15:02:16.193420] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:06:24.079 [2024-07-25 15:02:16.193472] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid47498 ] 00:06:24.079 EAL: No free 2048 kB hugepages reported on node 1 00:06:24.079 [2024-07-25 15:02:16.256122] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.339 [2024-07-25 15:02:16.323996] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.910 15:02:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:24.910 15:02:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:24.910 15:02:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=47801 00:06:24.910 15:02:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 47801 /var/tmp/spdk2.sock 00:06:24.910 15:02:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 
00:06:24.910 15:02:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:24.910 15:02:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 47801 /var/tmp/spdk2.sock 00:06:24.910 15:02:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:24.910 15:02:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:24.910 15:02:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:24.910 15:02:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:24.910 15:02:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 47801 /var/tmp/spdk2.sock 00:06:24.910 15:02:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 47801 ']' 00:06:24.910 15:02:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:24.910 15:02:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:24.910 15:02:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:24.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:24.910 15:02:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:24.910 15:02:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:24.910 [2024-07-25 15:02:17.028918] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:06:24.910 [2024-07-25 15:02:17.028971] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid47801 ] 00:06:24.910 EAL: No free 2048 kB hugepages reported on node 1 00:06:25.170 [2024-07-25 15:02:17.116155] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 47498 has claimed it. 00:06:25.170 [2024-07-25 15:02:17.116198] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:25.431 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (47801) - No such process 00:06:25.431 ERROR: process (pid: 47801) is no longer running 00:06:25.431 15:02:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:25.431 15:02:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:25.431 15:02:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:25.431 15:02:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:25.431 15:02:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:25.431 15:02:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:25.431 15:02:17 event.cpu_locks.locking_app_on_locked_coremask -- 
event/cpu_locks.sh@122 -- # locks_exist 47498 00:06:25.431 15:02:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 47498 00:06:25.431 15:02:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:26.003 lslocks: write error 00:06:26.003 15:02:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 47498 00:06:26.003 15:02:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 47498 ']' 00:06:26.003 15:02:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 47498 00:06:26.003 15:02:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:26.003 15:02:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:26.003 15:02:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 47498 00:06:26.003 15:02:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:26.003 15:02:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:26.003 15:02:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 47498' 00:06:26.003 killing process with pid 47498 00:06:26.003 15:02:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 47498 00:06:26.003 15:02:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 47498 00:06:26.264 00:06:26.264 real 0m2.081s 00:06:26.264 user 0m2.320s 00:06:26.264 sys 0m0.566s 00:06:26.264 15:02:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:26.264 15:02:18 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:26.264 ************************************ 00:06:26.264 END TEST locking_app_on_locked_coremask 00:06:26.264 ************************************ 00:06:26.264 15:02:18 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:26.264 15:02:18 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:26.264 15:02:18 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:26.264 15:02:18 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:26.264 ************************************ 00:06:26.264 START TEST locking_overlapped_coremask 00:06:26.264 ************************************ 00:06:26.264 15:02:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:06:26.264 15:02:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=48111 00:06:26.264 15:02:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 48111 /var/tmp/spdk.sock 00:06:26.264 15:02:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:26.264 15:02:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 48111 ']' 00:06:26.264 15:02:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:26.264 15:02:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:26.264 15:02:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:26.264 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:26.264 15:02:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:26.264 15:02:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:26.264 [2024-07-25 15:02:18.348264] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:06:26.264 [2024-07-25 15:02:18.348316] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid48111 ] 00:06:26.264 EAL: No free 2048 kB hugepages reported on node 1 00:06:26.264 [2024-07-25 15:02:18.407716] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:26.524 [2024-07-25 15:02:18.477008] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:26.524 [2024-07-25 15:02:18.477125] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:26.524 [2024-07-25 15:02:18.477128] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.096 15:02:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:27.096 15:02:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:27.096 15:02:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=48181 00:06:27.096 15:02:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 48181 /var/tmp/spdk2.sock 00:06:27.096 15:02:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:27.096 15:02:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 
-m 0x1c -r /var/tmp/spdk2.sock 00:06:27.096 15:02:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 48181 /var/tmp/spdk2.sock 00:06:27.096 15:02:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:27.096 15:02:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:27.096 15:02:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:27.096 15:02:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:27.096 15:02:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 48181 /var/tmp/spdk2.sock 00:06:27.096 15:02:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 48181 ']' 00:06:27.096 15:02:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:27.096 15:02:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:27.096 15:02:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:27.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:27.096 15:02:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:27.096 15:02:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:27.096 [2024-07-25 15:02:19.175650] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:06:27.096 [2024-07-25 15:02:19.175701] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid48181 ] 00:06:27.096 EAL: No free 2048 kB hugepages reported on node 1 00:06:27.096 [2024-07-25 15:02:19.246482] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 48111 has claimed it. 00:06:27.096 [2024-07-25 15:02:19.246513] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:27.668 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (48181) - No such process 00:06:27.668 ERROR: process (pid: 48181) is no longer running 00:06:27.668 15:02:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:27.668 15:02:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:27.668 15:02:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:27.668 15:02:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:27.668 15:02:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:27.668 15:02:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:27.669 15:02:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:27.669 15:02:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:27.669 15:02:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:27.669 15:02:19 event.cpu_locks.locking_overlapped_coremask -- 
event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:27.669 15:02:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 48111 00:06:27.669 15:02:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 48111 ']' 00:06:27.669 15:02:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 48111 00:06:27.669 15:02:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:06:27.669 15:02:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:27.669 15:02:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 48111 00:06:27.669 15:02:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:27.669 15:02:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:27.669 15:02:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 48111' 00:06:27.669 killing process with pid 48111 00:06:27.669 15:02:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 48111 00:06:27.669 15:02:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 48111 00:06:27.930 00:06:27.930 real 0m1.759s 00:06:27.930 user 0m4.991s 00:06:27.930 sys 0m0.366s 00:06:27.930 15:02:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:27.930 15:02:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 
00:06:27.930 ************************************ 00:06:27.930 END TEST locking_overlapped_coremask 00:06:27.930 ************************************ 00:06:27.930 15:02:20 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:27.930 15:02:20 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:27.930 15:02:20 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:27.930 15:02:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:28.191 ************************************ 00:06:28.191 START TEST locking_overlapped_coremask_via_rpc 00:06:28.191 ************************************ 00:06:28.191 15:02:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:06:28.191 15:02:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=48535 00:06:28.192 15:02:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 48535 /var/tmp/spdk.sock 00:06:28.192 15:02:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:28.192 15:02:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 48535 ']' 00:06:28.192 15:02:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:28.192 15:02:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:28.192 15:02:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:28.192 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:28.192 15:02:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:28.192 15:02:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:28.192 [2024-07-25 15:02:20.179877] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:06:28.192 [2024-07-25 15:02:20.179922] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid48535 ] 00:06:28.192 EAL: No free 2048 kB hugepages reported on node 1 00:06:28.192 [2024-07-25 15:02:20.238680] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:28.192 [2024-07-25 15:02:20.238712] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:28.192 [2024-07-25 15:02:20.305508] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:28.192 [2024-07-25 15:02:20.307215] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:28.192 [2024-07-25 15:02:20.307229] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.764 15:02:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:28.764 15:02:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:28.764 15:02:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:28.764 15:02:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=48556 00:06:28.764 15:02:20 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 48556 /var/tmp/spdk2.sock 00:06:28.764 15:02:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 48556 ']' 00:06:28.764 15:02:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:28.764 15:02:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:28.764 15:02:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:28.764 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:28.764 15:02:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:28.764 15:02:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:29.024 [2024-07-25 15:02:20.990461] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:06:29.024 [2024-07-25 15:02:20.990514] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid48556 ] 00:06:29.024 EAL: No free 2048 kB hugepages reported on node 1 00:06:29.024 [2024-07-25 15:02:21.060473] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:29.024 [2024-07-25 15:02:21.060500] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:29.024 [2024-07-25 15:02:21.166198] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:29.024 [2024-07-25 15:02:21.173275] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:29.024 [2024-07-25 15:02:21.173277] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:06:29.595 15:02:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:29.595 15:02:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:29.595 15:02:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:29.595 15:02:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:29.595 15:02:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:29.595 15:02:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:29.595 15:02:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:29.595 15:02:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:29.595 15:02:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:29.595 15:02:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:29.595 15:02:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:29.595 15:02:21 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:29.595 15:02:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:29.595 15:02:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:29.595 15:02:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:29.595 15:02:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:29.595 [2024-07-25 15:02:21.776267] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 48535 has claimed it. 00:06:29.595 request: 00:06:29.595 { 00:06:29.595 "method": "framework_enable_cpumask_locks", 00:06:29.595 "req_id": 1 00:06:29.595 } 00:06:29.595 Got JSON-RPC error response 00:06:29.856 response: 00:06:29.856 { 00:06:29.856 "code": -32603, 00:06:29.856 "message": "Failed to claim CPU core: 2" 00:06:29.856 } 00:06:29.856 15:02:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:29.857 15:02:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:29.857 15:02:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:29.857 15:02:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:29.857 15:02:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:29.857 15:02:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 48535 /var/tmp/spdk.sock 00:06:29.857 15:02:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # 
'[' -z 48535 ']' 00:06:29.857 15:02:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:29.857 15:02:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:29.857 15:02:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:29.857 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:29.857 15:02:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:29.857 15:02:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:29.857 15:02:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:29.857 15:02:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:29.857 15:02:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 48556 /var/tmp/spdk2.sock 00:06:29.857 15:02:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 48556 ']' 00:06:29.857 15:02:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:29.857 15:02:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:29.857 15:02:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:29.857 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:29.857 15:02:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:29.857 15:02:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:30.118 15:02:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:30.118 15:02:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:30.118 15:02:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:30.118 15:02:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:30.118 15:02:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:30.118 15:02:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:30.118 00:06:30.118 real 0m2.002s 00:06:30.118 user 0m0.777s 00:06:30.118 sys 0m0.153s 00:06:30.118 15:02:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:30.118 15:02:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:30.118 ************************************ 00:06:30.118 END TEST locking_overlapped_coremask_via_rpc 00:06:30.118 ************************************ 00:06:30.118 15:02:22 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:30.118 15:02:22 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 48535 ]] 00:06:30.118 15:02:22 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 48535 00:06:30.118 15:02:22 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 48535 ']' 00:06:30.118 15:02:22 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 48535 00:06:30.118 15:02:22 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:30.118 15:02:22 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:30.118 15:02:22 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 48535 00:06:30.118 15:02:22 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:30.118 15:02:22 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:30.118 15:02:22 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 48535' 00:06:30.118 killing process with pid 48535 00:06:30.118 15:02:22 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 48535 00:06:30.118 15:02:22 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 48535 00:06:30.378 15:02:22 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 48556 ]] 00:06:30.378 15:02:22 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 48556 00:06:30.378 15:02:22 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 48556 ']' 00:06:30.378 15:02:22 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 48556 00:06:30.378 15:02:22 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:30.378 15:02:22 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:30.378 15:02:22 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 48556 00:06:30.378 15:02:22 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:06:30.378 15:02:22 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:06:30.378 15:02:22 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 48556' 00:06:30.378 killing 
process with pid 48556 00:06:30.378 15:02:22 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 48556 00:06:30.378 15:02:22 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 48556 00:06:30.638 15:02:22 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:30.638 15:02:22 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:30.638 15:02:22 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 48535 ]] 00:06:30.638 15:02:22 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 48535 00:06:30.638 15:02:22 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 48535 ']' 00:06:30.638 15:02:22 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 48535 00:06:30.638 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (48535) - No such process 00:06:30.638 15:02:22 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 48535 is not found' 00:06:30.638 Process with pid 48535 is not found 00:06:30.638 15:02:22 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 48556 ]] 00:06:30.638 15:02:22 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 48556 00:06:30.638 15:02:22 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 48556 ']' 00:06:30.638 15:02:22 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 48556 00:06:30.638 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (48556) - No such process 00:06:30.638 15:02:22 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 48556 is not found' 00:06:30.638 Process with pid 48556 is not found 00:06:30.638 15:02:22 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:30.638 00:06:30.638 real 0m15.316s 00:06:30.638 user 0m26.778s 00:06:30.638 sys 0m4.486s 00:06:30.638 15:02:22 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:30.638 15:02:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set 
+x 00:06:30.638 ************************************ 00:06:30.638 END TEST cpu_locks 00:06:30.638 ************************************ 00:06:30.638 00:06:30.638 real 0m40.804s 00:06:30.638 user 1m19.727s 00:06:30.638 sys 0m7.591s 00:06:30.638 15:02:22 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:30.638 15:02:22 event -- common/autotest_common.sh@10 -- # set +x 00:06:30.638 ************************************ 00:06:30.638 END TEST event 00:06:30.638 ************************************ 00:06:30.638 15:02:22 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:30.638 15:02:22 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:30.638 15:02:22 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:30.638 15:02:22 -- common/autotest_common.sh@10 -- # set +x 00:06:30.638 ************************************ 00:06:30.638 START TEST thread 00:06:30.638 ************************************ 00:06:30.638 15:02:22 thread -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:30.899 * Looking for test storage... 
00:06:30.899 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:30.899 15:02:22 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:30.899 15:02:22 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:30.899 15:02:22 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:30.899 15:02:22 thread -- common/autotest_common.sh@10 -- # set +x 00:06:30.899 ************************************ 00:06:30.899 START TEST thread_poller_perf 00:06:30.899 ************************************ 00:06:30.899 15:02:22 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:30.899 [2024-07-25 15:02:22.963473] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:06:30.899 [2024-07-25 15:02:22.963583] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid49021 ] 00:06:30.899 EAL: No free 2048 kB hugepages reported on node 1 00:06:30.899 [2024-07-25 15:02:23.032650] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.159 [2024-07-25 15:02:23.107945] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.159 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:32.128 ====================================== 00:06:32.128 busy:2413265040 (cyc) 00:06:32.128 total_run_count: 288000 00:06:32.128 tsc_hz: 2400000000 (cyc) 00:06:32.128 ====================================== 00:06:32.128 poller_cost: 8379 (cyc), 3491 (nsec) 00:06:32.128 00:06:32.128 real 0m1.230s 00:06:32.128 user 0m1.147s 00:06:32.128 sys 0m0.079s 00:06:32.128 15:02:24 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:32.128 15:02:24 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:32.128 ************************************ 00:06:32.128 END TEST thread_poller_perf 00:06:32.128 ************************************ 00:06:32.128 15:02:24 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:32.128 15:02:24 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:32.128 15:02:24 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:32.128 15:02:24 thread -- common/autotest_common.sh@10 -- # set +x 00:06:32.128 ************************************ 00:06:32.128 START TEST thread_poller_perf 00:06:32.128 ************************************ 00:06:32.128 15:02:24 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:32.128 [2024-07-25 15:02:24.270388] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:06:32.128 [2024-07-25 15:02:24.270482] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid49343 ] 00:06:32.128 EAL: No free 2048 kB hugepages reported on node 1 00:06:32.389 [2024-07-25 15:02:24.336023] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.389 [2024-07-25 15:02:24.399216] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.389 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:33.332 ====================================== 00:06:33.332 busy:2402094242 (cyc) 00:06:33.332 total_run_count: 3807000 00:06:33.332 tsc_hz: 2400000000 (cyc) 00:06:33.332 ====================================== 00:06:33.332 poller_cost: 630 (cyc), 262 (nsec) 00:06:33.332 00:06:33.332 real 0m1.207s 00:06:33.332 user 0m1.131s 00:06:33.332 sys 0m0.071s 00:06:33.332 15:02:25 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:33.332 15:02:25 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:33.332 ************************************ 00:06:33.332 END TEST thread_poller_perf 00:06:33.332 ************************************ 00:06:33.332 15:02:25 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:33.332 00:06:33.332 real 0m2.688s 00:06:33.332 user 0m2.371s 00:06:33.332 sys 0m0.324s 00:06:33.332 15:02:25 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:33.332 15:02:25 thread -- common/autotest_common.sh@10 -- # set +x 00:06:33.332 ************************************ 00:06:33.332 END TEST thread 00:06:33.332 ************************************ 00:06:33.594 15:02:25 -- spdk/autotest.sh@184 -- # [[ 0 -eq 1 ]] 00:06:33.594 15:02:25 -- spdk/autotest.sh@189 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 
00:06:33.594 15:02:25 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:33.594 15:02:25 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:33.594 15:02:25 -- common/autotest_common.sh@10 -- # set +x 00:06:33.594 ************************************ 00:06:33.594 START TEST app_cmdline 00:06:33.594 ************************************ 00:06:33.594 15:02:25 app_cmdline -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:33.594 * Looking for test storage... 00:06:33.594 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:33.594 15:02:25 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:33.594 15:02:25 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=49732 00:06:33.594 15:02:25 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 49732 00:06:33.594 15:02:25 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:33.594 15:02:25 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 49732 ']' 00:06:33.594 15:02:25 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:33.594 15:02:25 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:33.594 15:02:25 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:33.594 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:33.594 15:02:25 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:33.594 15:02:25 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:33.594 [2024-07-25 15:02:25.732732] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:06:33.594 [2024-07-25 15:02:25.732798] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid49732 ] 00:06:33.594 EAL: No free 2048 kB hugepages reported on node 1 00:06:33.855 [2024-07-25 15:02:25.799189] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.855 [2024-07-25 15:02:25.872804] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.426 15:02:26 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:34.426 15:02:26 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:06:34.426 15:02:26 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:34.687 { 00:06:34.687 "version": "SPDK v24.09-pre git sha1 704257090", 00:06:34.687 "fields": { 00:06:34.687 "major": 24, 00:06:34.687 "minor": 9, 00:06:34.687 "patch": 0, 00:06:34.687 "suffix": "-pre", 00:06:34.687 "commit": "704257090" 00:06:34.687 } 00:06:34.687 } 00:06:34.687 15:02:26 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:34.687 15:02:26 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:34.687 15:02:26 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:34.687 15:02:26 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:34.687 15:02:26 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:34.687 15:02:26 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:34.687 15:02:26 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:34.687 15:02:26 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.687 15:02:26 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:34.687 15:02:26 app_cmdline -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:06:34.687 15:02:26 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:34.687 15:02:26 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:34.687 15:02:26 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:34.687 15:02:26 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:06:34.687 15:02:26 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:34.687 15:02:26 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:34.687 15:02:26 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:34.687 15:02:26 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:34.687 15:02:26 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:34.687 15:02:26 app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:34.687 15:02:26 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:34.687 15:02:26 app_cmdline -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:34.687 15:02:26 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:34.687 15:02:26 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:34.687 request: 00:06:34.687 { 00:06:34.687 "method": "env_dpdk_get_mem_stats", 00:06:34.687 "req_id": 1 00:06:34.687 } 00:06:34.687 Got 
JSON-RPC error response 00:06:34.687 response: 00:06:34.687 { 00:06:34.687 "code": -32601, 00:06:34.687 "message": "Method not found" 00:06:34.687 } 00:06:34.948 15:02:26 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:06:34.948 15:02:26 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:34.948 15:02:26 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:34.948 15:02:26 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:34.948 15:02:26 app_cmdline -- app/cmdline.sh@1 -- # killprocess 49732 00:06:34.948 15:02:26 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 49732 ']' 00:06:34.948 15:02:26 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 49732 00:06:34.948 15:02:26 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:06:34.948 15:02:26 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:34.948 15:02:26 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 49732 00:06:34.948 15:02:26 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:34.948 15:02:26 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:34.948 15:02:26 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 49732' 00:06:34.948 killing process with pid 49732 00:06:34.948 15:02:26 app_cmdline -- common/autotest_common.sh@969 -- # kill 49732 00:06:34.948 15:02:26 app_cmdline -- common/autotest_common.sh@974 -- # wait 49732 00:06:35.209 00:06:35.209 real 0m1.582s 00:06:35.209 user 0m1.903s 00:06:35.209 sys 0m0.409s 00:06:35.209 15:02:27 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:35.209 15:02:27 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:35.209 ************************************ 00:06:35.209 END TEST app_cmdline 00:06:35.210 ************************************ 00:06:35.210 15:02:27 -- spdk/autotest.sh@190 -- # run_test version 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:35.210 15:02:27 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:35.210 15:02:27 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:35.210 15:02:27 -- common/autotest_common.sh@10 -- # set +x 00:06:35.210 ************************************ 00:06:35.210 START TEST version 00:06:35.210 ************************************ 00:06:35.210 15:02:27 version -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:35.210 * Looking for test storage... 00:06:35.210 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:35.210 15:02:27 version -- app/version.sh@17 -- # get_header_version major 00:06:35.210 15:02:27 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:35.210 15:02:27 version -- app/version.sh@14 -- # cut -f2 00:06:35.210 15:02:27 version -- app/version.sh@14 -- # tr -d '"' 00:06:35.210 15:02:27 version -- app/version.sh@17 -- # major=24 00:06:35.210 15:02:27 version -- app/version.sh@18 -- # get_header_version minor 00:06:35.210 15:02:27 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:35.210 15:02:27 version -- app/version.sh@14 -- # cut -f2 00:06:35.210 15:02:27 version -- app/version.sh@14 -- # tr -d '"' 00:06:35.210 15:02:27 version -- app/version.sh@18 -- # minor=9 00:06:35.210 15:02:27 version -- app/version.sh@19 -- # get_header_version patch 00:06:35.210 15:02:27 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:35.210 15:02:27 version -- app/version.sh@14 -- # cut -f2 00:06:35.210 15:02:27 version -- app/version.sh@14 -- # tr -d '"' 
00:06:35.210 15:02:27 version -- app/version.sh@19 -- # patch=0 00:06:35.210 15:02:27 version -- app/version.sh@20 -- # get_header_version suffix 00:06:35.210 15:02:27 version -- app/version.sh@14 -- # cut -f2 00:06:35.210 15:02:27 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:35.210 15:02:27 version -- app/version.sh@14 -- # tr -d '"' 00:06:35.210 15:02:27 version -- app/version.sh@20 -- # suffix=-pre 00:06:35.210 15:02:27 version -- app/version.sh@22 -- # version=24.9 00:06:35.210 15:02:27 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:35.210 15:02:27 version -- app/version.sh@28 -- # version=24.9rc0 00:06:35.210 15:02:27 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:35.210 15:02:27 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:35.471 15:02:27 version -- app/version.sh@30 -- # py_version=24.9rc0 00:06:35.471 15:02:27 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:06:35.471 00:06:35.471 real 0m0.179s 00:06:35.471 user 0m0.097s 00:06:35.471 sys 0m0.124s 00:06:35.471 15:02:27 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:35.471 15:02:27 version -- common/autotest_common.sh@10 -- # set +x 00:06:35.471 ************************************ 00:06:35.471 END TEST version 00:06:35.471 ************************************ 00:06:35.471 15:02:27 -- spdk/autotest.sh@192 -- # '[' 0 -eq 1 ']' 00:06:35.471 15:02:27 -- spdk/autotest.sh@202 -- # uname -s 00:06:35.471 15:02:27 -- spdk/autotest.sh@202 -- # [[ Linux == Linux ]] 00:06:35.471 15:02:27 -- 
spdk/autotest.sh@203 -- # [[ 0 -eq 1 ]] 00:06:35.471 15:02:27 -- spdk/autotest.sh@203 -- # [[ 0 -eq 1 ]] 00:06:35.471 15:02:27 -- spdk/autotest.sh@215 -- # '[' 0 -eq 1 ']' 00:06:35.471 15:02:27 -- spdk/autotest.sh@260 -- # '[' 0 -eq 1 ']' 00:06:35.471 15:02:27 -- spdk/autotest.sh@264 -- # timing_exit lib 00:06:35.471 15:02:27 -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:35.471 15:02:27 -- common/autotest_common.sh@10 -- # set +x 00:06:35.471 15:02:27 -- spdk/autotest.sh@266 -- # '[' 0 -eq 1 ']' 00:06:35.471 15:02:27 -- spdk/autotest.sh@274 -- # '[' 0 -eq 1 ']' 00:06:35.471 15:02:27 -- spdk/autotest.sh@283 -- # '[' 1 -eq 1 ']' 00:06:35.471 15:02:27 -- spdk/autotest.sh@284 -- # export NET_TYPE 00:06:35.471 15:02:27 -- spdk/autotest.sh@287 -- # '[' tcp = rdma ']' 00:06:35.471 15:02:27 -- spdk/autotest.sh@290 -- # '[' tcp = tcp ']' 00:06:35.471 15:02:27 -- spdk/autotest.sh@291 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:35.471 15:02:27 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:35.471 15:02:27 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:35.471 15:02:27 -- common/autotest_common.sh@10 -- # set +x 00:06:35.471 ************************************ 00:06:35.471 START TEST nvmf_tcp 00:06:35.471 ************************************ 00:06:35.471 15:02:27 nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:35.471 * Looking for test storage... 00:06:35.471 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:35.471 15:02:27 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:06:35.471 15:02:27 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:35.471 15:02:27 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:35.471 15:02:27 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:35.471 15:02:27 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:35.471 15:02:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:35.733 ************************************ 00:06:35.733 START TEST nvmf_target_core 00:06:35.733 ************************************ 00:06:35.733 15:02:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:35.733 * Looking for test storage... 00:06:35.733 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:35.733 15:02:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:06:35.733 15:02:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:35.733 15:02:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:35.733 15:02:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:06:35.733 15:02:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:35.733 15:02:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:35.733 15:02:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:35.733 15:02:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:35.733 15:02:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:35.733 15:02:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:35.733 15:02:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:35.733 15:02:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:35.733 15:02:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:35.733 15:02:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:35.733 15:02:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:35.733 15:02:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:35.733 15:02:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:35.733 15:02:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:35.733 15:02:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:35.733 15:02:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:35.733 15:02:27 nvmf_tcp.nvmf_target_core -- 
nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:35.733 15:02:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:35.733 15:02:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:35.733 15:02:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:35.733 15:02:27 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:35.733 15:02:27 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:35.733 15:02:27 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:35.733 15:02:27 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:06:35.733 15:02:27 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:35.733 15:02:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@47 -- # : 0 00:06:35.733 15:02:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:35.733 15:02:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:35.733 15:02:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:35.733 15:02:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:35.733 15:02:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:35.733 15:02:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:35.733 15:02:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:35.733 15:02:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:35.733 15:02:27 
nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:35.733 15:02:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:06:35.733 15:02:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:06:35.733 15:02:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:35.733 15:02:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:35.733 15:02:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:35.733 15:02:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:35.733 ************************************ 00:06:35.733 START TEST nvmf_abort 00:06:35.733 ************************************ 00:06:35.733 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:35.995 * Looking for test storage... 
00:06:35.995 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:35.995 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:35.995 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:06:35.995 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:35.995 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:35.995 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:35.995 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:35.995 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:35.995 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:35.995 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:35.995 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:35.995 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:35.995 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:35.995 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:35.995 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:35.995 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:35.995 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 
00:06:35.995 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:35.995 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:35.995 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:35.995 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:35.995 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:35.995 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:35.995 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:35.995 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:35.995 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:35.995 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:06:35.995 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:35.995 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:06:35.995 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:35.995 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:35.995 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:35.995 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:35.995 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:35.995 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:35.995 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:35.995 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:35.995 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:35.995 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:06:35.995 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:06:35.995 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:35.995 15:02:27 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:35.995 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:35.995 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:35.995 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:35.995 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:35.995 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:35.995 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:35.995 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:35.995 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:35.995 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:06:35.995 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:42.592 15:02:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:42.592 15:02:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:06:42.592 15:02:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:42.592 15:02:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:42.592 15:02:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:42.592 15:02:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:42.592 15:02:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:42.592 15:02:34 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:06:42.592 15:02:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:42.592 15:02:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:06:42.592 15:02:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:06:42.592 15:02:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:06:42.592 15:02:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:06:42.592 15:02:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:06:42.592 15:02:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:06:42.592 15:02:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:42.592 15:02:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:42.592 15:02:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:42.592 15:02:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:42.592 15:02:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:42.592 15:02:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:42.592 15:02:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:42.592 15:02:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:42.592 15:02:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:42.592 15:02:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:42.592 15:02:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:42.592 15:02:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:42.592 15:02:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:42.592 15:02:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:42.592 15:02:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:42.592 15:02:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:42.592 15:02:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:42.592 15:02:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:42.592 15:02:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:06:42.592 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:06:42.592 15:02:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:42.592 15:02:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:42.592 15:02:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:42.592 15:02:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:42.592 15:02:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:42.592 15:02:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:42.592 15:02:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:06:42.592 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:06:42.592 15:02:34 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:42.592 15:02:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:42.592 15:02:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:42.592 15:02:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:42.592 15:02:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:42.592 15:02:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:42.592 15:02:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:42.593 15:02:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:42.593 15:02:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:42.593 15:02:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:42.593 15:02:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:42.593 15:02:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:42.593 15:02:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:42.593 15:02:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:42.593 15:02:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:42.593 15:02:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:06:42.593 Found net devices under 0000:4b:00.0: cvl_0_0 00:06:42.593 15:02:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:42.593 15:02:34 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:42.593 15:02:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:42.593 15:02:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:42.593 15:02:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:42.593 15:02:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:42.593 15:02:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:42.593 15:02:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:42.593 15:02:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:06:42.593 Found net devices under 0000:4b:00.1: cvl_0_1 00:06:42.593 15:02:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:42.593 15:02:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:42.593 15:02:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:06:42.593 15:02:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:42.593 15:02:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:42.593 15:02:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:42.593 15:02:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:42.593 15:02:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:42.593 15:02:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:42.593 15:02:34 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:42.593 15:02:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:42.593 15:02:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:42.593 15:02:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:42.593 15:02:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:42.593 15:02:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:42.593 15:02:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:42.593 15:02:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:42.593 15:02:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:42.593 15:02:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:42.593 15:02:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:42.593 15:02:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:42.593 15:02:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:42.593 15:02:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:42.855 15:02:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:42.855 15:02:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:42.855 15:02:34 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:42.855 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:42.855 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.625 ms 00:06:42.855 00:06:42.855 --- 10.0.0.2 ping statistics --- 00:06:42.855 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:42.855 rtt min/avg/max/mdev = 0.625/0.625/0.625/0.000 ms 00:06:42.855 15:02:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:42.855 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:42.855 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.410 ms 00:06:42.855 00:06:42.855 --- 10.0.0.1 ping statistics --- 00:06:42.855 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:42.855 rtt min/avg/max/mdev = 0.410/0.410/0.410/0.000 ms 00:06:42.855 15:02:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:42.855 15:02:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:06:42.855 15:02:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:42.855 15:02:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:42.855 15:02:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:42.855 15:02:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:42.855 15:02:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:42.855 15:02:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:42.855 15:02:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:42.855 15:02:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:06:42.855 15:02:34 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:42.855 15:02:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:42.855 15:02:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:42.855 15:02:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=53873 00:06:42.855 15:02:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 53873 00:06:42.855 15:02:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@831 -- # '[' -z 53873 ']' 00:06:42.855 15:02:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:42.855 15:02:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:42.855 15:02:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:42.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:42.855 15:02:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:42.855 15:02:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:42.855 15:02:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:42.855 [2024-07-25 15:02:35.008703] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:06:42.855 [2024-07-25 15:02:35.008790] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:42.855 EAL: No free 2048 kB hugepages reported on node 1 00:06:43.116 [2024-07-25 15:02:35.097402] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:43.116 [2024-07-25 15:02:35.190590] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:43.116 [2024-07-25 15:02:35.190648] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:43.116 [2024-07-25 15:02:35.190657] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:43.116 [2024-07-25 15:02:35.190664] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:43.116 [2024-07-25 15:02:35.190670] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:43.116 [2024-07-25 15:02:35.190799] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:43.116 [2024-07-25 15:02:35.190963] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:43.116 [2024-07-25 15:02:35.190965] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:43.715 15:02:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:43.715 15:02:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # return 0 00:06:43.715 15:02:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:43.715 15:02:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:43.715 15:02:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:43.715 15:02:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:43.715 15:02:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:06:43.715 15:02:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:43.715 15:02:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:43.715 [2024-07-25 15:02:35.812851] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:43.715 15:02:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:43.715 15:02:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:06:43.715 15:02:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:43.715 15:02:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:43.715 Malloc0 00:06:43.715 15:02:35 
nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:43.715 15:02:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:43.715 15:02:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:43.715 15:02:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:43.715 Delay0 00:06:43.715 15:02:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:43.715 15:02:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:43.715 15:02:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:43.715 15:02:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:43.715 15:02:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:43.715 15:02:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:06:43.715 15:02:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:43.715 15:02:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:43.715 15:02:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:43.715 15:02:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:43.715 15:02:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:43.715 15:02:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:43.715 [2024-07-25 15:02:35.871186] 
tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:43.715 15:02:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:43.715 15:02:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:43.715 15:02:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:43.715 15:02:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:43.715 15:02:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:43.715 15:02:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:06:43.715 EAL: No free 2048 kB hugepages reported on node 1 00:06:43.976 [2024-07-25 15:02:35.925953] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:06:45.888 Initializing NVMe Controllers 00:06:45.888 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:06:45.888 controller IO queue size 128 less than required 00:06:45.888 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:06:45.888 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:06:45.888 Initialization complete. Launching workers. 
00:06:45.888 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 124, failed: 28482 00:06:45.888 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28544, failed to submit 62 00:06:45.888 success 28486, unsuccess 58, failed 0 00:06:45.888 15:02:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:45.888 15:02:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:45.888 15:02:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:45.888 15:02:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:45.888 15:02:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:06:45.888 15:02:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:06:45.888 15:02:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:45.888 15:02:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:06:45.888 15:02:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:45.888 15:02:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:06:45.888 15:02:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:45.888 15:02:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:45.888 rmmod nvme_tcp 00:06:45.888 rmmod nvme_fabrics 00:06:45.888 rmmod nvme_keyring 00:06:45.888 15:02:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:45.888 15:02:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:06:45.888 15:02:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:06:45.888 15:02:38 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 53873 ']' 00:06:45.888 15:02:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 53873 00:06:45.888 15:02:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 53873 ']' 00:06:45.888 15:02:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 53873 00:06:45.888 15:02:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # uname 00:06:45.888 15:02:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:45.888 15:02:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 53873 00:06:46.149 15:02:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:06:46.149 15:02:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:06:46.149 15:02:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 53873' 00:06:46.149 killing process with pid 53873 00:06:46.149 15:02:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@969 -- # kill 53873 00:06:46.149 15:02:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@974 -- # wait 53873 00:06:46.149 15:02:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:46.149 15:02:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:46.149 15:02:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:46.149 15:02:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:46.149 15:02:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:46.149 15:02:38 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:46.149 15:02:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:46.149 15:02:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:48.697 15:02:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:48.697 00:06:48.697 real 0m12.481s 00:06:48.697 user 0m12.706s 00:06:48.697 sys 0m6.145s 00:06:48.697 15:02:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:48.697 15:02:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:48.697 ************************************ 00:06:48.697 END TEST nvmf_abort 00:06:48.697 ************************************ 00:06:48.697 15:02:40 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:48.697 15:02:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:48.697 15:02:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:48.697 15:02:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:48.697 ************************************ 00:06:48.697 START TEST nvmf_ns_hotplug_stress 00:06:48.697 ************************************ 00:06:48.697 15:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:48.697 * Looking for test storage... 
00:06:48.697 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:48.697 15:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:48.697 15:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:06:48.697 15:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:48.697 15:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:48.697 15:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:48.697 15:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:48.697 15:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:48.697 15:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:48.697 15:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:48.697 15:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:48.697 15:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:48.697 15:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:48.697 15:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:48.697 15:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:48.697 15:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:48.697 15:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:48.697 15:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:48.697 15:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:48.697 15:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:48.697 15:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:48.697 15:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:48.697 15:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:48.697 15:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:48.697 15:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:48.697 15:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:48.697 15:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:06:48.697 15:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:48.697 15:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:06:48.697 15:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:48.697 15:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:48.697 15:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:48.697 15:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:48.697 15:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:48.697 15:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:48.697 15:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:48.697 15:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:48.697 15:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:48.697 15:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:06:48.697 15:02:40 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:06:48.697 15:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:06:48.697 15:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs
00:06:48.697 15:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no
00:06:48.697 15:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns
00:06:48.697 15:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:06:48.697 15:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:06:48.697 15:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:06:48.697 15:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:06:48.697 15:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:06:48.697 15:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable
00:06:48.697 15:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:06:55.350 15:02:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:06:55.350 15:02:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=()
00:06:55.350 15:02:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs
00:06:55.350 15:02:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=()
00:06:55.350 15:02:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:06:55.350 15:02:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=()
00:06:55.350 15:02:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers
00:06:55.350 15:02:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=()
00:06:55.350 15:02:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs
00:06:55.350 15:02:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=()
00:06:55.350 15:02:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810
00:06:55.350 15:02:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=()
00:06:55.350 15:02:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722
00:06:55.350 15:02:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=()
00:06:55.350 15:02:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx
00:06:55.350 15:02:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:06:55.350 15:02:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:06:55.350 15:02:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:06:55.350 15:02:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:06:55.350 15:02:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:06:55.350 15:02:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:06:55.350 15:02:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:06:55.350 15:02:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:06:55.350 15:02:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:06:55.350 15:02:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:06:55.350 15:02:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:06:55.350 15:02:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:06:55.350 15:02:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:06:55.350 15:02:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:06:55.350 15:02:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:06:55.350 15:02:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:06:55.350 15:02:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:06:55.350 15:02:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:06:55.350 15:02:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)'
00:06:55.350 Found 0000:4b:00.0 (0x8086 - 0x159b)
00:06:55.350 15:02:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:06:55.350 15:02:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:06:55.350 15:02:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:06:55.350 15:02:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:06:55.350 15:02:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:06:55.350 15:02:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:06:55.350 15:02:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)'
00:06:55.350 Found 0000:4b:00.1 (0x8086 - 0x159b)
00:06:55.350 15:02:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:06:55.350 15:02:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:06:55.350 15:02:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:06:55.350 15:02:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:06:55.350 15:02:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:06:55.350 15:02:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:06:55.350 15:02:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:06:55.350 15:02:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:06:55.350 15:02:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:06:55.350 15:02:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:06:55.350 15:02:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:06:55.350 15:02:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:06:55.350 15:02:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]]
00:06:55.350 15:02:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:06:55.350 15:02:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:06:55.350 15:02:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0'
00:06:55.350 Found net devices under 0000:4b:00.0: cvl_0_0
00:06:55.350 15:02:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:06:55.350 15:02:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:06:55.350 15:02:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:06:55.350 15:02:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:06:55.350 15:02:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:06:55.350 15:02:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]]
00:06:55.350 15:02:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:06:55.350 15:02:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:06:55.350 15:02:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1'
00:06:55.350 Found net devices under 0000:4b:00.1: cvl_0_1
00:06:55.350 15:02:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:06:55.350 15:02:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:06:55.350 15:02:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes
00:06:55.350 15:02:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:06:55.350 15:02:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:06:55.350 15:02:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:06:55.351 15:02:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:06:55.351 15:02:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:06:55.351 15:02:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:06:55.351 15:02:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:06:55.351 15:02:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:06:55.351 15:02:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:06:55.351 15:02:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:06:55.351 15:02:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:06:55.351 15:02:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:06:55.351 15:02:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:06:55.351 15:02:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:06:55.351 15:02:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:06:55.351 15:02:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:06:55.351 15:02:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:06:55.351 15:02:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:06:55.351 15:02:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:06:55.351 15:02:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:06:55.616 15:02:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:06:55.616 15:02:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:06:55.616 15:02:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:06:55.616 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:06:55.616 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.598 ms
00:06:55.616
00:06:55.616 --- 10.0.0.2 ping statistics ---
00:06:55.616 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:06:55.616 rtt min/avg/max/mdev = 0.598/0.598/0.598/0.000 ms
00:06:55.616 15:02:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:06:55.616 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:06:55.616 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.410 ms
00:06:55.616
00:06:55.616 --- 10.0.0.1 ping statistics ---
00:06:55.616 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:06:55.616 rtt min/avg/max/mdev = 0.410/0.410/0.410/0.000 ms
00:06:55.616 15:02:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:06:55.616 15:02:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0
00:06:55.616 15:02:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:06:55.616 15:02:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:06:55.616 15:02:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:06:55.616 15:02:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:06:55.616 15:02:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:06:55.616 15:02:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:06:55.616 15:02:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:06:55.616 15:02:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE
00:06:55.616 15:02:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:06:55.616 15:02:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable
00:06:55.616 15:02:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:06:55.616 15:02:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:06:55.616 15:02:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=58792
00:06:55.616 15:02:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 58792
00:06:55.616 15:02:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z 58792 ']'
00:06:55.616 15:02:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:55.616 15:02:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100
00:06:55.616 15:02:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:55.616 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:55.616 15:02:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable
00:06:55.616 15:02:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:06:55.616 [2024-07-25 15:02:47.694496] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
00:06:55.616 [2024-07-25 15:02:47.694556] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:06:55.616 EAL: No free 2048 kB hugepages reported on node 1
00:06:55.616 [2024-07-25 15:02:47.778854] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3
00:06:55.877 [2024-07-25 15:02:47.860683] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:06:55.877 [2024-07-25 15:02:47.860744] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:06:55.877 [2024-07-25 15:02:47.860752] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:06:55.877 [2024-07-25 15:02:47.860759] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:06:55.877 [2024-07-25 15:02:47.860764] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:06:55.877 [2024-07-25 15:02:47.860885] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:06:55.877 [2024-07-25 15:02:47.861049] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:06:55.877 [2024-07-25 15:02:47.861050] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:06:56.449 15:02:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:06:56.449 15:02:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0
00:06:56.449 15:02:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:06:56.449 15:02:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable
00:06:56.449 15:02:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:06:56.449 15:02:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:06:56.449 15:02:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000
00:06:56.449 15:02:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:06:56.710 [2024-07-25 15:02:48.672259] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:06:56.710 15:02:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:06:56.710 15:02:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:06:56.971 [2024-07-25 15:02:49.021122] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:06:56.971 15:02:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:06:57.232 15:02:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0
00:06:57.232 Malloc0
00:06:57.232 15:02:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:06:57.493 Delay0
00:06:57.493 15:02:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:57.753 15:02:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512
00:06:57.753 NULL1
00:06:57.753 15:02:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
00:06:58.014 15:02:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000
00:06:58.014 15:02:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=59262
00:06:58.014 15:02:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 59262
00:06:58.014 15:02:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:58.014 EAL: No free 2048 kB hugepages reported on node 1
00:06:58.275 15:02:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:58.275 15:02:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001
00:06:58.275 15:02:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001
00:06:58.536 true
00:06:58.536 15:02:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 59262
00:06:58.536 15:02:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:58.796 15:02:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:58.796 15:02:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002
00:06:58.796 15:02:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002
00:06:59.057 true
00:06:59.057 15:02:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 59262
00:06:59.057 15:02:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:59.318 15:02:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:59.318 15:02:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003
00:06:59.318 15:02:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003
00:06:59.579 true
00:06:59.579 15:02:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 59262
00:06:59.579 15:02:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:59.579 15:02:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:59.839 15:02:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004
00:06:59.839 15:02:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004
00:07:00.100 true
00:07:00.100 15:02:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 59262
00:07:00.100 15:02:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:00.100 15:02:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:00.361 15:02:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005
00:07:00.361 15:02:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005
00:07:00.361 true
00:07:00.622 15:02:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 59262
00:07:00.622 15:02:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:01.563 Read completed with error (sct=0, sc=11)
00:07:01.563 15:02:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:01.563 15:02:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006
00:07:01.563 15:02:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006
00:07:01.563 true
00:07:01.823 15:02:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 59262
00:07:01.823 15:02:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:01.823 15:02:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:02.084 15:02:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007
00:07:02.084 15:02:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007
00:07:02.084 true
00:07:02.344 15:02:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 59262
00:07:02.344 15:02:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:02.344 15:02:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:02.605 15:02:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008
00:07:02.605 15:02:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008
00:07:02.605 true
00:07:02.605 15:02:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 59262
00:07:02.605 15:02:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:02.866 15:02:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:03.126 15:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009
00:07:03.126 15:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009
00:07:03.126 true
00:07:03.126 15:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 59262
00:07:03.126 15:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:03.387 15:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:03.648 15:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010
00:07:03.648 15:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010
00:07:03.648 true
00:07:03.648 15:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 59262
00:07:03.648 15:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:03.909 15:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:04.170 15:02:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011
00:07:04.170 15:02:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011
00:07:04.170 true
00:07:04.170 15:02:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 59262
00:07:04.170 15:02:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:04.431 15:02:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:04.432 15:02:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012
00:07:04.432 15:02:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012
00:07:04.692 true
00:07:04.693 15:02:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 59262
00:07:04.693 15:02:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:04.954 15:02:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:04.955 15:02:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013
00:07:04.955 15:02:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013
00:07:05.216 true
00:07:05.216 15:02:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 59262
00:07:05.216 15:02:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:05.477 15:02:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:05.477 15:02:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014
00:07:05.477 15:02:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014
00:07:05.739 true
00:07:05.739 15:02:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 59262
00:07:05.739 15:02:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:06.683 15:02:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:06.943 15:02:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015
00:07:06.943 15:02:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015
00:07:06.943 true
00:07:06.943 15:02:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 59262
00:07:06.943 15:02:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:07.216 15:02:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:07.476 15:02:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016
00:07:07.476 15:02:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016
00:07:07.476 true
00:07:07.476 15:02:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 59262
00:07:07.476 15:02:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:07.736 15:02:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:07.736 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:07.736 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:07.736 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:08.031 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:08.031 [2024-07-25 15:02:59.948252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:07:08.031 [2024-07-25 15:02:59.948313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:07:08.031 [2024-07-25 15:02:59.948342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:07:08.031 [2024-07-25 15:02:59.948371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:07:08.031 [2024-07-25 15:02:59.948396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:07:08.031 [2024-07-25 15:02:59.948423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:07:08.031 [2024-07-25 15:02:59.948451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:07:08.031 [2024-07-25 15:02:59.948480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:07:08.031 [2024-07-25 15:02:59.948510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:07:08.031 [2024-07-25 15:02:59.948538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:07:08.031 [2024-07-25 15:02:59.948572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:07:08.031 [2024-07-25 15:02:59.948602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:07:08.031 [2024-07-25 15:02:59.948630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:07:08.031 [2024-07-25 15:02:59.948668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:07:08.031 [2024-07-25 15:02:59.948697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:07:08.031 [2024-07-25 15:02:59.948727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:07:08.031 [2024-07-25 15:02:59.948756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:07:08.031 [2024-07-25 15:02:59.948784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:07:08.031 [2024-07-25 15:02:59.948810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:07:08.031 [2024-07-25 15:02:59.948840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:07:08.031 [2024-07-25 15:02:59.948867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:07:08.031 [2024-07-25 15:02:59.948898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:07:08.031 [2024-07-25 15:02:59.948925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:07:08.031 [2024-07-25 15:02:59.948956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512
> SGL length 1 00:07:08.031 [2024-07-25 15:02:59.948983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.031 [2024-07-25 15:02:59.949014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.031 [2024-07-25 15:02:59.949042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.031 [2024-07-25 15:02:59.949070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.031 [2024-07-25 15:02:59.949099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.031 [2024-07-25 15:02:59.949127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.031 [2024-07-25 15:02:59.949156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.031 [2024-07-25 15:02:59.949186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.031 [2024-07-25 15:02:59.949215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.031 [2024-07-25 15:02:59.949243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.031 [2024-07-25 15:02:59.949269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.031 [2024-07-25 15:02:59.949295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.031 [2024-07-25 15:02:59.949323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.031 [2024-07-25 15:02:59.949347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.031 [2024-07-25 15:02:59.949376] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.031 [2024-07-25 15:02:59.949407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.031 [2024-07-25 15:02:59.949434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.031 [2024-07-25 15:02:59.949461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.031 [2024-07-25 15:02:59.949487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.031 [2024-07-25 15:02:59.949515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.031 [2024-07-25 15:02:59.949544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.031 [2024-07-25 15:02:59.949570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.031 [2024-07-25 15:02:59.949598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.031 [2024-07-25 15:02:59.949776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.031 [2024-07-25 15:02:59.949816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.031 [2024-07-25 15:02:59.949843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.031 [2024-07-25 15:02:59.949869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.031 [2024-07-25 15:02:59.949897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.031 [2024-07-25 15:02:59.949925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:08.031 [2024-07-25 15:02:59.949954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.031 [2024-07-25 15:02:59.949981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.031 [2024-07-25 15:02:59.950011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.031 [2024-07-25 15:02:59.950042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.031 [2024-07-25 15:02:59.950071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.031 [2024-07-25 15:02:59.950099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.031 [2024-07-25 15:02:59.950124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.031 [2024-07-25 15:02:59.950152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.031 [2024-07-25 15:02:59.950179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.031 [2024-07-25 15:02:59.950213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.031 [2024-07-25 15:02:59.950243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.031 [2024-07-25 15:02:59.950271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.031 [2024-07-25 15:02:59.950300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.031 [2024-07-25 15:02:59.950329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.032 [2024-07-25 
15:02:59.950354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.032 [2024-07-25 15:02:59.950383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.032 [2024-07-25 15:02:59.950409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.032 [2024-07-25 15:02:59.950436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.032 [2024-07-25 15:02:59.950466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.032 [2024-07-25 15:02:59.950492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.032 [2024-07-25 15:02:59.950521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.032 [2024-07-25 15:02:59.950552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.032 [2024-07-25 15:02:59.950580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.032 [2024-07-25 15:02:59.950614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.032 [2024-07-25 15:02:59.950644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.032 [2024-07-25 15:02:59.950675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.032 [2024-07-25 15:02:59.950705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.032 [2024-07-25 15:02:59.950738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.032 [2024-07-25 15:02:59.950766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.032 [2024-07-25 15:02:59.950794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.032 [2024-07-25 15:02:59.950818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.032 [2024-07-25 15:02:59.950848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.032 [2024-07-25 15:02:59.950877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.032 [2024-07-25 15:02:59.950909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.032 [2024-07-25 15:02:59.950935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.032 [2024-07-25 15:02:59.950963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.032 [2024-07-25 15:02:59.950991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.032 [2024-07-25 15:02:59.951020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.032 [2024-07-25 15:02:59.951046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.032 [2024-07-25 15:02:59.951075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.032 [2024-07-25 15:02:59.951103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.032 [2024-07-25 15:02:59.951139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.032 [2024-07-25 15:02:59.951162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.032 
[2024-07-25 15:02:59.951191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.032 [2024-07-25 15:02:59.951232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.032 [2024-07-25 15:02:59.951268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.032 [2024-07-25 15:02:59.951297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.032 [2024-07-25 15:02:59.951324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.032 [2024-07-25 15:02:59.951351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.032 [2024-07-25 15:02:59.951378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.032 [2024-07-25 15:02:59.951407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.032 [2024-07-25 15:02:59.951434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.032 [2024-07-25 15:02:59.951462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.032 [2024-07-25 15:02:59.951490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.032 [2024-07-25 15:02:59.951517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.032 [2024-07-25 15:02:59.951543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.032 [2024-07-25 15:02:59.951570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.032 [2024-07-25 15:02:59.951599] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.032 [2024-07-25 15:02:59.951724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.032 [2024-07-25 15:02:59.951765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.032 [2024-07-25 15:02:59.951793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.032 [2024-07-25 15:02:59.951815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.032 [2024-07-25 15:02:59.951844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.032 [2024-07-25 15:02:59.951872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.032 [2024-07-25 15:02:59.951901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.032 [2024-07-25 15:02:59.951928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.032 [2024-07-25 15:02:59.951957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.032 [2024-07-25 15:02:59.951987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.032 [2024-07-25 15:02:59.952012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.032 [2024-07-25 15:02:59.952042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.032 [2024-07-25 15:02:59.952070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.032 [2024-07-25 15:02:59.952100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:08.032 [2024-07-25 15:02:59.952129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.032 [2024-07-25 15:02:59.952159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.032 [2024-07-25 15:02:59.952382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.032 [2024-07-25 15:02:59.952415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.032 [2024-07-25 15:02:59.952453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.032 [2024-07-25 15:02:59.952481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.032 [2024-07-25 15:02:59.952513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.032 [2024-07-25 15:02:59.952541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.032 [2024-07-25 15:02:59.952568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.032 [2024-07-25 15:02:59.952594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.032 [2024-07-25 15:02:59.952622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.032 [2024-07-25 15:02:59.952651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.032 [2024-07-25 15:02:59.952678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.032 [2024-07-25 15:02:59.952705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.032 [2024-07-25 15:02:59.952735] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.032 [2024-07-25 15:02:59.952761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.032 [2024-07-25 15:02:59.952789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.032 [2024-07-25 15:02:59.952815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.032 [2024-07-25 15:02:59.952838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.032 [2024-07-25 15:02:59.952870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.032 [2024-07-25 15:02:59.952898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.032 [2024-07-25 15:02:59.952930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.032 [2024-07-25 15:02:59.952956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.032 [2024-07-25 15:02:59.952980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.032 [2024-07-25 15:02:59.953008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.032 [2024-07-25 15:02:59.953038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.032 [2024-07-25 15:02:59.953069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.033 [2024-07-25 15:02:59.953097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.033 [2024-07-25 15:02:59.953125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:08.033 [2024-07-25 15:02:59.953152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.033 [2024-07-25 15:02:59.953181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.033 [2024-07-25 15:02:59.953213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.033 [2024-07-25 15:02:59.953240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.033 [2024-07-25 15:02:59.953264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.033 [2024-07-25 15:02:59.953294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.033 [2024-07-25 15:02:59.953324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.033 [2024-07-25 15:02:59.953354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.033 [2024-07-25 15:02:59.953384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.033 [2024-07-25 15:02:59.953411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.033 [2024-07-25 15:02:59.953439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.033 [2024-07-25 15:02:59.953474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.033 [2024-07-25 15:02:59.953503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.033 [2024-07-25 15:02:59.953533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.033 [2024-07-25 
15:02:59.953563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.033 [2024-07-25 15:02:59.953590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.033 [2024-07-25 15:02:59.953617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.033 [2024-07-25 15:02:59.953651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.033 [2024-07-25 15:02:59.953678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.033 [2024-07-25 15:02:59.953708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.033 [2024-07-25 15:02:59.954094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.033 [2024-07-25 15:02:59.954124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.033 [2024-07-25 15:02:59.954150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.033 [2024-07-25 15:02:59.954177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.033 [2024-07-25 15:02:59.954207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.033 [2024-07-25 15:02:59.954235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.033 [2024-07-25 15:02:59.954261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.033 [2024-07-25 15:02:59.954289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.033 [2024-07-25 15:02:59.954316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.033 [2024-07-25 15:02:59.954342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.033 [2024-07-25 15:02:59.954369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.033 [2024-07-25 15:02:59.954394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.033 [2024-07-25 15:02:59.954422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.033 [2024-07-25 15:02:59.954451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.033 [2024-07-25 15:02:59.954478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.033 [2024-07-25 15:02:59.954504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.033 [2024-07-25 15:02:59.954532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.033 [2024-07-25 15:02:59.954559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.033 [2024-07-25 15:02:59.954586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.033 [2024-07-25 15:02:59.954614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.033 [2024-07-25 15:02:59.954645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.033 [2024-07-25 15:02:59.954675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.033 [2024-07-25 15:02:59.954706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.033 
[2024-07-25 15:02:59.954732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.033 [2024-07-25 15:02:59.954758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.033 [2024-07-25 15:02:59.954787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.033 [2024-07-25 15:02:59.954815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.033 [2024-07-25 15:02:59.954840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.033 [2024-07-25 15:02:59.954869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.033 [2024-07-25 15:02:59.954897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.033 [2024-07-25 15:02:59.954923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.033 [2024-07-25 15:02:59.954955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.033 [2024-07-25 15:02:59.954982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.033 [2024-07-25 15:02:59.955012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.033 [2024-07-25 15:02:59.955043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.033 [2024-07-25 15:02:59.955070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.033 [2024-07-25 15:02:59.955101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.033 [2024-07-25 15:02:59.955134] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.033 [2024-07-25 15:02:59.955165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.033 [2024-07-25 15:02:59.955198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.033 [2024-07-25 15:02:59.955228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.033 [2024-07-25 15:02:59.955277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.033 [2024-07-25 15:02:59.955305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.033 [2024-07-25 15:02:59.955338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.033 [2024-07-25 15:02:59.955366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.033 [2024-07-25 15:02:59.955395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.033 [2024-07-25 15:02:59.955424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.033 [2024-07-25 15:02:59.955453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.033 [2024-07-25 15:02:59.955481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.033 [2024-07-25 15:02:59.955507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.033 [2024-07-25 15:02:59.955534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.033 [2024-07-25 15:02:59.955559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:08.033 [2024-07-25 15:02:59.955584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.033 [2024-07-25 15:02:59.955611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.033 [2024-07-25 15:02:59.955637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.033 [2024-07-25 15:02:59.955661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.034 [2024-07-25 15:02:59.955690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.034 [2024-07-25 15:02:59.955718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.034 [2024-07-25 15:02:59.955747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.034 [2024-07-25 15:02:59.955776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.034 [2024-07-25 15:02:59.955806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.034 [2024-07-25 15:02:59.955836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.034 [2024-07-25 15:02:59.955867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.034 [2024-07-25 15:02:59.955898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.034 [2024-07-25 15:02:59.956195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.034 [2024-07-25 15:02:59.956244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.034 [2024-07-25 15:02:59.956273] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.034 [2024-07-25 15:02:59.956300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.034 [2024-07-25 15:02:59.956328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.034 [2024-07-25 15:02:59.956359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.034 [2024-07-25 15:02:59.956385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.034 [2024-07-25 15:02:59.956413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.034 [2024-07-25 15:02:59.956440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.034 [2024-07-25 15:02:59.956465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.034 [2024-07-25 15:02:59.956493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.034 [2024-07-25 15:02:59.956520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.034 [2024-07-25 15:02:59.956547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.034 [2024-07-25 15:02:59.956575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.034 [2024-07-25 15:02:59.956602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.034 [2024-07-25 15:02:59.956629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.034 [2024-07-25 15:02:59.956659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:08.034 [2024-07-25 15:02:59.956695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.034 [2024-07-25 15:02:59.956731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.034 [2024-07-25 15:02:59.956767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.034 [2024-07-25 15:02:59.956801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.034 [2024-07-25 15:02:59.956832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.034 [2024-07-25 15:02:59.956868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.034 [2024-07-25 15:02:59.956905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.034 [2024-07-25 15:02:59.956939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.034 [2024-07-25 15:02:59.956973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.034 [2024-07-25 15:02:59.957007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.034 [2024-07-25 15:02:59.957033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.034 [2024-07-25 15:02:59.957060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.034 [2024-07-25 15:02:59.957087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.034 [2024-07-25 15:02:59.957115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.034 [2024-07-25 
15:02:59.957142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.034 [2024-07-25 15:02:59.957171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.034 [2024-07-25 15:02:59.957199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.034 [2024-07-25 15:02:59.957234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.034 [2024-07-25 15:02:59.957262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.034 [2024-07-25 15:02:59.957291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.034 [2024-07-25 15:02:59.957320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.034 [2024-07-25 15:02:59.957347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.034 [2024-07-25 15:02:59.957373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.034 [2024-07-25 15:02:59.957401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.034 [2024-07-25 15:02:59.957451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.034 [2024-07-25 15:02:59.957478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.034 [2024-07-25 15:02:59.957504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.034 [2024-07-25 15:02:59.957530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.034 [2024-07-25 15:02:59.957557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.034 [2024-07-25 15:02:59.957585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.034 [2024-07-25 15:02:59.957615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.034 [2024-07-25 15:02:59.957644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.034 [2024-07-25 15:02:59.957673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.034 [2024-07-25 15:02:59.957699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.034 [2024-07-25 15:02:59.957728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.034 [2024-07-25 15:02:59.957753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.034 [2024-07-25 15:02:59.957784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.034 [2024-07-25 15:02:59.957812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.034 [2024-07-25 15:02:59.957841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.034 [2024-07-25 15:02:59.957871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.034 [2024-07-25 15:02:59.957901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.034 [2024-07-25 15:02:59.957930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.034 [2024-07-25 15:02:59.957961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.034 
[2024-07-25 15:02:59.957993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.034 [2024-07-25 15:02:59.958021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.034 [2024-07-25 15:02:59.958051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.034 [2024-07-25 15:02:59.958501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.034 [2024-07-25 15:02:59.958533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.034 [2024-07-25 15:02:59.958559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.034 [2024-07-25 15:02:59.958587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.034 [2024-07-25 15:02:59.958613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.034 [2024-07-25 15:02:59.958642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.034 [2024-07-25 15:02:59.958669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.034 [2024-07-25 15:02:59.958698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.034 [2024-07-25 15:02:59.958726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.034 [2024-07-25 15:02:59.958754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.034 [2024-07-25 15:02:59.958780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.034 [2024-07-25 15:02:59.958808] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.035 [2024-07-25 15:02:59.958834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.035 [2024-07-25 15:02:59.958862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.035 [2024-07-25 15:02:59.958888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.035 [2024-07-25 15:02:59.958915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.035 [2024-07-25 15:02:59.958942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.035 [2024-07-25 15:02:59.958968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.035 [2024-07-25 15:02:59.958998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.035 [2024-07-25 15:02:59.959027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.035 [2024-07-25 15:02:59.959072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.035 [2024-07-25 15:02:59.959102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.035 [2024-07-25 15:02:59.959133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.035 [2024-07-25 15:02:59.959161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.035 [2024-07-25 15:02:59.959189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.035 [2024-07-25 15:02:59.959224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:08.035 [2024-07-25 15:02:59.959252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.035 [2024-07-25 15:02:59.959281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.035 [2024-07-25 15:02:59.959309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.035 [2024-07-25 15:02:59.959341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.035 [2024-07-25 15:02:59.959368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.035 [2024-07-25 15:02:59.959394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.035 [2024-07-25 15:02:59.959421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.035 [2024-07-25 15:02:59.959450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.035 [2024-07-25 15:02:59.959477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.035 [2024-07-25 15:02:59.959503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.035 [2024-07-25 15:02:59.959531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.035 [2024-07-25 15:02:59.959556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.035 [2024-07-25 15:02:59.959584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.035 [2024-07-25 15:02:59.959617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.035 [2024-07-25 15:02:59.959648] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.035 [2024-07-25 15:02:59.959675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.035 [2024-07-25 15:02:59.959702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.035 [2024-07-25 15:02:59.959731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.035 [2024-07-25 15:02:59.959761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.035 [2024-07-25 15:02:59.959788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.035 [2024-07-25 15:02:59.959814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.035 [2024-07-25 15:02:59.959839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.035 [2024-07-25 15:02:59.959866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.035 [2024-07-25 15:02:59.959891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.035 [2024-07-25 15:02:59.959923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.035 [2024-07-25 15:02:59.959954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.035 [2024-07-25 15:02:59.959984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.035 [2024-07-25 15:02:59.960015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.035 [2024-07-25 15:02:59.960042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:08.035 [2024-07-25 15:02:59.960069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.035 [2024-07-25 15:02:59.960101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.035 [2024-07-25 15:02:59.960128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.035 [2024-07-25 15:02:59.960156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.035 [2024-07-25 15:02:59.960183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.035 [2024-07-25 15:02:59.960213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.035 [2024-07-25 15:02:59.960240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.035 [2024-07-25 15:02:59.960282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.035 [2024-07-25 15:02:59.960311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.035 [2024-07-25 15:02:59.960660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.035 [2024-07-25 15:02:59.960687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.035 [2024-07-25 15:02:59.960711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.035 [2024-07-25 15:02:59.960742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.035 [2024-07-25 15:02:59.960769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.035 [2024-07-25 
15:02:59.960798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.035 [2024-07-25 15:02:59.960829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.035 [2024-07-25 15:02:59.960858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.035 [2024-07-25 15:02:59.960889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.035 [2024-07-25 15:02:59.960920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.035 [2024-07-25 15:02:59.960947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.035 [2024-07-25 15:02:59.960973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.035 [2024-07-25 15:02:59.961002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.035 [2024-07-25 15:02:59.961039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.035 [2024-07-25 15:02:59.961067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.035 [2024-07-25 15:02:59.961093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.035 [2024-07-25 15:02:59.961121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.035 [2024-07-25 15:02:59.961149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.035 [2024-07-25 15:02:59.961177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.035 [2024-07-25 15:02:59.961207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.035 [2024-07-25 15:02:59.961234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.035 [2024-07-25 15:02:59.961266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.035 [2024-07-25 15:02:59.961297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.035 [2024-07-25 15:02:59.961330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.035 [2024-07-25 15:02:59.961357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.035 [2024-07-25 15:02:59.961385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.035 [2024-07-25 15:02:59.961416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.035 [2024-07-25 15:02:59.961476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.035 [2024-07-25 15:02:59.961505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.035 [2024-07-25 15:02:59.961532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.035 [2024-07-25 15:02:59.961559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.035 [2024-07-25 15:02:59.961586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.035 [2024-07-25 15:02:59.961612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.036 [2024-07-25 15:02:59.961644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.036 
[2024-07-25 15:02:59.961671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.036 [2024-07-25 15:02:59.961698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.036 [2024-07-25 15:02:59.961733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.036 [2024-07-25 15:02:59.961760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.036 [2024-07-25 15:02:59.961788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.036 [2024-07-25 15:02:59.961821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.036 [2024-07-25 15:02:59.961851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.036 [2024-07-25 15:02:59.961883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.036 [2024-07-25 15:02:59.961918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.036 [2024-07-25 15:02:59.961946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.036 [2024-07-25 15:02:59.961973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.036 [2024-07-25 15:02:59.962000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.036 [2024-07-25 15:02:59.962027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.036 [2024-07-25 15:02:59.962050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.036 [2024-07-25 15:02:59.962073] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.036 [2024-07-25 15:02:59.962103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.036 [2024-07-25 15:02:59.962127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.036 [2024-07-25 15:02:59.962156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.036 [2024-07-25 15:02:59.962186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.036 [2024-07-25 15:02:59.962219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.036 [2024-07-25 15:02:59.962248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.036 [2024-07-25 15:02:59.962278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.036 [2024-07-25 15:02:59.962304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.036 [2024-07-25 15:02:59.962332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.036 [2024-07-25 15:02:59.962362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.036 [2024-07-25 15:02:59.962390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.036 [2024-07-25 15:02:59.962421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.036 [2024-07-25 15:02:59.962451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.036 [2024-07-25 15:02:59.962479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:08.036 [2024-07-25 15:02:59.962875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.036 [2024-07-25 15:02:59.962907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.036 [2024-07-25 15:02:59.962936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.036 [2024-07-25 15:02:59.962964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.036 [2024-07-25 15:02:59.962998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.036 [2024-07-25 15:02:59.963026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.036 [2024-07-25 15:02:59.963057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.036 [2024-07-25 15:02:59.963086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.036 [2024-07-25 15:02:59.963116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.036 [2024-07-25 15:02:59.963143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.036 [2024-07-25 15:02:59.963170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.036 [2024-07-25 15:02:59.963205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.036 [2024-07-25 15:02:59.963231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.036 [2024-07-25 15:02:59.963256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.036 [2024-07-25 15:02:59.963284] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.036 [2024-07-25 15:02:59.963316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.036 [2024-07-25 15:02:59.963345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.036 [2024-07-25 15:02:59.963374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.036 [2024-07-25 15:02:59.963403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.036 [2024-07-25 15:02:59.963432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.036 [2024-07-25 15:02:59.963459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.036 [2024-07-25 15:02:59.963490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.036 [2024-07-25 15:02:59.963516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.036 [2024-07-25 15:02:59.963560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.036 [2024-07-25 15:02:59.963587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.036 [2024-07-25 15:02:59.963640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.036 [2024-07-25 15:02:59.963668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.036 [2024-07-25 15:02:59.963695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.036 [2024-07-25 15:02:59.963721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:08.036 [2024-07-25 15:02:59.963749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.037 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:07:08.040 [2024-07-25 15:02:59.973581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512
> SGL length 1 00:07:08.040 [2024-07-25 15:02:59.973612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.040 [2024-07-25 15:02:59.973994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.040 [2024-07-25 15:02:59.974024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.040 [2024-07-25 15:02:59.974054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.040 [2024-07-25 15:02:59.974082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.040 [2024-07-25 15:02:59.974113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.040 [2024-07-25 15:02:59.974141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.040 [2024-07-25 15:02:59.974168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.040 [2024-07-25 15:02:59.974198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.040 [2024-07-25 15:02:59.974231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.040 [2024-07-25 15:02:59.974257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.040 [2024-07-25 15:02:59.974284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.040 [2024-07-25 15:02:59.974311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.040 [2024-07-25 15:02:59.974338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.040 [2024-07-25 15:02:59.974363] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.040 [2024-07-25 15:02:59.974393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.040 [2024-07-25 15:02:59.974420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.040 [2024-07-25 15:02:59.974448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.040 [2024-07-25 15:02:59.974478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.040 [2024-07-25 15:02:59.974508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.040 [2024-07-25 15:02:59.974538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.040 [2024-07-25 15:02:59.974567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.040 [2024-07-25 15:02:59.974596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.040 [2024-07-25 15:02:59.974629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.040 [2024-07-25 15:02:59.974658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.040 [2024-07-25 15:02:59.974688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.040 [2024-07-25 15:02:59.974715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.040 [2024-07-25 15:02:59.974747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.040 [2024-07-25 15:02:59.974778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:08.040 [2024-07-25 15:02:59.974808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.040 [2024-07-25 15:02:59.974836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.040 [2024-07-25 15:02:59.974865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.040 [2024-07-25 15:02:59.974895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.040 [2024-07-25 15:02:59.974922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.040 [2024-07-25 15:02:59.974953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.040 [2024-07-25 15:02:59.974981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.040 [2024-07-25 15:02:59.975011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.040 [2024-07-25 15:02:59.975040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.040 [2024-07-25 15:02:59.975077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.040 [2024-07-25 15:02:59.975106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.040 [2024-07-25 15:02:59.975142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.040 [2024-07-25 15:02:59.975177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.040 [2024-07-25 15:02:59.975218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.040 [2024-07-25 
15:02:59.975248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.040 [2024-07-25 15:02:59.975277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.040 [2024-07-25 15:02:59.975306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.040 [2024-07-25 15:02:59.975334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.040 [2024-07-25 15:02:59.975361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.040 [2024-07-25 15:02:59.975392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.040 [2024-07-25 15:02:59.975420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.040 [2024-07-25 15:02:59.975451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.040 [2024-07-25 15:02:59.975479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.040 [2024-07-25 15:02:59.975509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.040 [2024-07-25 15:02:59.975538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.040 [2024-07-25 15:02:59.975567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.040 [2024-07-25 15:02:59.975593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.040 [2024-07-25 15:02:59.975620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.040 [2024-07-25 15:02:59.975648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.040 [2024-07-25 15:02:59.975691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.040 [2024-07-25 15:02:59.975720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.040 [2024-07-25 15:02:59.975748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.041 [2024-07-25 15:02:59.975777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.041 [2024-07-25 15:02:59.975814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.041 [2024-07-25 15:02:59.975844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.041 [2024-07-25 15:02:59.976204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.041 [2024-07-25 15:02:59.976235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.041 [2024-07-25 15:02:59.976264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.041 [2024-07-25 15:02:59.976292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.041 [2024-07-25 15:02:59.976319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.041 [2024-07-25 15:02:59.976346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.041 [2024-07-25 15:02:59.976374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.041 [2024-07-25 15:02:59.976402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.041 
[2024-07-25 15:02:59.976427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.041 [2024-07-25 15:02:59.976455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.041 [2024-07-25 15:02:59.976478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.041 [2024-07-25 15:02:59.976510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.041 [2024-07-25 15:02:59.976541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.041 [2024-07-25 15:02:59.976574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.041 [2024-07-25 15:02:59.976606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.041 [2024-07-25 15:02:59.976634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.041 [2024-07-25 15:02:59.976662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.041 [2024-07-25 15:02:59.976689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.041 [2024-07-25 15:02:59.976717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.041 [2024-07-25 15:02:59.976745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.041 [2024-07-25 15:02:59.976772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.041 [2024-07-25 15:02:59.976801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.041 [2024-07-25 15:02:59.976829] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.041 [2024-07-25 15:02:59.976857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.041 [2024-07-25 15:02:59.976889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.041 [2024-07-25 15:02:59.976917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.041 [2024-07-25 15:02:59.976945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.041 [2024-07-25 15:02:59.976970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.041 [2024-07-25 15:02:59.976998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.041 [2024-07-25 15:02:59.977024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.041 [2024-07-25 15:02:59.977062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.041 [2024-07-25 15:02:59.977091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.041 [2024-07-25 15:02:59.977126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.041 [2024-07-25 15:02:59.977155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.041 [2024-07-25 15:02:59.977183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.041 [2024-07-25 15:02:59.977214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.041 [2024-07-25 15:02:59.977245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:08.041 15:02:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:07:08.041 15:02:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:07:08.041 [2024-07-25 15:02:59.978007] ctrlr_bdev.c:
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.041 [2024-07-25 15:02:59.982264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512
> SGL length 1 00:07:08.043 [2024-07-25 15:02:59.982292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.043 [2024-07-25 15:02:59.982322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.043 [2024-07-25 15:02:59.982350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.043 [2024-07-25 15:02:59.982380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.043 [2024-07-25 15:02:59.982408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.043 [2024-07-25 15:02:59.982442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.043 [2024-07-25 15:02:59.982472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.043 [2024-07-25 15:02:59.982500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.043 [2024-07-25 15:02:59.982529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.043 [2024-07-25 15:02:59.982557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.043 [2024-07-25 15:02:59.982587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.043 [2024-07-25 15:02:59.982615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.043 [2024-07-25 15:02:59.982963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.043 [2024-07-25 15:02:59.982993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.043 [2024-07-25 15:02:59.983024] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.043 [2024-07-25 15:02:59.983057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.043 [2024-07-25 15:02:59.983092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.043 [2024-07-25 15:02:59.983121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.043 [2024-07-25 15:02:59.983176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.043 [2024-07-25 15:02:59.983206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.043 [2024-07-25 15:02:59.983238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.043 [2024-07-25 15:02:59.983267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.043 [2024-07-25 15:02:59.983294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.043 [2024-07-25 15:02:59.983324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.043 [2024-07-25 15:02:59.983357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.043 [2024-07-25 15:02:59.983415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.043 [2024-07-25 15:02:59.983443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.043 [2024-07-25 15:02:59.983472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.043 [2024-07-25 15:02:59.983502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:08.043 [2024-07-25 15:02:59.983530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.043 [2024-07-25 15:02:59.983560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.043 [2024-07-25 15:02:59.983591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.043 [2024-07-25 15:02:59.983619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.043 [2024-07-25 15:02:59.983650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.043 [2024-07-25 15:02:59.983678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.043 [2024-07-25 15:02:59.983711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.043 [2024-07-25 15:02:59.983740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.043 [2024-07-25 15:02:59.983768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.043 [2024-07-25 15:02:59.983799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.043 [2024-07-25 15:02:59.983828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.043 [2024-07-25 15:02:59.983855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.043 [2024-07-25 15:02:59.983881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.043 [2024-07-25 15:02:59.983913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.043 [2024-07-25 
15:02:59.983939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.043 [2024-07-25 15:02:59.983970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.043 [2024-07-25 15:02:59.983999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.043 [2024-07-25 15:02:59.984024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.043 [2024-07-25 15:02:59.984053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.043 [2024-07-25 15:02:59.984085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.043 [2024-07-25 15:02:59.984122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.043 [2024-07-25 15:02:59.984150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.043 [2024-07-25 15:02:59.984178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.043 [2024-07-25 15:02:59.984213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.043 [2024-07-25 15:02:59.984244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.043 [2024-07-25 15:02:59.984274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.044 [2024-07-25 15:02:59.984304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.044 [2024-07-25 15:02:59.984334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.044 [2024-07-25 15:02:59.984361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.044 [2024-07-25 15:02:59.984391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.044 [2024-07-25 15:02:59.984419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.044 [2024-07-25 15:02:59.984449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.044 [2024-07-25 15:02:59.984474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.044 [2024-07-25 15:02:59.984504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.044 [2024-07-25 15:02:59.984535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.044 [2024-07-25 15:02:59.984570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.044 [2024-07-25 15:02:59.984597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.044 [2024-07-25 15:02:59.984624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.044 [2024-07-25 15:02:59.984656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.044 [2024-07-25 15:02:59.984685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.044 [2024-07-25 15:02:59.984715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.044 [2024-07-25 15:02:59.984745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.044 [2024-07-25 15:02:59.984774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.044 
[2024-07-25 15:02:59.984827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.044 [2024-07-25 15:02:59.984857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.044 [2024-07-25 15:02:59.984888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.044 [2024-07-25 15:02:59.985249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.044 [2024-07-25 15:02:59.985280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.044 [2024-07-25 15:02:59.985309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.044 [2024-07-25 15:02:59.985339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.044 [2024-07-25 15:02:59.985368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.044 [2024-07-25 15:02:59.985396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.044 [2024-07-25 15:02:59.985425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.044 [2024-07-25 15:02:59.985453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.044 [2024-07-25 15:02:59.985484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.044 [2024-07-25 15:02:59.985512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.044 [2024-07-25 15:02:59.985541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.044 [2024-07-25 15:02:59.985570] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.044 [2024-07-25 15:02:59.985597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.044 [2024-07-25 15:02:59.985627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.044 [2024-07-25 15:02:59.985655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.044 [2024-07-25 15:02:59.985687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.044 [2024-07-25 15:02:59.985715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.044 [2024-07-25 15:02:59.985745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.044 [2024-07-25 15:02:59.985785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.044 [2024-07-25 15:02:59.985815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.044 [2024-07-25 15:02:59.985855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.044 [2024-07-25 15:02:59.985887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.044 [2024-07-25 15:02:59.985919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.044 [2024-07-25 15:02:59.985979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.044 [2024-07-25 15:02:59.986009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.044 [2024-07-25 15:02:59.986039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:08.044 [2024-07-25 15:02:59.986071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.044 [2024-07-25 15:02:59.986100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.044 [2024-07-25 15:02:59.986134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.044 [2024-07-25 15:02:59.986169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.044 [2024-07-25 15:02:59.986197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.044 [2024-07-25 15:02:59.986231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.044 [2024-07-25 15:02:59.986263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.044 [2024-07-25 15:02:59.986290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.044 [2024-07-25 15:02:59.986318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.044 [2024-07-25 15:02:59.986348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.044 [2024-07-25 15:02:59.986375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.044 [2024-07-25 15:02:59.986404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.044 [2024-07-25 15:02:59.986430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.044 [2024-07-25 15:02:59.986465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.044 [2024-07-25 15:02:59.986500] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.044 [2024-07-25 15:02:59.986529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.044 [2024-07-25 15:02:59.986557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.044 [2024-07-25 15:02:59.986588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.044 [2024-07-25 15:02:59.986619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.044 [2024-07-25 15:02:59.986649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.044 [2024-07-25 15:02:59.986681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.044 [2024-07-25 15:02:59.986711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.044 [2024-07-25 15:02:59.986741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.044 [2024-07-25 15:02:59.986769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.044 [2024-07-25 15:02:59.986797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.044 [2024-07-25 15:02:59.986833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.044 [2024-07-25 15:02:59.986864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.044 [2024-07-25 15:02:59.986892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.044 [2024-07-25 15:02:59.986926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:08.044 [2024-07-25 15:02:59.986955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.044 [2024-07-25 15:02:59.986998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.044 [2024-07-25 15:02:59.987025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.044 [2024-07-25 15:02:59.987054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.044 [2024-07-25 15:02:59.987084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.044 [2024-07-25 15:02:59.987115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.044 [2024-07-25 15:02:59.987146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.044 [2024-07-25 15:02:59.987175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.044 [2024-07-25 15:02:59.987209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.045 [2024-07-25 15:02:59.987554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.045 [2024-07-25 15:02:59.987584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.045 [2024-07-25 15:02:59.987610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.045 [2024-07-25 15:02:59.987639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.045 [2024-07-25 15:02:59.987673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.045 [2024-07-25 
15:02:59.987708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.045 [2024-07-25 15:02:59.987738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.045 [2024-07-25 15:02:59.987767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.045 [2024-07-25 15:02:59.987797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.045 [2024-07-25 15:02:59.987830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.045 [2024-07-25 15:02:59.987860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.045 [2024-07-25 15:02:59.987891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.045 [2024-07-25 15:02:59.987922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.045 [2024-07-25 15:02:59.987954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.045 [2024-07-25 15:02:59.987983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.045 [2024-07-25 15:02:59.988017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.045 [2024-07-25 15:02:59.988048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.045 [2024-07-25 15:02:59.988077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.045 [2024-07-25 15:02:59.988109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.045 [2024-07-25 15:02:59.988137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.045 [2024-07-25 15:02:59.988171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.045 [2024-07-25 15:02:59.988203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.045 [2024-07-25 15:02:59.988231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.045 [2024-07-25 15:02:59.988265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.045 [2024-07-25 15:02:59.988296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.045 [2024-07-25 15:02:59.988328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.045 [2024-07-25 15:02:59.988357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.045 [2024-07-25 15:02:59.988385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.045 [2024-07-25 15:02:59.988414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.045 [2024-07-25 15:02:59.988442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.045 [2024-07-25 15:02:59.988474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.045 [2024-07-25 15:02:59.988502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.045 [2024-07-25 15:02:59.988530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.045 [2024-07-25 15:02:59.988558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.045 
[2024-07-25 15:02:59.988587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.045 [2024-07-25 15:02:59.988615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.045 [2024-07-25 15:02:59.988645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.045 [2024-07-25 15:02:59.988673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.045 [2024-07-25 15:02:59.988704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.045 [2024-07-25 15:02:59.988732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.045 [2024-07-25 15:02:59.988759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.045 [2024-07-25 15:02:59.988788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.045 [2024-07-25 15:02:59.988827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.045 [2024-07-25 15:02:59.988857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.045 [2024-07-25 15:02:59.988885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.045 [2024-07-25 15:02:59.988914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.045 [2024-07-25 15:02:59.988943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.045 [2024-07-25 15:02:59.988970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.045 [2024-07-25 15:02:59.989010] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.045 [2024-07-25 15:02:59.989040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.045 [2024-07-25 15:02:59.989068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.045 [2024-07-25 15:02:59.989095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.045 [2024-07-25 15:02:59.989122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.045 [2024-07-25 15:02:59.989153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.045 [2024-07-25 15:02:59.989184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.045 [2024-07-25 15:02:59.989216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.045 [2024-07-25 15:02:59.989247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.045 [2024-07-25 15:02:59.989276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.045 [2024-07-25 15:02:59.989305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.045 [2024-07-25 15:02:59.989334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.045 [2024-07-25 15:02:59.989365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.045 [2024-07-25 15:02:59.989394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.045 [2024-07-25 15:02:59.989424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:08.045 [2024-07-25 15:02:59.989790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 
[... identical "Read NLB 1 * block size 512 > SGL length 1" error repeated through 2024-07-25 15:03:00.001315 ...]
> SGL length 1 00:07:08.049 [2024-07-25 15:03:00.001346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.049 [2024-07-25 15:03:00.001377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.049 [2024-07-25 15:03:00.001409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.049 [2024-07-25 15:03:00.001435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.049 [2024-07-25 15:03:00.001464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.049 [2024-07-25 15:03:00.001495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.049 [2024-07-25 15:03:00.001525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.049 [2024-07-25 15:03:00.001555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.049 [2024-07-25 15:03:00.001584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.049 [2024-07-25 15:03:00.001616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.049 [2024-07-25 15:03:00.001646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.049 [2024-07-25 15:03:00.001700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.049 [2024-07-25 15:03:00.001728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.049 [2024-07-25 15:03:00.001758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.049 [2024-07-25 15:03:00.001788] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.049 [2024-07-25 15:03:00.002163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.049 [2024-07-25 15:03:00.002205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.049 [2024-07-25 15:03:00.002235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.049 [2024-07-25 15:03:00.002265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.049 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:07:08.049 [2024-07-25 15:03:00.002297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.049 [2024-07-25 15:03:00.002326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.049 [2024-07-25 15:03:00.002356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.049 [2024-07-25 15:03:00.002384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.049 [2024-07-25 15:03:00.002413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.049 [2024-07-25 15:03:00.002447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.049 [2024-07-25 15:03:00.002475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.049 [2024-07-25 15:03:00.002511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.049 [2024-07-25 15:03:00.002546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.049 [2024-07-25 
15:03:00.002577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.049 [2024-07-25 15:03:00.002605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.049 [2024-07-25 15:03:00.002643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.049 [2024-07-25 15:03:00.002673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.049 [2024-07-25 15:03:00.002702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.049 [2024-07-25 15:03:00.002733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.049 [2024-07-25 15:03:00.002765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.049 [2024-07-25 15:03:00.002798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.049 [2024-07-25 15:03:00.002826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.049 [2024-07-25 15:03:00.002862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.049 [2024-07-25 15:03:00.002891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.049 [2024-07-25 15:03:00.002919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.049 [2024-07-25 15:03:00.002951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.049 [2024-07-25 15:03:00.002980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.049 [2024-07-25 15:03:00.003010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.049 [2024-07-25 15:03:00.003042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.050 [2024-07-25 15:03:00.003072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.050 [2024-07-25 15:03:00.003102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.050 [2024-07-25 15:03:00.003141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.050 [2024-07-25 15:03:00.003174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.050 [2024-07-25 15:03:00.003210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.050 [2024-07-25 15:03:00.003239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.050 [2024-07-25 15:03:00.003266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.050 [2024-07-25 15:03:00.003295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.050 [2024-07-25 15:03:00.003330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.050 [2024-07-25 15:03:00.003362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.050 [2024-07-25 15:03:00.003392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.050 [2024-07-25 15:03:00.003422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.050 [2024-07-25 15:03:00.003452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.050 
[2024-07-25 15:03:00.003483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.050 [2024-07-25 15:03:00.003515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.050 [2024-07-25 15:03:00.003545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.050 [2024-07-25 15:03:00.003574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.050 [2024-07-25 15:03:00.003606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.050 [2024-07-25 15:03:00.003637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.050 [2024-07-25 15:03:00.003669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.050 [2024-07-25 15:03:00.003700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.050 [2024-07-25 15:03:00.003735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.050 [2024-07-25 15:03:00.003765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.050 [2024-07-25 15:03:00.003794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.050 [2024-07-25 15:03:00.003823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.050 [2024-07-25 15:03:00.003851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.050 [2024-07-25 15:03:00.003885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.050 [2024-07-25 15:03:00.003916] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.050 [2024-07-25 15:03:00.003943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.050 [2024-07-25 15:03:00.003975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.050 [2024-07-25 15:03:00.004004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.050 [2024-07-25 15:03:00.004036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.050 [2024-07-25 15:03:00.004065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.050 [2024-07-25 15:03:00.004094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.050 [2024-07-25 15:03:00.004555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.050 [2024-07-25 15:03:00.004594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.050 [2024-07-25 15:03:00.004622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.050 [2024-07-25 15:03:00.004658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.050 [2024-07-25 15:03:00.004688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.050 [2024-07-25 15:03:00.004717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.050 [2024-07-25 15:03:00.004744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.050 [2024-07-25 15:03:00.004772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:08.050 [2024-07-25 15:03:00.004804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.050 [2024-07-25 15:03:00.004832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.050 [2024-07-25 15:03:00.004874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.050 [2024-07-25 15:03:00.004902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.050 [2024-07-25 15:03:00.004935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.050 [2024-07-25 15:03:00.004965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.050 [2024-07-25 15:03:00.004993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.050 [2024-07-25 15:03:00.005023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.050 [2024-07-25 15:03:00.005050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.050 [2024-07-25 15:03:00.005079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.050 [2024-07-25 15:03:00.005108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.050 [2024-07-25 15:03:00.005139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.050 [2024-07-25 15:03:00.005168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.050 [2024-07-25 15:03:00.005197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.050 [2024-07-25 15:03:00.005228] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.050 [2024-07-25 15:03:00.005257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.050 [2024-07-25 15:03:00.005285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.050 [2024-07-25 15:03:00.005320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.050 [2024-07-25 15:03:00.005344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.050 [2024-07-25 15:03:00.005375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.050 [2024-07-25 15:03:00.005408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.050 [2024-07-25 15:03:00.005437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.050 [2024-07-25 15:03:00.005468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.050 [2024-07-25 15:03:00.005493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.050 [2024-07-25 15:03:00.005522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.050 [2024-07-25 15:03:00.005549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.050 [2024-07-25 15:03:00.005576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.050 [2024-07-25 15:03:00.005606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.050 [2024-07-25 15:03:00.005633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:08.050 [2024-07-25 15:03:00.005663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.050 [2024-07-25 15:03:00.005694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.050 [2024-07-25 15:03:00.005723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.050 [2024-07-25 15:03:00.005752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.050 [2024-07-25 15:03:00.005781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.050 [2024-07-25 15:03:00.005813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.050 [2024-07-25 15:03:00.005840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.050 [2024-07-25 15:03:00.005871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.050 [2024-07-25 15:03:00.005900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.050 [2024-07-25 15:03:00.005927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.051 [2024-07-25 15:03:00.005957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.051 [2024-07-25 15:03:00.006010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.051 [2024-07-25 15:03:00.006036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.051 [2024-07-25 15:03:00.006087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.051 [2024-07-25 
15:03:00.006116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.051 [2024-07-25 15:03:00.006146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.051 [2024-07-25 15:03:00.006174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.051 [2024-07-25 15:03:00.006211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.051 [2024-07-25 15:03:00.006236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.051 [2024-07-25 15:03:00.006268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.051 [2024-07-25 15:03:00.006294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.051 [2024-07-25 15:03:00.006325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.051 [2024-07-25 15:03:00.006355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.051 [2024-07-25 15:03:00.006382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.051 [2024-07-25 15:03:00.006411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.051 [2024-07-25 15:03:00.006440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.051 [2024-07-25 15:03:00.006467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.051 [2024-07-25 15:03:00.006816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.051 [2024-07-25 15:03:00.006853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.051 [2024-07-25 15:03:00.006880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.051 [2024-07-25 15:03:00.006910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.051 [2024-07-25 15:03:00.006939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.051 [2024-07-25 15:03:00.006967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.051 [2024-07-25 15:03:00.006992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.051 [2024-07-25 15:03:00.007021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.051 [2024-07-25 15:03:00.007053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.051 [2024-07-25 15:03:00.007084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.051 [2024-07-25 15:03:00.007109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.051 [2024-07-25 15:03:00.007138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.051 [2024-07-25 15:03:00.007163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.051 [2024-07-25 15:03:00.007212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.051 [2024-07-25 15:03:00.007242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.051 [2024-07-25 15:03:00.007271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.051 
[2024-07-25 15:03:00.007301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.051 [2024-07-25 15:03:00.007334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.051 [2024-07-25 15:03:00.007360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.051 [2024-07-25 15:03:00.007408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.051 [2024-07-25 15:03:00.007433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.051 [2024-07-25 15:03:00.007466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.051 [2024-07-25 15:03:00.007490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.051 [2024-07-25 15:03:00.007517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.051 [2024-07-25 15:03:00.007544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.051 [2024-07-25 15:03:00.007569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.051 [2024-07-25 15:03:00.007598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.051 [2024-07-25 15:03:00.007626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.051 [2024-07-25 15:03:00.007657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.051 [2024-07-25 15:03:00.007688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.051 [2024-07-25 15:03:00.007713] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.051 [2024-07-25 15:03:00.007740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.051 [2024-07-25 15:03:00.007762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.051 [2024-07-25 15:03:00.007787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.051 [2024-07-25 15:03:00.007816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.051 [2024-07-25 15:03:00.007854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.051 [2024-07-25 15:03:00.007883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.051 [2024-07-25 15:03:00.007910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.051 [2024-07-25 15:03:00.007935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.051 [2024-07-25 15:03:00.007957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.051 [2024-07-25 15:03:00.007979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.051 [2024-07-25 15:03:00.008001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.051 [2024-07-25 15:03:00.008032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.051 [2024-07-25 15:03:00.008061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.051 [2024-07-25 15:03:00.008088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:08.051 [2024-07-25 15:03:00.008115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.051 --- previous message repeated for each subsequent read command (timestamps [2024-07-25 15:03:00.008148] through [2024-07-25 15:03:00.018416]); entries identical except timestamps ---
> SGL length 1 00:07:08.056 [2024-07-25 15:03:00.018443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.056 [2024-07-25 15:03:00.018490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.056 [2024-07-25 15:03:00.018516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.056 [2024-07-25 15:03:00.018545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.056 [2024-07-25 15:03:00.018571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.056 [2024-07-25 15:03:00.018596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.056 [2024-07-25 15:03:00.018623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.056 [2024-07-25 15:03:00.018652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.056 [2024-07-25 15:03:00.018678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.056 [2024-07-25 15:03:00.018705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.056 [2024-07-25 15:03:00.018735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.056 [2024-07-25 15:03:00.018768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.056 [2024-07-25 15:03:00.018804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.056 [2024-07-25 15:03:00.018837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.056 [2024-07-25 15:03:00.018869] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.056 [2024-07-25 15:03:00.018896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.056 [2024-07-25 15:03:00.018921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.056 [2024-07-25 15:03:00.018948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.056 [2024-07-25 15:03:00.018977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.056 [2024-07-25 15:03:00.019005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.056 [2024-07-25 15:03:00.019032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.056 [2024-07-25 15:03:00.019058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.056 [2024-07-25 15:03:00.019088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.056 [2024-07-25 15:03:00.019116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.056 [2024-07-25 15:03:00.019143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.056 [2024-07-25 15:03:00.019176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.056 [2024-07-25 15:03:00.019204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.056 [2024-07-25 15:03:00.019232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.056 [2024-07-25 15:03:00.019259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:08.056 [2024-07-25 15:03:00.019289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.056 [2024-07-25 15:03:00.019316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.056 [2024-07-25 15:03:00.019344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.056 [2024-07-25 15:03:00.019374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.056 [2024-07-25 15:03:00.019818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.056 [2024-07-25 15:03:00.019849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.056 [2024-07-25 15:03:00.019879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.056 [2024-07-25 15:03:00.019907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.056 [2024-07-25 15:03:00.019934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.056 [2024-07-25 15:03:00.019963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.056 [2024-07-25 15:03:00.019997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.056 [2024-07-25 15:03:00.020026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.056 [2024-07-25 15:03:00.020063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.056 [2024-07-25 15:03:00.020090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.056 [2024-07-25 
15:03:00.020143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.056 [2024-07-25 15:03:00.020171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.056 [2024-07-25 15:03:00.020210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.056 [2024-07-25 15:03:00.020238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.056 [2024-07-25 15:03:00.020267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.056 [2024-07-25 15:03:00.020296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.056 [2024-07-25 15:03:00.020326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.056 [2024-07-25 15:03:00.020353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.056 [2024-07-25 15:03:00.020386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.056 [2024-07-25 15:03:00.020414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.056 [2024-07-25 15:03:00.020455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.056 [2024-07-25 15:03:00.020482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.056 [2024-07-25 15:03:00.020507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.056 [2024-07-25 15:03:00.020534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.056 [2024-07-25 15:03:00.020560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.056 [2024-07-25 15:03:00.020588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.056 [2024-07-25 15:03:00.020613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.056 [2024-07-25 15:03:00.020640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.056 [2024-07-25 15:03:00.020667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.056 [2024-07-25 15:03:00.020696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.056 [2024-07-25 15:03:00.020729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.056 [2024-07-25 15:03:00.020767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.056 [2024-07-25 15:03:00.020795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.056 [2024-07-25 15:03:00.020819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.056 [2024-07-25 15:03:00.020847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.056 [2024-07-25 15:03:00.020878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.056 [2024-07-25 15:03:00.020909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.056 [2024-07-25 15:03:00.020939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.056 [2024-07-25 15:03:00.020967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.056 
[2024-07-25 15:03:00.020998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.056 [2024-07-25 15:03:00.021026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.056 [2024-07-25 15:03:00.021055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.056 [2024-07-25 15:03:00.021078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.056 [2024-07-25 15:03:00.021107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.056 [2024-07-25 15:03:00.021136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.056 [2024-07-25 15:03:00.021166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.057 [2024-07-25 15:03:00.021194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.057 [2024-07-25 15:03:00.021231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.057 [2024-07-25 15:03:00.021259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.057 [2024-07-25 15:03:00.021291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.057 [2024-07-25 15:03:00.021321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.057 [2024-07-25 15:03:00.021355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.057 [2024-07-25 15:03:00.021385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.057 [2024-07-25 15:03:00.021421] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.057 [2024-07-25 15:03:00.021450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.057 [2024-07-25 15:03:00.021504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.057 [2024-07-25 15:03:00.021530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.057 [2024-07-25 15:03:00.021558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.057 [2024-07-25 15:03:00.021587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.057 [2024-07-25 15:03:00.021614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.057 [2024-07-25 15:03:00.021640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.057 [2024-07-25 15:03:00.021669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.057 [2024-07-25 15:03:00.021695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.057 [2024-07-25 15:03:00.022055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.057 [2024-07-25 15:03:00.022082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.057 [2024-07-25 15:03:00.022109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.057 [2024-07-25 15:03:00.022137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.057 [2024-07-25 15:03:00.022171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:08.057 [2024-07-25 15:03:00.022216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.057 [2024-07-25 15:03:00.022251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.057 [2024-07-25 15:03:00.022281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.057 [2024-07-25 15:03:00.022306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.057 [2024-07-25 15:03:00.022336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.057 [2024-07-25 15:03:00.022362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.057 [2024-07-25 15:03:00.022396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.057 [2024-07-25 15:03:00.022422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.057 [2024-07-25 15:03:00.022452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.057 [2024-07-25 15:03:00.022478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.057 [2024-07-25 15:03:00.022505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.057 [2024-07-25 15:03:00.022533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.057 [2024-07-25 15:03:00.022563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.057 [2024-07-25 15:03:00.022590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.057 [2024-07-25 15:03:00.022617] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.057 [2024-07-25 15:03:00.022646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.057 [2024-07-25 15:03:00.022671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.057 [2024-07-25 15:03:00.022701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.057 [2024-07-25 15:03:00.022730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.057 [2024-07-25 15:03:00.022757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.057 [2024-07-25 15:03:00.022786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.057 [2024-07-25 15:03:00.022815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.057 [2024-07-25 15:03:00.022843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.057 [2024-07-25 15:03:00.022870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.057 [2024-07-25 15:03:00.022899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.057 [2024-07-25 15:03:00.022924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.057 [2024-07-25 15:03:00.022954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.057 [2024-07-25 15:03:00.022982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.057 [2024-07-25 15:03:00.023012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:08.057 [2024-07-25 15:03:00.023038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.057 [2024-07-25 15:03:00.023070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.057 [2024-07-25 15:03:00.023095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.057 [2024-07-25 15:03:00.023122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.057 [2024-07-25 15:03:00.023148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.057 [2024-07-25 15:03:00.023173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.057 [2024-07-25 15:03:00.023206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.057 [2024-07-25 15:03:00.023232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.057 [2024-07-25 15:03:00.023278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.057 [2024-07-25 15:03:00.023305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.057 [2024-07-25 15:03:00.023339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.057 [2024-07-25 15:03:00.023364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.057 [2024-07-25 15:03:00.023398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.057 [2024-07-25 15:03:00.023427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.057 [2024-07-25 
15:03:00.023454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.057 [2024-07-25 15:03:00.023484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.057 [2024-07-25 15:03:00.023510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.057 [2024-07-25 15:03:00.023536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.057 [2024-07-25 15:03:00.023563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.057 [2024-07-25 15:03:00.023591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.057 [2024-07-25 15:03:00.023617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.057 [2024-07-25 15:03:00.023640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.057 [2024-07-25 15:03:00.023670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.057 [2024-07-25 15:03:00.023699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.057 [2024-07-25 15:03:00.023725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.057 [2024-07-25 15:03:00.023753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.058 [2024-07-25 15:03:00.023778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.058 [2024-07-25 15:03:00.023808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.058 [2024-07-25 15:03:00.023834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.058 [2024-07-25 15:03:00.023860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.058 [2024-07-25 15:03:00.024254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.058 [2024-07-25 15:03:00.024285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.058 [2024-07-25 15:03:00.024312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.058 [2024-07-25 15:03:00.024342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.058 [2024-07-25 15:03:00.024368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.058 [2024-07-25 15:03:00.024398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.058 [2024-07-25 15:03:00.024425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.058 [2024-07-25 15:03:00.024450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.058 [2024-07-25 15:03:00.024477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.058 [2024-07-25 15:03:00.024508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.058 [2024-07-25 15:03:00.024530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.058 [2024-07-25 15:03:00.024561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.058 [2024-07-25 15:03:00.024591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.058 
[2024-07-25 15:03:00.024621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.058 [2024-07-25 15:03:00.024647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.058 [2024-07-25 15:03:00.024674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.058 [2024-07-25 15:03:00.024700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.058 [2024-07-25 15:03:00.024728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.058 [2024-07-25 15:03:00.024756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.058 [2024-07-25 15:03:00.024785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.058 [2024-07-25 15:03:00.024838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.058 [2024-07-25 15:03:00.024866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.058 [2024-07-25 15:03:00.024894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.058 [2024-07-25 15:03:00.024922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.058 [2024-07-25 15:03:00.024950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.058 [2024-07-25 15:03:00.024977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.058 [2024-07-25 15:03:00.025006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.058 [2024-07-25 15:03:00.025033] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.058 [2024-07-25 15:03:00.025062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.058 [2024-07-25 15:03:00.025093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.058 [2024-07-25 15:03:00.025140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.058 [2024-07-25 15:03:00.025168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.058 [2024-07-25 15:03:00.025222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.058 [2024-07-25 15:03:00.025248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.058 [2024-07-25 15:03:00.025286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.058 [2024-07-25 15:03:00.025317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.058 [2024-07-25 15:03:00.025347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.058 [2024-07-25 15:03:00.025373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.058 [2024-07-25 15:03:00.025402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.058 [2024-07-25 15:03:00.025427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.058 [2024-07-25 15:03:00.025457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.058 [2024-07-25 15:03:00.025487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
ctrlr_bdev.c:
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.062 [2024-07-25 15:03:00.035567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.062 [2024-07-25 15:03:00.035594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.062 [2024-07-25 15:03:00.035623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.062 [2024-07-25 15:03:00.035651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.062 [2024-07-25 15:03:00.035675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.062 [2024-07-25 15:03:00.035704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.062 [2024-07-25 15:03:00.035732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.062 [2024-07-25 15:03:00.035758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.062 [2024-07-25 15:03:00.035789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.062 [2024-07-25 15:03:00.035816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.062 [2024-07-25 15:03:00.035842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.062 [2024-07-25 15:03:00.035868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.062 [2024-07-25 15:03:00.035899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.062 [2024-07-25 15:03:00.035928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:08.062 [2024-07-25 15:03:00.035954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.062 [2024-07-25 15:03:00.035980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.062 [2024-07-25 15:03:00.036008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.062 [2024-07-25 15:03:00.036035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.062 [2024-07-25 15:03:00.036063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.062 [2024-07-25 15:03:00.036092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.062 [2024-07-25 15:03:00.036122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.062 [2024-07-25 15:03:00.036151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.062 [2024-07-25 15:03:00.036205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.062 [2024-07-25 15:03:00.036232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.062 [2024-07-25 15:03:00.036262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.062 [2024-07-25 15:03:00.036294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.062 [2024-07-25 15:03:00.036325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.062 [2024-07-25 15:03:00.036351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.062 [2024-07-25 15:03:00.036379] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.062 [2024-07-25 15:03:00.036409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.062 [2024-07-25 15:03:00.036436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.062 [2024-07-25 15:03:00.036464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.062 [2024-07-25 15:03:00.036487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.062 [2024-07-25 15:03:00.036516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.062 [2024-07-25 15:03:00.036545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.062 [2024-07-25 15:03:00.036578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.062 [2024-07-25 15:03:00.036606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.062 [2024-07-25 15:03:00.036639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.062 [2024-07-25 15:03:00.036668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.062 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:07:08.062 [2024-07-25 15:03:00.036696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.062 [2024-07-25 15:03:00.036724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.062 [2024-07-25 15:03:00.036756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.062 [2024-07-25 
15:03:00.036787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.062 [2024-07-25 15:03:00.036814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.062 [2024-07-25 15:03:00.036842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.062 [2024-07-25 15:03:00.036867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.062 [2024-07-25 15:03:00.036894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.062 [2024-07-25 15:03:00.036920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.062 [2024-07-25 15:03:00.036943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.062 [2024-07-25 15:03:00.036968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.062 [2024-07-25 15:03:00.037296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.062 [2024-07-25 15:03:00.037327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.062 [2024-07-25 15:03:00.037370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.062 [2024-07-25 15:03:00.037401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.063 [2024-07-25 15:03:00.037433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.063 [2024-07-25 15:03:00.037460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.063 [2024-07-25 15:03:00.037490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.063 [2024-07-25 15:03:00.037517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.063 [2024-07-25 15:03:00.037544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.063 [2024-07-25 15:03:00.037579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.063 [2024-07-25 15:03:00.037605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.063 [2024-07-25 15:03:00.037636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.063 [2024-07-25 15:03:00.037664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.063 [2024-07-25 15:03:00.037700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.063 [2024-07-25 15:03:00.037729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.063 [2024-07-25 15:03:00.037758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.063 [2024-07-25 15:03:00.037786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.063 [2024-07-25 15:03:00.037813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.063 [2024-07-25 15:03:00.037838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.063 [2024-07-25 15:03:00.037868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.063 [2024-07-25 15:03:00.037896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.063 
[2024-07-25 15:03:00.037923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.063 [2024-07-25 15:03:00.037952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.063 [2024-07-25 15:03:00.037979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.063 [2024-07-25 15:03:00.038008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.063 [2024-07-25 15:03:00.038035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.063 [2024-07-25 15:03:00.038062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.063 [2024-07-25 15:03:00.038089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.063 [2024-07-25 15:03:00.038117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.063 [2024-07-25 15:03:00.038143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.063 [2024-07-25 15:03:00.038172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.063 [2024-07-25 15:03:00.038204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.063 [2024-07-25 15:03:00.038233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.063 [2024-07-25 15:03:00.038260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.063 [2024-07-25 15:03:00.038290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.063 [2024-07-25 15:03:00.038322] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.063 [2024-07-25 15:03:00.038346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.063 [2024-07-25 15:03:00.038375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.063 [2024-07-25 15:03:00.038400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.063 [2024-07-25 15:03:00.038427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.063 [2024-07-25 15:03:00.038456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.063 [2024-07-25 15:03:00.038491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.063 [2024-07-25 15:03:00.038519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.063 [2024-07-25 15:03:00.038548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.063 [2024-07-25 15:03:00.038574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.063 [2024-07-25 15:03:00.038607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.063 [2024-07-25 15:03:00.038638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.063 [2024-07-25 15:03:00.038668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.063 [2024-07-25 15:03:00.038700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.063 [2024-07-25 15:03:00.038730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:08.063 [2024-07-25 15:03:00.038760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.063 [2024-07-25 15:03:00.039104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.063 [2024-07-25 15:03:00.039138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.063 [2024-07-25 15:03:00.039165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.063 [2024-07-25 15:03:00.039193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.063 [2024-07-25 15:03:00.039227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.063 [2024-07-25 15:03:00.039255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.063 [2024-07-25 15:03:00.039288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.063 [2024-07-25 15:03:00.039318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.063 [2024-07-25 15:03:00.039345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.063 [2024-07-25 15:03:00.039375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.063 [2024-07-25 15:03:00.039407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.063 [2024-07-25 15:03:00.039436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.063 [2024-07-25 15:03:00.039474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.063 [2024-07-25 15:03:00.039506] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.063 [2024-07-25 15:03:00.039532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.063 [2024-07-25 15:03:00.039561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.063 [2024-07-25 15:03:00.039592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.063 [2024-07-25 15:03:00.039620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.063 [2024-07-25 15:03:00.039646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.063 [2024-07-25 15:03:00.039684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.063 [2024-07-25 15:03:00.039718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.063 [2024-07-25 15:03:00.039748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.063 [2024-07-25 15:03:00.039777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.063 [2024-07-25 15:03:00.039808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.063 [2024-07-25 15:03:00.039839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.064 [2024-07-25 15:03:00.039869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.064 [2024-07-25 15:03:00.039900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.064 [2024-07-25 15:03:00.039928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:08.064 [2024-07-25 15:03:00.039958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.064 [2024-07-25 15:03:00.039985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.064 [2024-07-25 15:03:00.040018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.064 [2024-07-25 15:03:00.040049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.064 [2024-07-25 15:03:00.040076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.064 [2024-07-25 15:03:00.040102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.064 [2024-07-25 15:03:00.040130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.064 [2024-07-25 15:03:00.040163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.064 [2024-07-25 15:03:00.040191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.064 [2024-07-25 15:03:00.040224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.064 [2024-07-25 15:03:00.040252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.064 [2024-07-25 15:03:00.040288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.064 [2024-07-25 15:03:00.040315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.064 [2024-07-25 15:03:00.040343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.064 [2024-07-25 
15:03:00.040372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.064 [2024-07-25 15:03:00.040402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.064 [2024-07-25 15:03:00.040434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.064 [2024-07-25 15:03:00.040461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.064 [2024-07-25 15:03:00.040822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.064 [2024-07-25 15:03:00.040855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.064 [2024-07-25 15:03:00.040888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.064 [2024-07-25 15:03:00.040916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.064 [2024-07-25 15:03:00.040949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.064 [2024-07-25 15:03:00.040980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.064 [2024-07-25 15:03:00.041010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.064 [2024-07-25 15:03:00.041039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.064 [2024-07-25 15:03:00.041067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.064 [2024-07-25 15:03:00.041097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.064 [2024-07-25 15:03:00.041128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.064 [2024-07-25 15:03:00.041157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.064 [2024-07-25 15:03:00.041183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.064 [2024-07-25 15:03:00.041217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.064 [2024-07-25 15:03:00.041245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.064 [2024-07-25 15:03:00.041289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.064 [2024-07-25 15:03:00.041317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.064 [2024-07-25 15:03:00.041361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.064 [2024-07-25 15:03:00.041391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.064 [2024-07-25 15:03:00.041420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.064 [2024-07-25 15:03:00.041448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.064 [2024-07-25 15:03:00.041475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.064 [2024-07-25 15:03:00.041505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.064 [2024-07-25 15:03:00.041535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.064 [2024-07-25 15:03:00.041565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.064 
[2024-07-25 15:03:00.041596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.064 [2024-07-25 15:03:00.041623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.064 [2024-07-25 15:03:00.041652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.064 [2024-07-25 15:03:00.041686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.064 [2024-07-25 15:03:00.041714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.064 [2024-07-25 15:03:00.041754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.064 [2024-07-25 15:03:00.041782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.064 [2024-07-25 15:03:00.041838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.064 [2024-07-25 15:03:00.041865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.064 [2024-07-25 15:03:00.041914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.064 [2024-07-25 15:03:00.041941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.064 [2024-07-25 15:03:00.041974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.064 [2024-07-25 15:03:00.042000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.064 [2024-07-25 15:03:00.042034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.064 [2024-07-25 15:03:00.042063] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.064 [2024-07-25 15:03:00.042091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... previous message repeated verbatim through 00:07:08.068 / 15:03:00.052952; only the timestamps differ ...]
00:07:08.068 [2024-07-25 15:03:00.052977] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.068 [2024-07-25 15:03:00.053004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.068 [2024-07-25 15:03:00.053030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.068 [2024-07-25 15:03:00.053056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.068 [2024-07-25 15:03:00.053081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.068 [2024-07-25 15:03:00.053106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.068 [2024-07-25 15:03:00.053133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.068 [2024-07-25 15:03:00.053163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.068 [2024-07-25 15:03:00.053190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.068 [2024-07-25 15:03:00.053223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.068 [2024-07-25 15:03:00.053253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.068 [2024-07-25 15:03:00.053287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.068 [2024-07-25 15:03:00.053319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.068 [2024-07-25 15:03:00.053355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.068 [2024-07-25 15:03:00.053385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:08.068 [2024-07-25 15:03:00.053411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.068 [2024-07-25 15:03:00.053439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.068 [2024-07-25 15:03:00.053467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.068 [2024-07-25 15:03:00.053494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.068 [2024-07-25 15:03:00.053525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.068 [2024-07-25 15:03:00.053554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.068 [2024-07-25 15:03:00.053583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.068 [2024-07-25 15:03:00.053610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.068 [2024-07-25 15:03:00.053640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.068 [2024-07-25 15:03:00.053666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.068 [2024-07-25 15:03:00.053705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.068 [2024-07-25 15:03:00.053732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.068 [2024-07-25 15:03:00.053766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.068 [2024-07-25 15:03:00.053795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.068 [2024-07-25 15:03:00.053839] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.068 [2024-07-25 15:03:00.053865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.068 [2024-07-25 15:03:00.053925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.068 [2024-07-25 15:03:00.053951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.068 [2024-07-25 15:03:00.053978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.068 [2024-07-25 15:03:00.054007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.068 [2024-07-25 15:03:00.054035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.068 [2024-07-25 15:03:00.054070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.068 [2024-07-25 15:03:00.054099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.068 [2024-07-25 15:03:00.054126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.068 [2024-07-25 15:03:00.054155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.068 [2024-07-25 15:03:00.054188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.068 [2024-07-25 15:03:00.054220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.068 [2024-07-25 15:03:00.054608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.068 [2024-07-25 15:03:00.054646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:08.068 [2024-07-25 15:03:00.054674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.068 [2024-07-25 15:03:00.054700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.068 [2024-07-25 15:03:00.054730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.068 [2024-07-25 15:03:00.054758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.069 [2024-07-25 15:03:00.054784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.069 [2024-07-25 15:03:00.054809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.069 [2024-07-25 15:03:00.054836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.069 [2024-07-25 15:03:00.054865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.069 [2024-07-25 15:03:00.054894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.069 [2024-07-25 15:03:00.054924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.069 [2024-07-25 15:03:00.054951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.069 [2024-07-25 15:03:00.054976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.069 [2024-07-25 15:03:00.055005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.069 [2024-07-25 15:03:00.055034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.069 [2024-07-25 
15:03:00.055063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.069 [2024-07-25 15:03:00.055092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.069 [2024-07-25 15:03:00.055120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.069 [2024-07-25 15:03:00.055149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.069 [2024-07-25 15:03:00.055176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.069 [2024-07-25 15:03:00.055209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.069 [2024-07-25 15:03:00.055239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.069 [2024-07-25 15:03:00.055270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.069 [2024-07-25 15:03:00.055299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.069 [2024-07-25 15:03:00.055335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.069 [2024-07-25 15:03:00.055360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.069 [2024-07-25 15:03:00.055393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.069 [2024-07-25 15:03:00.055419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.069 [2024-07-25 15:03:00.055457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.069 [2024-07-25 15:03:00.055485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.069 [2024-07-25 15:03:00.055515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.069 [2024-07-25 15:03:00.055539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.069 [2024-07-25 15:03:00.055568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.069 [2024-07-25 15:03:00.055597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.069 [2024-07-25 15:03:00.055629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.069 [2024-07-25 15:03:00.055656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.069 [2024-07-25 15:03:00.055682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.069 [2024-07-25 15:03:00.055710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.069 [2024-07-25 15:03:00.055737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.069 [2024-07-25 15:03:00.055770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.069 [2024-07-25 15:03:00.055804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.069 [2024-07-25 15:03:00.055839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.069 [2024-07-25 15:03:00.055864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.069 [2024-07-25 15:03:00.055887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.069 
[2024-07-25 15:03:00.055917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.069 [2024-07-25 15:03:00.055944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.069 [2024-07-25 15:03:00.055988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.069 [2024-07-25 15:03:00.056012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.069 [2024-07-25 15:03:00.056040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.069 [2024-07-25 15:03:00.056066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.069 [2024-07-25 15:03:00.056096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.069 [2024-07-25 15:03:00.056129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.069 [2024-07-25 15:03:00.056156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.069 [2024-07-25 15:03:00.056186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.069 [2024-07-25 15:03:00.056213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.069 [2024-07-25 15:03:00.056242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.069 [2024-07-25 15:03:00.056269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.069 [2024-07-25 15:03:00.056296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.069 [2024-07-25 15:03:00.056325] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.069 [2024-07-25 15:03:00.056351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.069 [2024-07-25 15:03:00.056382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.069 [2024-07-25 15:03:00.056410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.069 [2024-07-25 15:03:00.056438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.069 [2024-07-25 15:03:00.056571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.069 [2024-07-25 15:03:00.056599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.069 [2024-07-25 15:03:00.056625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.069 [2024-07-25 15:03:00.056655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.069 [2024-07-25 15:03:00.056684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.069 [2024-07-25 15:03:00.056714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.069 [2024-07-25 15:03:00.056747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.069 [2024-07-25 15:03:00.056776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.069 [2024-07-25 15:03:00.056808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.069 [2024-07-25 15:03:00.056842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:08.069 [2024-07-25 15:03:00.056881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.069 [2024-07-25 15:03:00.056915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.069 [2024-07-25 15:03:00.056944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.069 [2024-07-25 15:03:00.056970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.069 [2024-07-25 15:03:00.057000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.069 [2024-07-25 15:03:00.057611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.069 [2024-07-25 15:03:00.057641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.070 [2024-07-25 15:03:00.057667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.070 [2024-07-25 15:03:00.057693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.070 [2024-07-25 15:03:00.057721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.070 [2024-07-25 15:03:00.057747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.070 [2024-07-25 15:03:00.057774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.070 [2024-07-25 15:03:00.057804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.070 [2024-07-25 15:03:00.057832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.070 [2024-07-25 15:03:00.057862] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.070 [2024-07-25 15:03:00.057889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.070 [2024-07-25 15:03:00.057936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.070 [2024-07-25 15:03:00.057964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.070 [2024-07-25 15:03:00.057993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.070 [2024-07-25 15:03:00.058021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.070 [2024-07-25 15:03:00.058050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.070 [2024-07-25 15:03:00.058079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.070 [2024-07-25 15:03:00.058112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.070 [2024-07-25 15:03:00.058139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.070 [2024-07-25 15:03:00.058191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.070 [2024-07-25 15:03:00.058225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.070 [2024-07-25 15:03:00.058253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.070 [2024-07-25 15:03:00.058281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.070 [2024-07-25 15:03:00.058310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:08.070 [2024-07-25 15:03:00.058340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.070 [2024-07-25 15:03:00.058367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.070 [2024-07-25 15:03:00.058395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.070 [2024-07-25 15:03:00.058421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.070 [2024-07-25 15:03:00.058448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.070 [2024-07-25 15:03:00.058476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.070 [2024-07-25 15:03:00.058502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.070 [2024-07-25 15:03:00.058528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.070 [2024-07-25 15:03:00.058555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.070 [2024-07-25 15:03:00.058581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.070 [2024-07-25 15:03:00.058610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.070 [2024-07-25 15:03:00.058636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.070 [2024-07-25 15:03:00.058664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.070 [2024-07-25 15:03:00.058692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.070 [2024-07-25 
15:03:00.058724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.070 [2024-07-25 15:03:00.058752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.070 [2024-07-25 15:03:00.058782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.070 [2024-07-25 15:03:00.058808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.070 [2024-07-25 15:03:00.058838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.070 [2024-07-25 15:03:00.058864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.070 [2024-07-25 15:03:00.058893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.070 [2024-07-25 15:03:00.058922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.070 [2024-07-25 15:03:00.058950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.070 [2024-07-25 15:03:00.058982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.070 [2024-07-25 15:03:00.059010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.070 [2024-07-25 15:03:00.059034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.070 [2024-07-25 15:03:00.059060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.070 [2024-07-25 15:03:00.059088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.070 [2024-07-25 15:03:00.059112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.070 [2024-07-25 15:03:00.059136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.070 [2024-07-25 15:03:00.059163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.070 [2024-07-25 15:03:00.059191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.070 [2024-07-25 15:03:00.059223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.070 [2024-07-25 15:03:00.059258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.070 [2024-07-25 15:03:00.059292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.070 [2024-07-25 15:03:00.059322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.070 [2024-07-25 15:03:00.059346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.070 [2024-07-25 15:03:00.059374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.070 [2024-07-25 15:03:00.059400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.070 [2024-07-25 15:03:00.059425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.070 [2024-07-25 15:03:00.059548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.070 [2024-07-25 15:03:00.059646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.070 [2024-07-25 15:03:00.059682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.070 
[2024-07-25 15:03:00.059715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.070 [2024-07-25 15:03:00.059741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.070 [2024-07-25 15:03:00.059773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.070 [2024-07-25 15:03:00.059802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.070 [2024-07-25 15:03:00.059829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.070 [2024-07-25 15:03:00.059858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.070 [2024-07-25 15:03:00.059888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.070 [2024-07-25 15:03:00.059920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.070 [2024-07-25 15:03:00.059947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.070 [2024-07-25 15:03:00.059973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.070 [2024-07-25 15:03:00.059999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.070 [2024-07-25 15:03:00.060024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.070 [2024-07-25 15:03:00.060052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.070 [2024-07-25 15:03:00.060086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.070 [2024-07-25 15:03:00.060112] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.070 [2024-07-25 15:03:00.060140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.074 
[2024-07-25 15:03:00.070215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.074 [2024-07-25 15:03:00.070242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.074 [2024-07-25 15:03:00.070270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.074 [2024-07-25 15:03:00.070304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.074 [2024-07-25 15:03:00.070329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.074 [2024-07-25 15:03:00.070356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.074 [2024-07-25 15:03:00.070382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.074 [2024-07-25 15:03:00.070404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.074 [2024-07-25 15:03:00.070432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.074 [2024-07-25 15:03:00.070461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.074 [2024-07-25 15:03:00.070486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.074 [2024-07-25 15:03:00.070818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.074 [2024-07-25 15:03:00.070846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.074 [2024-07-25 15:03:00.070870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.074 [2024-07-25 15:03:00.070897] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.074 [2024-07-25 15:03:00.070920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.074 [2024-07-25 15:03:00.070949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.074 [2024-07-25 15:03:00.070980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.074 [2024-07-25 15:03:00.071007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.074 [2024-07-25 15:03:00.071036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.074 [2024-07-25 15:03:00.071063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.074 [2024-07-25 15:03:00.071092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.074 [2024-07-25 15:03:00.071122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.074 [2024-07-25 15:03:00.071149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.074 [2024-07-25 15:03:00.071176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.074 [2024-07-25 15:03:00.071208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.074 [2024-07-25 15:03:00.071239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.074 [2024-07-25 15:03:00.071268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.074 [2024-07-25 15:03:00.071296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:08.074 [2024-07-25 15:03:00.071324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.074 [2024-07-25 15:03:00.071354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.074 [2024-07-25 15:03:00.071381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.074 [2024-07-25 15:03:00.071406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.074 [2024-07-25 15:03:00.071433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.074 [2024-07-25 15:03:00.071460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.074 [2024-07-25 15:03:00.071491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.074 [2024-07-25 15:03:00.071521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.074 [2024-07-25 15:03:00.071551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.074 [2024-07-25 15:03:00.071580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.074 [2024-07-25 15:03:00.071609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.074 [2024-07-25 15:03:00.071663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.074 [2024-07-25 15:03:00.071691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.074 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:07:08.075 [2024-07-25 15:03:00.071719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 
* block size 512 > SGL length 1 00:07:08.075 [2024-07-25 15:03:00.071748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.075 [2024-07-25 15:03:00.071774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.075 [2024-07-25 15:03:00.071801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.075 [2024-07-25 15:03:00.071831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.075 [2024-07-25 15:03:00.071859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.075 [2024-07-25 15:03:00.071888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.075 [2024-07-25 15:03:00.071916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.075 [2024-07-25 15:03:00.071943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.075 [2024-07-25 15:03:00.071970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.075 [2024-07-25 15:03:00.071997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.075 [2024-07-25 15:03:00.072029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.075 [2024-07-25 15:03:00.072055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.075 [2024-07-25 15:03:00.072085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.075 [2024-07-25 15:03:00.072113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.075 [2024-07-25 
15:03:00.072139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.075 [2024-07-25 15:03:00.072168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.075 [2024-07-25 15:03:00.072195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.075 [2024-07-25 15:03:00.072224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.075 [2024-07-25 15:03:00.072264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.075 [2024-07-25 15:03:00.072293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.075 [2024-07-25 15:03:00.072320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.075 [2024-07-25 15:03:00.072352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.075 [2024-07-25 15:03:00.072385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.075 [2024-07-25 15:03:00.072420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.075 [2024-07-25 15:03:00.072446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.075 [2024-07-25 15:03:00.072476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.075 [2024-07-25 15:03:00.072506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.075 [2024-07-25 15:03:00.072539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.075 [2024-07-25 15:03:00.072570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.075 [2024-07-25 15:03:00.072603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.075 [2024-07-25 15:03:00.072637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.075 [2024-07-25 15:03:00.072676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.075 [2024-07-25 15:03:00.072805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.075 [2024-07-25 15:03:00.073073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.075 [2024-07-25 15:03:00.073102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.075 [2024-07-25 15:03:00.073132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.075 [2024-07-25 15:03:00.073161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.075 [2024-07-25 15:03:00.073191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.075 [2024-07-25 15:03:00.073222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.075 [2024-07-25 15:03:00.073258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.075 [2024-07-25 15:03:00.073287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.075 [2024-07-25 15:03:00.073314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.075 [2024-07-25 15:03:00.073344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.075 
[2024-07-25 15:03:00.073375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.075 [2024-07-25 15:03:00.073406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.075 [2024-07-25 15:03:00.073438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.075 [2024-07-25 15:03:00.073469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.075 [2024-07-25 15:03:00.073498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.075 [2024-07-25 15:03:00.073522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.075 [2024-07-25 15:03:00.073553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.075 [2024-07-25 15:03:00.073584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.075 [2024-07-25 15:03:00.073611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.075 [2024-07-25 15:03:00.073637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.075 [2024-07-25 15:03:00.073667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.075 [2024-07-25 15:03:00.073693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.075 [2024-07-25 15:03:00.073721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.075 [2024-07-25 15:03:00.073749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.075 [2024-07-25 15:03:00.073780] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.075 [2024-07-25 15:03:00.073807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.075 [2024-07-25 15:03:00.073835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.075 [2024-07-25 15:03:00.073862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.075 [2024-07-25 15:03:00.073887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.075 [2024-07-25 15:03:00.073914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.075 [2024-07-25 15:03:00.073942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.075 [2024-07-25 15:03:00.073971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.075 [2024-07-25 15:03:00.073999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.075 [2024-07-25 15:03:00.074027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.075 [2024-07-25 15:03:00.074056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.075 [2024-07-25 15:03:00.074083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.075 [2024-07-25 15:03:00.074111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.075 [2024-07-25 15:03:00.074140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.075 [2024-07-25 15:03:00.074169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:08.075 [2024-07-25 15:03:00.074203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.075 [2024-07-25 15:03:00.074232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.075 [2024-07-25 15:03:00.074262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.075 [2024-07-25 15:03:00.074289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.075 [2024-07-25 15:03:00.074319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.075 [2024-07-25 15:03:00.074348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.075 [2024-07-25 15:03:00.074373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.075 [2024-07-25 15:03:00.074406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.075 [2024-07-25 15:03:00.074436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.075 [2024-07-25 15:03:00.074465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.075 [2024-07-25 15:03:00.074502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.076 [2024-07-25 15:03:00.074533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.076 [2024-07-25 15:03:00.074561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.076 [2024-07-25 15:03:00.074586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.076 [2024-07-25 15:03:00.074619] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.076 [2024-07-25 15:03:00.074650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.076 [2024-07-25 15:03:00.074681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.076 [2024-07-25 15:03:00.074713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.076 [2024-07-25 15:03:00.074745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.076 [2024-07-25 15:03:00.074781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.076 [2024-07-25 15:03:00.074810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.076 [2024-07-25 15:03:00.074841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.076 [2024-07-25 15:03:00.074864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.076 [2024-07-25 15:03:00.075212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.076 [2024-07-25 15:03:00.075242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.076 [2024-07-25 15:03:00.075271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.076 [2024-07-25 15:03:00.075300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.076 [2024-07-25 15:03:00.075329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.076 [2024-07-25 15:03:00.075357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:08.076 [2024-07-25 15:03:00.075402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.076 [2024-07-25 15:03:00.075430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.076 [2024-07-25 15:03:00.075472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.076 [2024-07-25 15:03:00.075500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.076 [2024-07-25 15:03:00.075529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.076 [2024-07-25 15:03:00.075558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.076 [2024-07-25 15:03:00.075586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.076 [2024-07-25 15:03:00.075616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.076 [2024-07-25 15:03:00.075644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.076 [2024-07-25 15:03:00.075672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.076 [2024-07-25 15:03:00.075703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.076 [2024-07-25 15:03:00.075735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.076 [2024-07-25 15:03:00.075762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.076 [2024-07-25 15:03:00.075790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.076 [2024-07-25 
15:03:00.075818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.076 [2024-07-25 15:03:00.075848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.076 [2024-07-25 15:03:00.075874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.076 [2024-07-25 15:03:00.075900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.076 [2024-07-25 15:03:00.075927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.076 [2024-07-25 15:03:00.075952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.076 [2024-07-25 15:03:00.075982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.076 [2024-07-25 15:03:00.076011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.076 [2024-07-25 15:03:00.076040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.076 [2024-07-25 15:03:00.076065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.076 [2024-07-25 15:03:00.076092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.076 [2024-07-25 15:03:00.076120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.076 [2024-07-25 15:03:00.076147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.076 [2024-07-25 15:03:00.076178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.076 [2024-07-25 15:03:00.076207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.076 [2024-07-25 15:03:00.076234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.076 [2024-07-25 15:03:00.076263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.076 [2024-07-25 15:03:00.076292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.076 [2024-07-25 15:03:00.076322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.076 [2024-07-25 15:03:00.076348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.076 [2024-07-25 15:03:00.076382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.076 [2024-07-25 15:03:00.076410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.076 [2024-07-25 15:03:00.076437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.076 [2024-07-25 15:03:00.076468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.076 [2024-07-25 15:03:00.076494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.076 [2024-07-25 15:03:00.076523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.076 [2024-07-25 15:03:00.076552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.076 [2024-07-25 15:03:00.076579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.076 [2024-07-25 15:03:00.076617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.076 
[2024-07-25 15:03:00.076645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.076 [2024-07-25 15:03:00.076683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.076 [2024-07-25 15:03:00.076711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.076 [2024-07-25 15:03:00.076743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.076 [2024-07-25 15:03:00.076770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.076 [2024-07-25 15:03:00.076799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.076 [2024-07-25 15:03:00.076830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.076 [2024-07-25 15:03:00.076859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.076 [2024-07-25 15:03:00.076886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.076 [2024-07-25 15:03:00.076918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.076 [2024-07-25 15:03:00.076968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.076 [2024-07-25 15:03:00.076997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.076 [2024-07-25 15:03:00.077025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.076 [2024-07-25 15:03:00.077052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.076 [2024-07-25 15:03:00.077085] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.076 [2024-07-25 15:03:00.077435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 [... identical error repeated for every subsequent request, 2024-07-25 15:03:00.077463 through 15:03:00.087447 ...] 00:07:08.080 
[2024-07-25 15:03:00.087474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.080 [2024-07-25 15:03:00.087509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.080 [2024-07-25 15:03:00.087539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.080 [2024-07-25 15:03:00.087573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.080 [2024-07-25 15:03:00.087601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.080 [2024-07-25 15:03:00.087633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.080 [2024-07-25 15:03:00.087663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.080 [2024-07-25 15:03:00.087690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.080 [2024-07-25 15:03:00.087720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.080 [2024-07-25 15:03:00.087748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.080 [2024-07-25 15:03:00.087775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.080 [2024-07-25 15:03:00.087803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.080 [2024-07-25 15:03:00.087838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.080 [2024-07-25 15:03:00.087867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.080 [2024-07-25 15:03:00.087904] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.080 [2024-07-25 15:03:00.087932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.080 [2024-07-25 15:03:00.087966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.080 [2024-07-25 15:03:00.087997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.080 [2024-07-25 15:03:00.088022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.080 [2024-07-25 15:03:00.088049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.080 [2024-07-25 15:03:00.088072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.080 [2024-07-25 15:03:00.088103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.080 [2024-07-25 15:03:00.088134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.080 [2024-07-25 15:03:00.088160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.080 [2024-07-25 15:03:00.088190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.080 [2024-07-25 15:03:00.088220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.080 [2024-07-25 15:03:00.088248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.080 [2024-07-25 15:03:00.088274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.080 [2024-07-25 15:03:00.088303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:08.080 [2024-07-25 15:03:00.088637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.080 [2024-07-25 15:03:00.088666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.080 [2024-07-25 15:03:00.088692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.080 [2024-07-25 15:03:00.088718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.080 [2024-07-25 15:03:00.088741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.080 [2024-07-25 15:03:00.088770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.080 [2024-07-25 15:03:00.088797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.080 [2024-07-25 15:03:00.088831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.080 [2024-07-25 15:03:00.088862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.080 [2024-07-25 15:03:00.088890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.080 [2024-07-25 15:03:00.088918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.080 [2024-07-25 15:03:00.088946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.080 [2024-07-25 15:03:00.088977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.080 [2024-07-25 15:03:00.089006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.080 [2024-07-25 15:03:00.089035] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.080 [2024-07-25 15:03:00.089062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.080 [2024-07-25 15:03:00.089092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.080 [2024-07-25 15:03:00.089123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.080 [2024-07-25 15:03:00.089152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.080 [2024-07-25 15:03:00.089177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.080 [2024-07-25 15:03:00.089213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.080 [2024-07-25 15:03:00.089242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.081 [2024-07-25 15:03:00.089272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.081 [2024-07-25 15:03:00.089299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.081 [2024-07-25 15:03:00.089329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.081 [2024-07-25 15:03:00.089356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.081 [2024-07-25 15:03:00.089385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.081 [2024-07-25 15:03:00.089412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.081 [2024-07-25 15:03:00.089441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:08.081 [2024-07-25 15:03:00.089468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.081 [2024-07-25 15:03:00.089499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.081 [2024-07-25 15:03:00.089528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.081 [2024-07-25 15:03:00.089568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.081 [2024-07-25 15:03:00.089597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.081 [2024-07-25 15:03:00.089628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.081 [2024-07-25 15:03:00.089655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.081 [2024-07-25 15:03:00.089684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.081 [2024-07-25 15:03:00.089713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.081 [2024-07-25 15:03:00.089742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.081 [2024-07-25 15:03:00.089773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.081 [2024-07-25 15:03:00.089801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.081 [2024-07-25 15:03:00.089828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.081 [2024-07-25 15:03:00.089854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.081 [2024-07-25 
15:03:00.089884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.081 [2024-07-25 15:03:00.089913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.081 [2024-07-25 15:03:00.089971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.081 [2024-07-25 15:03:00.090000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.081 [2024-07-25 15:03:00.090027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.081 [2024-07-25 15:03:00.090055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.081 [2024-07-25 15:03:00.090083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.081 [2024-07-25 15:03:00.090112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.081 [2024-07-25 15:03:00.090138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.081 [2024-07-25 15:03:00.090165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.081 [2024-07-25 15:03:00.090192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.081 [2024-07-25 15:03:00.090225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.081 [2024-07-25 15:03:00.090254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.081 [2024-07-25 15:03:00.090282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.081 [2024-07-25 15:03:00.090309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.081 [2024-07-25 15:03:00.090340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.081 [2024-07-25 15:03:00.090368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.081 [2024-07-25 15:03:00.090400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.081 [2024-07-25 15:03:00.090427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.081 [2024-07-25 15:03:00.090460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.081 [2024-07-25 15:03:00.090496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.081 [2024-07-25 15:03:00.090853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.081 [2024-07-25 15:03:00.090885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.081 [2024-07-25 15:03:00.090919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.081 [2024-07-25 15:03:00.090950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.081 [2024-07-25 15:03:00.090977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.081 [2024-07-25 15:03:00.091008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.081 [2024-07-25 15:03:00.091036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.081 [2024-07-25 15:03:00.091064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.081 
[2024-07-25 15:03:00.091092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.081 [2024-07-25 15:03:00.091122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.081 [2024-07-25 15:03:00.091150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.081 [2024-07-25 15:03:00.091180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.081 [2024-07-25 15:03:00.091211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.081 [2024-07-25 15:03:00.091239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.081 [2024-07-25 15:03:00.091266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.081 [2024-07-25 15:03:00.091294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.081 [2024-07-25 15:03:00.091324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.081 [2024-07-25 15:03:00.091357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.081 [2024-07-25 15:03:00.091387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.081 [2024-07-25 15:03:00.091414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.081 [2024-07-25 15:03:00.091442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.081 [2024-07-25 15:03:00.091472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.081 [2024-07-25 15:03:00.091500] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.081 [2024-07-25 15:03:00.091529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.081 [2024-07-25 15:03:00.091557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.081 [2024-07-25 15:03:00.091583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.081 [2024-07-25 15:03:00.091613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.081 [2024-07-25 15:03:00.091638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.081 [2024-07-25 15:03:00.091666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.081 [2024-07-25 15:03:00.091696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.081 [2024-07-25 15:03:00.091726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.081 [2024-07-25 15:03:00.091755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.081 [2024-07-25 15:03:00.091786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.081 [2024-07-25 15:03:00.091815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.081 [2024-07-25 15:03:00.091843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.081 [2024-07-25 15:03:00.091871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.081 [2024-07-25 15:03:00.091899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:08.081 [2024-07-25 15:03:00.091922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.081 [2024-07-25 15:03:00.091951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.081 [2024-07-25 15:03:00.091979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.081 [2024-07-25 15:03:00.092008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.081 [2024-07-25 15:03:00.092036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.082 [2024-07-25 15:03:00.092063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.082 [2024-07-25 15:03:00.092096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.082 [2024-07-25 15:03:00.092125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.082 [2024-07-25 15:03:00.092152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.082 [2024-07-25 15:03:00.092181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.082 [2024-07-25 15:03:00.092215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.082 [2024-07-25 15:03:00.092243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.082 [2024-07-25 15:03:00.092273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.082 [2024-07-25 15:03:00.092300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.082 [2024-07-25 15:03:00.092328] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.082 [2024-07-25 15:03:00.092354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.082 [2024-07-25 15:03:00.092382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.082 [2024-07-25 15:03:00.092407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.082 [2024-07-25 15:03:00.092436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.082 [2024-07-25 15:03:00.092465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.082 [2024-07-25 15:03:00.092493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.082 [2024-07-25 15:03:00.092522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.082 [2024-07-25 15:03:00.092552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.082 [2024-07-25 15:03:00.092581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.082 [2024-07-25 15:03:00.092610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.082 [2024-07-25 15:03:00.092639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.082 [2024-07-25 15:03:00.093086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.082 [2024-07-25 15:03:00.093115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.082 [2024-07-25 15:03:00.093143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:08.082 [2024-07-25 15:03:00.093167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.082 [2024-07-25 15:03:00.093190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.082 [2024-07-25 15:03:00.093220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.082 [2024-07-25 15:03:00.093244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.082 [2024-07-25 15:03:00.093267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.082 [2024-07-25 15:03:00.093291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.082 [2024-07-25 15:03:00.093316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.082 [2024-07-25 15:03:00.093340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.082 [2024-07-25 15:03:00.093364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.082 [2024-07-25 15:03:00.093387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.082 [2024-07-25 15:03:00.093414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.082 [2024-07-25 15:03:00.093439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.082 [2024-07-25 15:03:00.093462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.082 [2024-07-25 15:03:00.093485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.082 [2024-07-25 
15:03:00.093508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.082 [2024-07-25 15:03:00.093531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.082 [2024-07-25 15:03:00.093554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.082 [2024-07-25 15:03:00.093577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.082 [2024-07-25 15:03:00.093601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.082 [2024-07-25 15:03:00.093624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.082 [2024-07-25 15:03:00.093647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.082 [2024-07-25 15:03:00.093670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.082 [2024-07-25 15:03:00.093693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.082 [2024-07-25 15:03:00.093715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.082 [2024-07-25 15:03:00.093739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.082 [2024-07-25 15:03:00.093762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.082 [2024-07-25 15:03:00.093785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.082 [2024-07-25 15:03:00.093807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.082 [2024-07-25 15:03:00.093830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.082 [2024-07-25 15:03:00.093853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.082 
[previous message repeated verbatim from 15:03:00.093877 through 15:03:00.104224; only timestamps differ] 00:07:08.086 [2024-07-25 15:03:00.104256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.086 [2024-07-25 15:03:00.104287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.086 [2024-07-25 15:03:00.104315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.086 [2024-07-25 15:03:00.104344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.086 [2024-07-25 15:03:00.104370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.086 [2024-07-25 15:03:00.104397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.086 [2024-07-25 15:03:00.104425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.086 [2024-07-25 15:03:00.104458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.086 [2024-07-25 15:03:00.104487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.086 [2024-07-25 15:03:00.104516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.086 [2024-07-25 15:03:00.104544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.086 [2024-07-25 15:03:00.104572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.086 [2024-07-25 15:03:00.104599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.086 [2024-07-25 15:03:00.104626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.086 [2024-07-25 15:03:00.104654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.087 
[2024-07-25 15:03:00.104681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.087 [2024-07-25 15:03:00.104707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.087 [2024-07-25 15:03:00.104730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.087 [2024-07-25 15:03:00.104752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.087 [2024-07-25 15:03:00.104775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.087 [2024-07-25 15:03:00.104798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.087 [2024-07-25 15:03:00.104821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.087 [2024-07-25 15:03:00.104843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.087 [2024-07-25 15:03:00.104866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.087 [2024-07-25 15:03:00.104888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.087 [2024-07-25 15:03:00.104913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.087 [2024-07-25 15:03:00.104945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.087 [2024-07-25 15:03:00.104973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.087 [2024-07-25 15:03:00.105005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.087 [2024-07-25 15:03:00.105034] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.087 [2024-07-25 15:03:00.105065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.087 [2024-07-25 15:03:00.105097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.087 [2024-07-25 15:03:00.105124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.087 [2024-07-25 15:03:00.105158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.087 [2024-07-25 15:03:00.105186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.087 [2024-07-25 15:03:00.105223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.087 [2024-07-25 15:03:00.105252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.087 [2024-07-25 15:03:00.105284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.087 [2024-07-25 15:03:00.105314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.087 [2024-07-25 15:03:00.105342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.087 [2024-07-25 15:03:00.105371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.087 [2024-07-25 15:03:00.105399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.087 [2024-07-25 15:03:00.105427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.087 [2024-07-25 15:03:00.105457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:08.087 [2024-07-25 15:03:00.105481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.087 [2024-07-25 15:03:00.105513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.087 [2024-07-25 15:03:00.105543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.087 [2024-07-25 15:03:00.105978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.087 [2024-07-25 15:03:00.106009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.087 [2024-07-25 15:03:00.106042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.087 [2024-07-25 15:03:00.106070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.087 [2024-07-25 15:03:00.106099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.087 [2024-07-25 15:03:00.106130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.087 [2024-07-25 15:03:00.106159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.087 [2024-07-25 15:03:00.106190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.087 [2024-07-25 15:03:00.106223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.087 [2024-07-25 15:03:00.106252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.087 [2024-07-25 15:03:00.106281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.087 [2024-07-25 15:03:00.106313] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.087 [2024-07-25 15:03:00.106342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.087 [2024-07-25 15:03:00.106373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.087 [2024-07-25 15:03:00.106402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.087 [2024-07-25 15:03:00.106432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.087 [2024-07-25 15:03:00.106463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.087 [2024-07-25 15:03:00.106494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.087 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:07:08.087 [2024-07-25 15:03:00.106527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.087 [2024-07-25 15:03:00.106556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.087 [2024-07-25 15:03:00.106585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.087 [2024-07-25 15:03:00.106619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.087 [2024-07-25 15:03:00.106649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.087 [2024-07-25 15:03:00.106680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.087 [2024-07-25 15:03:00.106709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.087 [2024-07-25 
15:03:00.106739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.087 [2024-07-25 15:03:00.106767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.087 [2024-07-25 15:03:00.106796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.087 [2024-07-25 15:03:00.106829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.087 [2024-07-25 15:03:00.106859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.087 [2024-07-25 15:03:00.106888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.087 [2024-07-25 15:03:00.106937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.087 [2024-07-25 15:03:00.106966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.087 [2024-07-25 15:03:00.106995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.087 [2024-07-25 15:03:00.107026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.087 [2024-07-25 15:03:00.107057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.087 [2024-07-25 15:03:00.107107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.087 [2024-07-25 15:03:00.107136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.087 [2024-07-25 15:03:00.107169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.087 [2024-07-25 15:03:00.107205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.087 [2024-07-25 15:03:00.107238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.087 [2024-07-25 15:03:00.107265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.087 [2024-07-25 15:03:00.107302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.087 [2024-07-25 15:03:00.107334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.087 [2024-07-25 15:03:00.107364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.087 [2024-07-25 15:03:00.107393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.087 [2024-07-25 15:03:00.107418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.087 [2024-07-25 15:03:00.107457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.087 [2024-07-25 15:03:00.107486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.087 [2024-07-25 15:03:00.107517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.087 [2024-07-25 15:03:00.107550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.087 [2024-07-25 15:03:00.107583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.088 [2024-07-25 15:03:00.107613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.088 [2024-07-25 15:03:00.107642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.088 
[2024-07-25 15:03:00.107671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.088 [2024-07-25 15:03:00.107701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.088 [2024-07-25 15:03:00.107733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.088 [2024-07-25 15:03:00.107763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.088 [2024-07-25 15:03:00.107789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.088 [2024-07-25 15:03:00.107820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.088 [2024-07-25 15:03:00.107849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.088 [2024-07-25 15:03:00.107876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.088 [2024-07-25 15:03:00.107911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.088 [2024-07-25 15:03:00.107945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.088 [2024-07-25 15:03:00.108377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.088 [2024-07-25 15:03:00.108404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.088 [2024-07-25 15:03:00.108435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.088 [2024-07-25 15:03:00.108464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.088 [2024-07-25 15:03:00.108490] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.088 [2024-07-25 15:03:00.108518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.088 [2024-07-25 15:03:00.108542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.088 [2024-07-25 15:03:00.108566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.088 [2024-07-25 15:03:00.108596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.088 [2024-07-25 15:03:00.108623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.088 [2024-07-25 15:03:00.108653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.088 [2024-07-25 15:03:00.108681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.088 [2024-07-25 15:03:00.108708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.088 [2024-07-25 15:03:00.108737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.088 [2024-07-25 15:03:00.108764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.088 [2024-07-25 15:03:00.108793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.088 [2024-07-25 15:03:00.108822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.088 [2024-07-25 15:03:00.108869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.088 [2024-07-25 15:03:00.108897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:08.088 [2024-07-25 15:03:00.108931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.088 [2024-07-25 15:03:00.108960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.088 [2024-07-25 15:03:00.108987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.088 [2024-07-25 15:03:00.109018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.088 [2024-07-25 15:03:00.109047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.088 [2024-07-25 15:03:00.109082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.088 [2024-07-25 15:03:00.109111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.088 [2024-07-25 15:03:00.109142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.088 [2024-07-25 15:03:00.109173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.088 [2024-07-25 15:03:00.109204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.088 [2024-07-25 15:03:00.109239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.088 [2024-07-25 15:03:00.109270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.088 [2024-07-25 15:03:00.109300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.088 [2024-07-25 15:03:00.109327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.088 [2024-07-25 15:03:00.109355] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.088 [2024-07-25 15:03:00.109383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.088 [2024-07-25 15:03:00.109417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.088 [2024-07-25 15:03:00.109445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.088 [2024-07-25 15:03:00.109476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.088 [2024-07-25 15:03:00.109507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.088 [2024-07-25 15:03:00.109535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.088 [2024-07-25 15:03:00.109561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.088 [2024-07-25 15:03:00.109591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.088 [2024-07-25 15:03:00.109625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.088 [2024-07-25 15:03:00.109650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.088 [2024-07-25 15:03:00.109677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.088 [2024-07-25 15:03:00.109702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.088 [2024-07-25 15:03:00.109730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.088 [2024-07-25 15:03:00.109758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:08.088 [2024-07-25 15:03:00.109788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.088 [2024-07-25 15:03:00.109814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.088 [2024-07-25 15:03:00.109846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.088 [2024-07-25 15:03:00.109870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.088 [2024-07-25 15:03:00.109899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.088 [2024-07-25 15:03:00.109925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.088 [2024-07-25 15:03:00.109953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.088 [2024-07-25 15:03:00.109981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.088 [2024-07-25 15:03:00.110010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.088 [2024-07-25 15:03:00.110039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.088 [2024-07-25 15:03:00.110071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.088 [2024-07-25 15:03:00.110103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.088 [2024-07-25 15:03:00.110129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.088 [2024-07-25 15:03:00.110159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.088 [2024-07-25 
15:03:00.110186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.088 [2024-07-25 15:03:00.110599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.088 [2024-07-25 15:03:00.110633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.088 [2024-07-25 15:03:00.110660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.088 [2024-07-25 15:03:00.110688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.088 [2024-07-25 15:03:00.110714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.088 [2024-07-25 15:03:00.110743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.088 [2024-07-25 15:03:00.110772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.088 [2024-07-25 15:03:00.110815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.089 [2024-07-25 15:03:00.110845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.089 [2024-07-25 15:03:00.110873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.089 [2024-07-25 15:03:00.110901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.089 [2024-07-25 15:03:00.110928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.089 [2024-07-25 15:03:00.110958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.089 [2024-07-25 15:03:00.110988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.089 [2024-07-25 15:03:00.111020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.089 [2024-07-25 15:03:00.121531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd:
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.092 [2024-07-25 15:03:00.121559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.092 [2024-07-25 15:03:00.121992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.092 [2024-07-25 15:03:00.122026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.092 [2024-07-25 15:03:00.122057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.092 [2024-07-25 15:03:00.122104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.092 [2024-07-25 15:03:00.122135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.092 [2024-07-25 15:03:00.122161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.092 [2024-07-25 15:03:00.122192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.093 [2024-07-25 15:03:00.122227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.093 [2024-07-25 15:03:00.122255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.093 [2024-07-25 15:03:00.122285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.093 [2024-07-25 15:03:00.122314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.093 [2024-07-25 15:03:00.122337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.093 [2024-07-25 15:03:00.122366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.093 
[2024-07-25 15:03:00.122394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.093 [2024-07-25 15:03:00.122421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.093 [2024-07-25 15:03:00.122451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.093 [2024-07-25 15:03:00.122479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.093 [2024-07-25 15:03:00.122510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.093 [2024-07-25 15:03:00.122540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.093 [2024-07-25 15:03:00.122566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.093 [2024-07-25 15:03:00.122595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.093 [2024-07-25 15:03:00.122623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.093 [2024-07-25 15:03:00.122649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.093 [2024-07-25 15:03:00.122679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.093 [2024-07-25 15:03:00.122704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.093 [2024-07-25 15:03:00.122736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.093 [2024-07-25 15:03:00.122763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.093 [2024-07-25 15:03:00.122791] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.093 [2024-07-25 15:03:00.122819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.093 [2024-07-25 15:03:00.122850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.093 [2024-07-25 15:03:00.122880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.093 [2024-07-25 15:03:00.122912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.093 [2024-07-25 15:03:00.122939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.093 [2024-07-25 15:03:00.122970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.093 [2024-07-25 15:03:00.123001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.093 [2024-07-25 15:03:00.123029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.093 [2024-07-25 15:03:00.123056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.093 [2024-07-25 15:03:00.123079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.093 [2024-07-25 15:03:00.123108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.093 [2024-07-25 15:03:00.123135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.093 [2024-07-25 15:03:00.123158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.093 [2024-07-25 15:03:00.123183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:08.093 [2024-07-25 15:03:00.123211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.093 [2024-07-25 15:03:00.123234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.093 [2024-07-25 15:03:00.123256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.093 [2024-07-25 15:03:00.123279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.093 [2024-07-25 15:03:00.123303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.093 [2024-07-25 15:03:00.123326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.093 [2024-07-25 15:03:00.123354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.093 [2024-07-25 15:03:00.123385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.093 [2024-07-25 15:03:00.123412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.093 [2024-07-25 15:03:00.123440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.093 [2024-07-25 15:03:00.123466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.093 [2024-07-25 15:03:00.123497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.093 [2024-07-25 15:03:00.123533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.093 [2024-07-25 15:03:00.123560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.093 [2024-07-25 15:03:00.123587] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.093 [2024-07-25 15:03:00.123614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.093 [2024-07-25 15:03:00.123653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.093 [2024-07-25 15:03:00.123683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.093 [2024-07-25 15:03:00.123710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.093 [2024-07-25 15:03:00.123735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.093 [2024-07-25 15:03:00.123764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.093 [2024-07-25 15:03:00.124129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.093 [2024-07-25 15:03:00.124161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.093 [2024-07-25 15:03:00.124189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.093 [2024-07-25 15:03:00.124220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.093 [2024-07-25 15:03:00.124254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.093 [2024-07-25 15:03:00.124282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.093 [2024-07-25 15:03:00.124309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.093 [2024-07-25 15:03:00.124336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:08.093 [2024-07-25 15:03:00.124364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.093 [2024-07-25 15:03:00.124394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.093 [2024-07-25 15:03:00.124421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.093 [2024-07-25 15:03:00.124452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.093 [2024-07-25 15:03:00.124486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.093 [2024-07-25 15:03:00.124523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.093 [2024-07-25 15:03:00.124557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.093 [2024-07-25 15:03:00.124583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.093 [2024-07-25 15:03:00.124609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.093 [2024-07-25 15:03:00.124640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.093 [2024-07-25 15:03:00.124666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.093 [2024-07-25 15:03:00.124699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.093 [2024-07-25 15:03:00.124726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.093 [2024-07-25 15:03:00.124748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.093 [2024-07-25 
15:03:00.124776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.093 [2024-07-25 15:03:00.124806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.093 [2024-07-25 15:03:00.124836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.093 [2024-07-25 15:03:00.124863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.093 [2024-07-25 15:03:00.124890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.093 [2024-07-25 15:03:00.124921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.094 [2024-07-25 15:03:00.124950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.094 [2024-07-25 15:03:00.124979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.094 [2024-07-25 15:03:00.125008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.094 [2024-07-25 15:03:00.125035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.094 [2024-07-25 15:03:00.125064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.094 [2024-07-25 15:03:00.125090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.094 [2024-07-25 15:03:00.125119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.094 [2024-07-25 15:03:00.125147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.094 [2024-07-25 15:03:00.125175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.094 [2024-07-25 15:03:00.125212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.094 [2024-07-25 15:03:00.125241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.094 [2024-07-25 15:03:00.125272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.094 [2024-07-25 15:03:00.125300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.094 [2024-07-25 15:03:00.125332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.094 [2024-07-25 15:03:00.125360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.094 [2024-07-25 15:03:00.125385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.094 [2024-07-25 15:03:00.125415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.094 [2024-07-25 15:03:00.125445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.094 [2024-07-25 15:03:00.125476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.094 [2024-07-25 15:03:00.125500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.094 [2024-07-25 15:03:00.125527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.094 [2024-07-25 15:03:00.125552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.094 [2024-07-25 15:03:00.125586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.094 
[2024-07-25 15:03:00.125622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.094 [2024-07-25 15:03:00.125661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.094 [2024-07-25 15:03:00.125694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.094 [2024-07-25 15:03:00.125731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.094 [2024-07-25 15:03:00.125758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.094 [2024-07-25 15:03:00.125781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.094 [2024-07-25 15:03:00.125814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.094 [2024-07-25 15:03:00.125842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.094 [2024-07-25 15:03:00.125868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.094 [2024-07-25 15:03:00.125895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.094 [2024-07-25 15:03:00.125923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.094 [2024-07-25 15:03:00.125951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.094 [2024-07-25 15:03:00.125979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.094 [2024-07-25 15:03:00.126347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.094 [2024-07-25 15:03:00.126381] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.094 [2024-07-25 15:03:00.126410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.094 [2024-07-25 15:03:00.126438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.094 [2024-07-25 15:03:00.126469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.094 [2024-07-25 15:03:00.126497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.094 [2024-07-25 15:03:00.126527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.094 [2024-07-25 15:03:00.126553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.094 [2024-07-25 15:03:00.126603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.094 [2024-07-25 15:03:00.126631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.094 [2024-07-25 15:03:00.126671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.094 [2024-07-25 15:03:00.126698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.094 [2024-07-25 15:03:00.126728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.094 [2024-07-25 15:03:00.126756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.094 [2024-07-25 15:03:00.126788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.094 [2024-07-25 15:03:00.126817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:08.094 [2024-07-25 15:03:00.126856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.094 [2024-07-25 15:03:00.126884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.094 [2024-07-25 15:03:00.126920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.094 [2024-07-25 15:03:00.126946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.094 [2024-07-25 15:03:00.126983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.094 [2024-07-25 15:03:00.127012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.094 [2024-07-25 15:03:00.127066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.094 [2024-07-25 15:03:00.127095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.094 [2024-07-25 15:03:00.127123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.094 [2024-07-25 15:03:00.127149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.094 [2024-07-25 15:03:00.127177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.094 [2024-07-25 15:03:00.127204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.094 [2024-07-25 15:03:00.127235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.094 [2024-07-25 15:03:00.127266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.094 [2024-07-25 15:03:00.127297] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.094 [2024-07-25 15:03:00.127334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.094 [2024-07-25 15:03:00.127363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.094 [2024-07-25 15:03:00.127391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.094 [2024-07-25 15:03:00.127419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.094 [2024-07-25 15:03:00.127446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.094 [2024-07-25 15:03:00.127473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.094 [2024-07-25 15:03:00.127500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.094 [2024-07-25 15:03:00.127530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.094 [2024-07-25 15:03:00.127556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.094 [2024-07-25 15:03:00.127584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.094 [2024-07-25 15:03:00.127607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.094 true 00:07:08.094 [2024-07-25 15:03:00.127633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.094 [2024-07-25 15:03:00.127662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.094 [2024-07-25 15:03:00.127686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.094 [2024-07-25 15:03:00.127711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.094 [2024-07-25 15:03:00.127737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.094 [2024-07-25 15:03:00.127763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.095 [2024-07-25 15:03:00.127790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.095 [2024-07-25 15:03:00.127816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.095 [2024-07-25 15:03:00.127845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.095 [2024-07-25 15:03:00.127871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.095 [2024-07-25 15:03:00.127900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.095 [2024-07-25 15:03:00.127926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.095 [2024-07-25 15:03:00.127969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.095 [2024-07-25 15:03:00.127997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.095 [2024-07-25 15:03:00.128028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.095 [2024-07-25 15:03:00.128056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.095 [2024-07-25 15:03:00.128080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.095 
[2024-07-25 15:03:00.128106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.095 [2024-07-25 15:03:00.128131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.095 [2024-07-25 15:03:00.128162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.095 [2024-07-25 15:03:00.128190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.095 [2024-07-25 15:03:00.128566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.095 [2024-07-25 15:03:00.128597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.095 [2024-07-25 15:03:00.128624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.095 [2024-07-25 15:03:00.128654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.095 [2024-07-25 15:03:00.128683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.095 [2024-07-25 15:03:00.128714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.095 [2024-07-25 15:03:00.128741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.095 [2024-07-25 15:03:00.128772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.095 [2024-07-25 15:03:00.128798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.095 [2024-07-25 15:03:00.128843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.095 [2024-07-25 15:03:00.128871] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.095 [2024-07-25 15:03:00.128910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[2024-07-25 15:03:00.138647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.098 [2024-07-25 15:03:00.138673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.098 [2024-07-25 15:03:00.138699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.098 [2024-07-25 15:03:00.138730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.098 [2024-07-25 15:03:00.138754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.098 [2024-07-25 15:03:00.138780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.098 [2024-07-25 15:03:00.138811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.098 [2024-07-25 15:03:00.138840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.098 [2024-07-25 15:03:00.138866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.098 [2024-07-25 15:03:00.138895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.098 [2024-07-25 15:03:00.138922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.098 [2024-07-25 15:03:00.138950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.099 [2024-07-25 15:03:00.138980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.099 [2024-07-25 15:03:00.139009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.099 [2024-07-25 15:03:00.139037] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.099 [2024-07-25 15:03:00.139070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.099 [2024-07-25 15:03:00.139096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.099 [2024-07-25 15:03:00.139121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.099 [2024-07-25 15:03:00.139145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.099 [2024-07-25 15:03:00.139605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.099 [2024-07-25 15:03:00.139636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.099 [2024-07-25 15:03:00.139668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.099 [2024-07-25 15:03:00.139697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.099 [2024-07-25 15:03:00.139724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.099 [2024-07-25 15:03:00.139751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.099 [2024-07-25 15:03:00.139781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.099 [2024-07-25 15:03:00.139809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.099 [2024-07-25 15:03:00.139837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.099 [2024-07-25 15:03:00.139867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:08.099 [2024-07-25 15:03:00.139897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.099 [2024-07-25 15:03:00.139921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.099 [2024-07-25 15:03:00.139954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.099 [2024-07-25 15:03:00.139983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.099 [2024-07-25 15:03:00.140039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.099 [2024-07-25 15:03:00.140067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.099 [2024-07-25 15:03:00.140110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.099 [2024-07-25 15:03:00.140137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.099 [2024-07-25 15:03:00.140167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.099 [2024-07-25 15:03:00.140196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.099 [2024-07-25 15:03:00.140230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.099 [2024-07-25 15:03:00.140259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.099 [2024-07-25 15:03:00.140287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.099 [2024-07-25 15:03:00.140317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.099 [2024-07-25 15:03:00.140347] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.099 [2024-07-25 15:03:00.140375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.099 [2024-07-25 15:03:00.140405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.099 [2024-07-25 15:03:00.140431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.099 [2024-07-25 15:03:00.140455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.099 [2024-07-25 15:03:00.140484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.099 [2024-07-25 15:03:00.140511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.099 [2024-07-25 15:03:00.140540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.099 [2024-07-25 15:03:00.140565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.099 [2024-07-25 15:03:00.140594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.099 [2024-07-25 15:03:00.140621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.099 [2024-07-25 15:03:00.140656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.099 [2024-07-25 15:03:00.140685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.099 [2024-07-25 15:03:00.140710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.099 [2024-07-25 15:03:00.140734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:08.099 [2024-07-25 15:03:00.140760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.099 [2024-07-25 15:03:00.140786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.099 [2024-07-25 15:03:00.140812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.099 [2024-07-25 15:03:00.140840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.099 [2024-07-25 15:03:00.140866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.099 [2024-07-25 15:03:00.140893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.099 [2024-07-25 15:03:00.140924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.099 [2024-07-25 15:03:00.140952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.099 [2024-07-25 15:03:00.140990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.099 [2024-07-25 15:03:00.141015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.099 [2024-07-25 15:03:00.141043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.099 [2024-07-25 15:03:00.141070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.099 [2024-07-25 15:03:00.141097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.099 [2024-07-25 15:03:00.141124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.099 [2024-07-25 
15:03:00.141155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.099 [2024-07-25 15:03:00.141186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.099 [2024-07-25 15:03:00.141216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.099 [2024-07-25 15:03:00.141244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.099 [2024-07-25 15:03:00.141274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.099 [2024-07-25 15:03:00.141302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.099 [2024-07-25 15:03:00.141330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.099 [2024-07-25 15:03:00.141356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.099 [2024-07-25 15:03:00.141383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.099 [2024-07-25 15:03:00.141410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.099 [2024-07-25 15:03:00.141784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.099 [2024-07-25 15:03:00.141815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.099 [2024-07-25 15:03:00.141846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.099 [2024-07-25 15:03:00.141876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.099 [2024-07-25 15:03:00.141905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.099 [2024-07-25 15:03:00.141933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.099 [2024-07-25 15:03:00.141960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.099 [2024-07-25 15:03:00.141989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.099 [2024-07-25 15:03:00.142014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.099 [2024-07-25 15:03:00.142041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.099 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:07:08.099 [2024-07-25 15:03:00.142070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.099 [2024-07-25 15:03:00.142097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.099 [2024-07-25 15:03:00.142124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.100 [2024-07-25 15:03:00.142152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.100 [2024-07-25 15:03:00.142181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.100 [2024-07-25 15:03:00.142213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.100 [2024-07-25 15:03:00.142240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.100 [2024-07-25 15:03:00.142272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.100 [2024-07-25 15:03:00.142300] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.100 [2024-07-25 15:03:00.142328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.100 [2024-07-25 15:03:00.142356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.100 [2024-07-25 15:03:00.142385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.100 [2024-07-25 15:03:00.142413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.100 [2024-07-25 15:03:00.142439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.100 [2024-07-25 15:03:00.142469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.100 [2024-07-25 15:03:00.142496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.100 [2024-07-25 15:03:00.142524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.100 [2024-07-25 15:03:00.142551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.100 [2024-07-25 15:03:00.142580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.100 [2024-07-25 15:03:00.142608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.100 [2024-07-25 15:03:00.142635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.100 [2024-07-25 15:03:00.142668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.100 [2024-07-25 15:03:00.142696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:08.100 [2024-07-25 15:03:00.142724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.100 [2024-07-25 15:03:00.142752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.100 [2024-07-25 15:03:00.142778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.100 [2024-07-25 15:03:00.142810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.100 [2024-07-25 15:03:00.142835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.100 [2024-07-25 15:03:00.142880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.100 [2024-07-25 15:03:00.142910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.100 [2024-07-25 15:03:00.142943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.100 [2024-07-25 15:03:00.142971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.100 [2024-07-25 15:03:00.142997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.100 [2024-07-25 15:03:00.143025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.100 [2024-07-25 15:03:00.143052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.100 [2024-07-25 15:03:00.143079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.100 [2024-07-25 15:03:00.143104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.100 [2024-07-25 15:03:00.143130] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.100 [2024-07-25 15:03:00.143159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.100 [2024-07-25 15:03:00.143184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.100 [2024-07-25 15:03:00.143218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.100 [2024-07-25 15:03:00.143245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.100 [2024-07-25 15:03:00.143275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.100 [2024-07-25 15:03:00.143303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.100 [2024-07-25 15:03:00.143336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.100 [2024-07-25 15:03:00.143374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.100 [2024-07-25 15:03:00.143407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.100 [2024-07-25 15:03:00.143432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.100 [2024-07-25 15:03:00.143458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.100 [2024-07-25 15:03:00.143486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.100 [2024-07-25 15:03:00.143511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.100 [2024-07-25 15:03:00.143539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:08.100 [2024-07-25 15:03:00.143566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.100 [2024-07-25 15:03:00.143592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.100 [2024-07-25 15:03:00.144134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.100 [2024-07-25 15:03:00.144164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.100 [2024-07-25 15:03:00.144193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.100 [2024-07-25 15:03:00.144224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.100 [2024-07-25 15:03:00.144251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.100 [2024-07-25 15:03:00.144277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.100 [2024-07-25 15:03:00.144304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.100 [2024-07-25 15:03:00.144332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.100 [2024-07-25 15:03:00.144366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.100 [2024-07-25 15:03:00.144394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.100 [2024-07-25 15:03:00.144426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.100 [2024-07-25 15:03:00.144455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.100 [2024-07-25 
15:03:00.144485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.100 [2024-07-25 15:03:00.144513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.100 [2024-07-25 15:03:00.144541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.100 [2024-07-25 15:03:00.144569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.100 [2024-07-25 15:03:00.144597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.100 [2024-07-25 15:03:00.144629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.100 [2024-07-25 15:03:00.144657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.100 [2024-07-25 15:03:00.144698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.100 [2024-07-25 15:03:00.144727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.100 [2024-07-25 15:03:00.144760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.100 [2024-07-25 15:03:00.144788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.100 [2024-07-25 15:03:00.144818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.100 [2024-07-25 15:03:00.144845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.100 [2024-07-25 15:03:00.144874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.100 [2024-07-25 15:03:00.144900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.100 [2024-07-25 15:03:00.144930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.100 [2024-07-25 15:03:00.144959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.100 [2024-07-25 15:03:00.144987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.100 [2024-07-25 15:03:00.145016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.100 [2024-07-25 15:03:00.145043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.101 [2024-07-25 15:03:00.145073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.101 [2024-07-25 15:03:00.145100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.101 [2024-07-25 15:03:00.145132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.101 [2024-07-25 15:03:00.145162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.101 [2024-07-25 15:03:00.145202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.101 [2024-07-25 15:03:00.145229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.101 [2024-07-25 15:03:00.145260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.101 [2024-07-25 15:03:00.145283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.101 [2024-07-25 15:03:00.145313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.101 
[2024-07-25 15:03:00.145341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.101 [2024-07-25 15:03:00.145369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.101 [2024-07-25 15:03:00.145394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.101 [2024-07-25 15:03:00.145421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.101 [2024-07-25 15:03:00.145454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.101 [2024-07-25 15:03:00.145483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.101 [2024-07-25 15:03:00.145509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.101 [2024-07-25 15:03:00.145537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.101 [2024-07-25 15:03:00.145565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.101 [2024-07-25 15:03:00.145593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.101 [2024-07-25 15:03:00.145619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.101 [2024-07-25 15:03:00.145652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.101 [2024-07-25 15:03:00.145683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.101 [2024-07-25 15:03:00.145715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.101 [2024-07-25 15:03:00.145741] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.101 [2024-07-25 15:03:00.145767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 
[previous error repeated from 15:03:00.145793 through 15:03:00.153382]
00:07:08.103 15:03:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 59262 
[previous error repeated from 15:03:00.153611 through 15:03:00.153951]
00:07:08.104 15:03:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
[previous error repeated from 15:03:00.153978 through 15:03:00.155839]
00:07:08.104 [2024-07-25 15:03:00.155866] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.104 [2024-07-25 15:03:00.155896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.104 [2024-07-25 15:03:00.155950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.104 [2024-07-25 15:03:00.155978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.104 [2024-07-25 15:03:00.156007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.104 [2024-07-25 15:03:00.156033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.104 [2024-07-25 15:03:00.156061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.104 [2024-07-25 15:03:00.156088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.104 [2024-07-25 15:03:00.156113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.104 [2024-07-25 15:03:00.156142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.104 [2024-07-25 15:03:00.156168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.104 [2024-07-25 15:03:00.156194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.104 [2024-07-25 15:03:00.156231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.104 [2024-07-25 15:03:00.156266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.104 [2024-07-25 15:03:00.156297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:08.105 [2024-07-25 15:03:00.156330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.105 [2024-07-25 15:03:00.156355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.105 [2024-07-25 15:03:00.156386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.105 [2024-07-25 15:03:00.156415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.105 [2024-07-25 15:03:00.156442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.105 [2024-07-25 15:03:00.156483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.105 [2024-07-25 15:03:00.156512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.105 [2024-07-25 15:03:00.156537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.105 [2024-07-25 15:03:00.156565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.105 [2024-07-25 15:03:00.156590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.105 [2024-07-25 15:03:00.156619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.105 [2024-07-25 15:03:00.156646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.105 [2024-07-25 15:03:00.156922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.105 [2024-07-25 15:03:00.156983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.105 [2024-07-25 15:03:00.157015] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.105 [2024-07-25 15:03:00.157056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.105 [2024-07-25 15:03:00.157083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.105 [2024-07-25 15:03:00.157113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.105 [2024-07-25 15:03:00.157139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.105 [2024-07-25 15:03:00.157170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.105 [2024-07-25 15:03:00.157195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.105 [2024-07-25 15:03:00.157225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.105 [2024-07-25 15:03:00.157253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.105 [2024-07-25 15:03:00.157280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.105 [2024-07-25 15:03:00.157308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.105 [2024-07-25 15:03:00.157334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.105 [2024-07-25 15:03:00.157362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.105 [2024-07-25 15:03:00.157388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.105 [2024-07-25 15:03:00.157416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:08.105 [2024-07-25 15:03:00.157441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.105 [2024-07-25 15:03:00.157469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.105 [2024-07-25 15:03:00.157497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.105 [2024-07-25 15:03:00.157524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.105 [2024-07-25 15:03:00.157554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.105 [2024-07-25 15:03:00.157583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.105 [2024-07-25 15:03:00.157612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.105 [2024-07-25 15:03:00.157642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.105 [2024-07-25 15:03:00.157670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.105 [2024-07-25 15:03:00.157699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.105 [2024-07-25 15:03:00.157726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.105 [2024-07-25 15:03:00.157755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.105 [2024-07-25 15:03:00.157790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.105 [2024-07-25 15:03:00.157816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.105 [2024-07-25 
15:03:00.157844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.105 [2024-07-25 15:03:00.157870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.105 [2024-07-25 15:03:00.157899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.105 [2024-07-25 15:03:00.157925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.105 [2024-07-25 15:03:00.157952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.105 [2024-07-25 15:03:00.157978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.105 [2024-07-25 15:03:00.158004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.105 [2024-07-25 15:03:00.158033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.105 [2024-07-25 15:03:00.158062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.105 [2024-07-25 15:03:00.158089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.105 [2024-07-25 15:03:00.158115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.105 [2024-07-25 15:03:00.158144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.105 [2024-07-25 15:03:00.158174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.105 [2024-07-25 15:03:00.158203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.105 [2024-07-25 15:03:00.158232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.105 [2024-07-25 15:03:00.158265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.105 [2024-07-25 15:03:00.158294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.105 [2024-07-25 15:03:00.158326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.105 [2024-07-25 15:03:00.158355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.105 [2024-07-25 15:03:00.158380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.105 [2024-07-25 15:03:00.158409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.105 [2024-07-25 15:03:00.158436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.105 [2024-07-25 15:03:00.158465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.105 [2024-07-25 15:03:00.158494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.105 [2024-07-25 15:03:00.158521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.105 [2024-07-25 15:03:00.158549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.105 [2024-07-25 15:03:00.158578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.105 [2024-07-25 15:03:00.158606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.105 [2024-07-25 15:03:00.158634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.105 
[2024-07-25 15:03:00.158663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.105 [2024-07-25 15:03:00.158691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.105 [2024-07-25 15:03:00.158722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.105 [2024-07-25 15:03:00.158749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.105 [2024-07-25 15:03:00.158876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.105 [2024-07-25 15:03:00.158908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.105 [2024-07-25 15:03:00.158932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.105 [2024-07-25 15:03:00.158961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.105 [2024-07-25 15:03:00.158991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.105 [2024-07-25 15:03:00.159018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.105 [2024-07-25 15:03:00.159047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.105 [2024-07-25 15:03:00.159075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.105 [2024-07-25 15:03:00.159104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.106 [2024-07-25 15:03:00.159132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.106 [2024-07-25 15:03:00.159157] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.106 [2024-07-25 15:03:00.159191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.106 [2024-07-25 15:03:00.159530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.106 [2024-07-25 15:03:00.159559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.106 [2024-07-25 15:03:00.159584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.106 [2024-07-25 15:03:00.159611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.106 [2024-07-25 15:03:00.159640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.106 [2024-07-25 15:03:00.159669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.106 [2024-07-25 15:03:00.159699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.106 [2024-07-25 15:03:00.159727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.106 [2024-07-25 15:03:00.159755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.106 [2024-07-25 15:03:00.159782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.106 [2024-07-25 15:03:00.159826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.106 [2024-07-25 15:03:00.159853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.106 [2024-07-25 15:03:00.159889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:08.106 [2024-07-25 15:03:00.159918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.106 [2024-07-25 15:03:00.159947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.106 [2024-07-25 15:03:00.159975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.106 [2024-07-25 15:03:00.160004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.106 [2024-07-25 15:03:00.160031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.106 [2024-07-25 15:03:00.160060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.106 [2024-07-25 15:03:00.160088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.106 [2024-07-25 15:03:00.160117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.106 [2024-07-25 15:03:00.160145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.106 [2024-07-25 15:03:00.160176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.106 [2024-07-25 15:03:00.160206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.106 [2024-07-25 15:03:00.160232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.106 [2024-07-25 15:03:00.160255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.106 [2024-07-25 15:03:00.160285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.106 [2024-07-25 15:03:00.160313] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.106 [2024-07-25 15:03:00.160343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.106 [2024-07-25 15:03:00.160378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.106 [2024-07-25 15:03:00.160408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.106 [2024-07-25 15:03:00.160439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.106 [2024-07-25 15:03:00.160465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.106 [2024-07-25 15:03:00.160489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.106 [2024-07-25 15:03:00.160518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.106 [2024-07-25 15:03:00.160546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.106 [2024-07-25 15:03:00.160574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.106 [2024-07-25 15:03:00.160601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.106 [2024-07-25 15:03:00.160628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.106 [2024-07-25 15:03:00.160655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.106 [2024-07-25 15:03:00.160685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.106 [2024-07-25 15:03:00.160713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:08.106 [2024-07-25 15:03:00.160743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.106 [2024-07-25 15:03:00.160769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.106 [2024-07-25 15:03:00.160794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.106 [2024-07-25 15:03:00.160820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.106 [2024-07-25 15:03:00.160850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.106 [2024-07-25 15:03:00.160876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.106 [2024-07-25 15:03:00.160902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.106 [2024-07-25 15:03:00.160930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.106 [2024-07-25 15:03:00.160957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.106 [2024-07-25 15:03:00.161434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.106 [2024-07-25 15:03:00.161466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.106 [2024-07-25 15:03:00.161492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.106 [2024-07-25 15:03:00.161519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.106 [2024-07-25 15:03:00.161549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.106 [2024-07-25 
15:03:00.161581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.106 [2024-07-25 15:03:00.161609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.106 [2024-07-25 15:03:00.161638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.106 [2024-07-25 15:03:00.161666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.106 [2024-07-25 15:03:00.161693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.106 [2024-07-25 15:03:00.161720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.106 [2024-07-25 15:03:00.161754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.106 [2024-07-25 15:03:00.161781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.106 [2024-07-25 15:03:00.161830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.106 [2024-07-25 15:03:00.161857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.106 [2024-07-25 15:03:00.161883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.106 [2024-07-25 15:03:00.161910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.106 [2024-07-25 15:03:00.161940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.107 [2024-07-25 15:03:00.161972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.107 [2024-07-25 15:03:00.161995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.107 [2024-07-25 15:03:00.162016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.107 [2024-07-25 15:03:00.162038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.107 [2024-07-25 15:03:00.162060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.107 [2024-07-25 15:03:00.162081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.107 [2024-07-25 15:03:00.162109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.107 [2024-07-25 15:03:00.162137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.107 [2024-07-25 15:03:00.162181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.107 [2024-07-25 15:03:00.162210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.107 [2024-07-25 15:03:00.162241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.107 [2024-07-25 15:03:00.162267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.107 [2024-07-25 15:03:00.162296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.107 [2024-07-25 15:03:00.162323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.107 [2024-07-25 15:03:00.162351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.107 [2024-07-25 15:03:00.162381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.107 
[2024-07-25 15:03:00.162475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.107 [2024-07-25 15:03:00.162505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.107 [2024-07-25 15:03:00.162532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.107 [2024-07-25 15:03:00.162562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.107 [2024-07-25 15:03:00.162590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.107 [2024-07-25 15:03:00.162617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.107 [2024-07-25 15:03:00.162643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.107 [2024-07-25 15:03:00.162671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.107 [2024-07-25 15:03:00.162697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.107 [2024-07-25 15:03:00.162724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.107 [2024-07-25 15:03:00.162756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.107 [2024-07-25 15:03:00.162788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.107 [2024-07-25 15:03:00.162812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.107 [2024-07-25 15:03:00.162842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.107 [2024-07-25 15:03:00.162869] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[2024-07-25 15:03:00.173020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.110 [2024-07-25 15:03:00.173049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.110 [2024-07-25 15:03:00.173079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.110 [2024-07-25 15:03:00.173108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.110 [2024-07-25 15:03:00.173139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.110 [2024-07-25 15:03:00.173169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.110 [2024-07-25 15:03:00.173198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.110 [2024-07-25 15:03:00.173229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.110 [2024-07-25 15:03:00.173255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.110 [2024-07-25 15:03:00.173279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.110 [2024-07-25 15:03:00.173303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.110 [2024-07-25 15:03:00.173328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.110 [2024-07-25 15:03:00.173357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.111 [2024-07-25 15:03:00.173384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.111 [2024-07-25 15:03:00.173413] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.111 [2024-07-25 15:03:00.173441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.111 [2024-07-25 15:03:00.173469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.111 [2024-07-25 15:03:00.173496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.111 [2024-07-25 15:03:00.173523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.111 [2024-07-25 15:03:00.173550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.111 [2024-07-25 15:03:00.173574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.111 [2024-07-25 15:03:00.173600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.111 [2024-07-25 15:03:00.173630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.111 [2024-07-25 15:03:00.173655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.111 [2024-07-25 15:03:00.173682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.111 [2024-07-25 15:03:00.173710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.111 [2024-07-25 15:03:00.173739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.111 [2024-07-25 15:03:00.173767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.111 [2024-07-25 15:03:00.173794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:08.111 [2024-07-25 15:03:00.173821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.111 [2024-07-25 15:03:00.173846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.111 [2024-07-25 15:03:00.173889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.111 [2024-07-25 15:03:00.173916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.111 [2024-07-25 15:03:00.173972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.111 [2024-07-25 15:03:00.174002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.111 [2024-07-25 15:03:00.174031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.111 [2024-07-25 15:03:00.174058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.111 [2024-07-25 15:03:00.174086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.111 [2024-07-25 15:03:00.174115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.111 [2024-07-25 15:03:00.174145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.111 [2024-07-25 15:03:00.174173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.111 [2024-07-25 15:03:00.174206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.111 [2024-07-25 15:03:00.174233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.111 [2024-07-25 15:03:00.174260] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.111 [2024-07-25 15:03:00.174735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.111 [2024-07-25 15:03:00.174766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.111 [2024-07-25 15:03:00.174793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.111 [2024-07-25 15:03:00.174817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.111 [2024-07-25 15:03:00.174848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.111 [2024-07-25 15:03:00.174876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.111 [2024-07-25 15:03:00.174906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.111 [2024-07-25 15:03:00.174931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.111 [2024-07-25 15:03:00.174958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.111 [2024-07-25 15:03:00.174987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.111 [2024-07-25 15:03:00.175014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.111 [2024-07-25 15:03:00.175046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.111 [2024-07-25 15:03:00.175073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.111 [2024-07-25 15:03:00.175102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:08.111 [2024-07-25 15:03:00.175130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.111 [2024-07-25 15:03:00.175159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.111 [2024-07-25 15:03:00.175187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.111 [2024-07-25 15:03:00.175218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.111 [2024-07-25 15:03:00.175246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.111 [2024-07-25 15:03:00.175278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.111 [2024-07-25 15:03:00.175305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.111 [2024-07-25 15:03:00.175333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.111 [2024-07-25 15:03:00.175361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.111 [2024-07-25 15:03:00.175391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.111 [2024-07-25 15:03:00.175425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.111 [2024-07-25 15:03:00.175453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.111 [2024-07-25 15:03:00.175481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.111 [2024-07-25 15:03:00.175506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.111 [2024-07-25 
15:03:00.175550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.111 [2024-07-25 15:03:00.175578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.111 [2024-07-25 15:03:00.175613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.111 [2024-07-25 15:03:00.175640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.111 [2024-07-25 15:03:00.175670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.111 [2024-07-25 15:03:00.175702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.111 [2024-07-25 15:03:00.175733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.111 [2024-07-25 15:03:00.175760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.111 [2024-07-25 15:03:00.175788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.111 [2024-07-25 15:03:00.175815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.111 [2024-07-25 15:03:00.175842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.111 [2024-07-25 15:03:00.175869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.111 [2024-07-25 15:03:00.175896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.112 [2024-07-25 15:03:00.175926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.112 [2024-07-25 15:03:00.175954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.112 [2024-07-25 15:03:00.175982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.112 [2024-07-25 15:03:00.176008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.112 [2024-07-25 15:03:00.176037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.112 [2024-07-25 15:03:00.176063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.112 [2024-07-25 15:03:00.176090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.112 [2024-07-25 15:03:00.176118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.112 [2024-07-25 15:03:00.176146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.112 [2024-07-25 15:03:00.176173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.112 [2024-07-25 15:03:00.176196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.112 [2024-07-25 15:03:00.176229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.112 [2024-07-25 15:03:00.176256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.112 [2024-07-25 15:03:00.176283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.112 [2024-07-25 15:03:00.176311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.112 [2024-07-25 15:03:00.176341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.112 
[2024-07-25 15:03:00.176369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.112 [2024-07-25 15:03:00.176396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.112 [2024-07-25 15:03:00.176422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.112 [2024-07-25 15:03:00.176450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.112 [2024-07-25 15:03:00.176478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.112 [2024-07-25 15:03:00.176507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.112 [2024-07-25 15:03:00.176536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.112 [2024-07-25 15:03:00.176910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.112 [2024-07-25 15:03:00.176941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.112 [2024-07-25 15:03:00.176971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.112 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:07:08.112 [2024-07-25 15:03:00.177001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.112 [2024-07-25 15:03:00.177028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.112 [2024-07-25 15:03:00.177053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.112 [2024-07-25 15:03:00.177081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:07:08.112 [2024-07-25 15:03:00.177111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.112 [2024-07-25 15:03:00.177137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.112 [2024-07-25 15:03:00.177166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.112 [2024-07-25 15:03:00.177193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.112 [2024-07-25 15:03:00.177223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.112 [2024-07-25 15:03:00.177250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.112 [2024-07-25 15:03:00.177278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.112 [2024-07-25 15:03:00.177306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.112 [2024-07-25 15:03:00.177338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.112 [2024-07-25 15:03:00.177365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.112 [2024-07-25 15:03:00.177392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.112 [2024-07-25 15:03:00.177420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.112 [2024-07-25 15:03:00.177459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.112 [2024-07-25 15:03:00.177486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.112 [2024-07-25 15:03:00.177533] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.112 [2024-07-25 15:03:00.177561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.112 [2024-07-25 15:03:00.177595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.112 [2024-07-25 15:03:00.177624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.112 [2024-07-25 15:03:00.177651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.112 [2024-07-25 15:03:00.177678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.112 [2024-07-25 15:03:00.177708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.112 [2024-07-25 15:03:00.177735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.112 [2024-07-25 15:03:00.177763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.112 [2024-07-25 15:03:00.177790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.112 [2024-07-25 15:03:00.177817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.112 [2024-07-25 15:03:00.177845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.112 [2024-07-25 15:03:00.177872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.112 [2024-07-25 15:03:00.177902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.112 [2024-07-25 15:03:00.177929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:08.112 [2024-07-25 15:03:00.177952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.112 [2024-07-25 15:03:00.177981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.112 [2024-07-25 15:03:00.178013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.112 [2024-07-25 15:03:00.178041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.112 [2024-07-25 15:03:00.178068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.112 [2024-07-25 15:03:00.178094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.112 [2024-07-25 15:03:00.178120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.112 [2024-07-25 15:03:00.178146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.112 [2024-07-25 15:03:00.178178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.112 [2024-07-25 15:03:00.178205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.112 [2024-07-25 15:03:00.178231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.112 [2024-07-25 15:03:00.178257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.112 [2024-07-25 15:03:00.178283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.112 [2024-07-25 15:03:00.178315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.112 [2024-07-25 15:03:00.178337] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.112 [2024-07-25 15:03:00.178366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.112 [2024-07-25 15:03:00.178393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.112 [2024-07-25 15:03:00.178425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.112 [2024-07-25 15:03:00.178449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.112 [2024-07-25 15:03:00.178478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.112 [2024-07-25 15:03:00.178506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.113 [2024-07-25 15:03:00.178536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.113 [2024-07-25 15:03:00.178566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.113 [2024-07-25 15:03:00.178594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.113 [2024-07-25 15:03:00.178623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.113 [2024-07-25 15:03:00.178654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.113 [2024-07-25 15:03:00.178680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.113 [2024-07-25 15:03:00.179136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.113 [2024-07-25 15:03:00.179171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:08.113 [2024-07-25 15:03:00.179199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.113 [2024-07-25 15:03:00.179242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.113 [2024-07-25 15:03:00.179268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.113 [2024-07-25 15:03:00.179295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.113 [2024-07-25 15:03:00.179322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.113 [2024-07-25 15:03:00.179352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.113 [2024-07-25 15:03:00.179378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.113 [2024-07-25 15:03:00.179403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.113 [2024-07-25 15:03:00.179431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.113 [2024-07-25 15:03:00.179457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.113 [2024-07-25 15:03:00.179484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.113 [2024-07-25 15:03:00.179509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.113 [2024-07-25 15:03:00.179537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.113 [2024-07-25 15:03:00.179563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.113 [2024-07-25 
15:03:00.179586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.113 
[2024-07-25
15:03:00.190009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.410 [2024-07-25 15:03:00.190037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.410 [2024-07-25 15:03:00.190068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.410 [2024-07-25 15:03:00.190093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.410 [2024-07-25 15:03:00.190122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.410 [2024-07-25 15:03:00.190149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.410 [2024-07-25 15:03:00.190172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.410 [2024-07-25 15:03:00.190199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.410 [2024-07-25 15:03:00.190234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.410 [2024-07-25 15:03:00.190261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.410 [2024-07-25 15:03:00.190289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.410 [2024-07-25 15:03:00.190315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.410 [2024-07-25 15:03:00.190343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.410 [2024-07-25 15:03:00.190370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.410 [2024-07-25 15:03:00.190402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.410 [2024-07-25 15:03:00.190431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.410 [2024-07-25 15:03:00.190459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.410 [2024-07-25 15:03:00.190487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.410 [2024-07-25 15:03:00.190518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.410 [2024-07-25 15:03:00.190545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.410 [2024-07-25 15:03:00.190573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.410 [2024-07-25 15:03:00.190600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.410 [2024-07-25 15:03:00.190627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.410 [2024-07-25 15:03:00.190662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.410 [2024-07-25 15:03:00.190693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.410 [2024-07-25 15:03:00.190721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.410 [2024-07-25 15:03:00.190749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.410 [2024-07-25 15:03:00.190776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.410 [2024-07-25 15:03:00.190815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.410 
[2024-07-25 15:03:00.190840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.410 [2024-07-25 15:03:00.190864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.410 [2024-07-25 15:03:00.190893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.410 [2024-07-25 15:03:00.190919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.410 [2024-07-25 15:03:00.190947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.410 [2024-07-25 15:03:00.190977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.410 [2024-07-25 15:03:00.191009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.410 [2024-07-25 15:03:00.191036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.410 [2024-07-25 15:03:00.191063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.410 [2024-07-25 15:03:00.191101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.410 [2024-07-25 15:03:00.191133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.410 [2024-07-25 15:03:00.191163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.410 [2024-07-25 15:03:00.191189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.410 [2024-07-25 15:03:00.191217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.410 [2024-07-25 15:03:00.191244] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.410 [2024-07-25 15:03:00.191269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.410 [2024-07-25 15:03:00.191297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.410 [2024-07-25 15:03:00.191326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.410 [2024-07-25 15:03:00.191355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.410 [2024-07-25 15:03:00.191397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.410 [2024-07-25 15:03:00.191421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.410 [2024-07-25 15:03:00.191450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.410 [2024-07-25 15:03:00.191477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.410 [2024-07-25 15:03:00.191505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.410 [2024-07-25 15:03:00.191540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.410 [2024-07-25 15:03:00.191578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.410 [2024-07-25 15:03:00.191612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.410 [2024-07-25 15:03:00.191635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.410 [2024-07-25 15:03:00.191661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:08.410 [2024-07-25 15:03:00.191689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.410 [2024-07-25 15:03:00.191721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.410 [2024-07-25 15:03:00.191746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.410 [2024-07-25 15:03:00.191773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.410 [2024-07-25 15:03:00.191799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.410 [2024-07-25 15:03:00.191949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.410 [2024-07-25 15:03:00.191976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.410 [2024-07-25 15:03:00.192006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.410 [2024-07-25 15:03:00.192035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.410 [2024-07-25 15:03:00.192067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.410 [2024-07-25 15:03:00.192095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.410 [2024-07-25 15:03:00.192125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.410 [2024-07-25 15:03:00.192152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.410 [2024-07-25 15:03:00.192180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.410 [2024-07-25 15:03:00.192209] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.410 [2024-07-25 15:03:00.192245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.410 [2024-07-25 15:03:00.192272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.410 [2024-07-25 15:03:00.192299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.410 [2024-07-25 15:03:00.192323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.410 [2024-07-25 15:03:00.192350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.410 [2024-07-25 15:03:00.192377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.410 [2024-07-25 15:03:00.192605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.410 [2024-07-25 15:03:00.192634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.410 [2024-07-25 15:03:00.192681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.410 [2024-07-25 15:03:00.192710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.410 [2024-07-25 15:03:00.192749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.410 [2024-07-25 15:03:00.192778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.411 [2024-07-25 15:03:00.192812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.411 [2024-07-25 15:03:00.192840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:08.411 [2024-07-25 15:03:00.192872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.411 [2024-07-25 15:03:00.192897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.411 [2024-07-25 15:03:00.192926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.411 [2024-07-25 15:03:00.192953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.411 [2024-07-25 15:03:00.192980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.411 [2024-07-25 15:03:00.193008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.411 [2024-07-25 15:03:00.193035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.411 [2024-07-25 15:03:00.193061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.411 [2024-07-25 15:03:00.193092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.411 [2024-07-25 15:03:00.193115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.411 [2024-07-25 15:03:00.193142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.411 [2024-07-25 15:03:00.193173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.411 [2024-07-25 15:03:00.193205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.411 [2024-07-25 15:03:00.193234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.411 [2024-07-25 
15:03:00.193264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.411 [2024-07-25 15:03:00.193292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.411 [2024-07-25 15:03:00.193320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.411 [2024-07-25 15:03:00.193346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.411 [2024-07-25 15:03:00.193372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.411 [2024-07-25 15:03:00.193399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.411 [2024-07-25 15:03:00.193432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.411 [2024-07-25 15:03:00.193456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.411 [2024-07-25 15:03:00.193484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.411 [2024-07-25 15:03:00.193514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.411 [2024-07-25 15:03:00.193542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.411 [2024-07-25 15:03:00.193571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.411 [2024-07-25 15:03:00.193598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.411 [2024-07-25 15:03:00.193626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.411 [2024-07-25 15:03:00.193654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.411 [2024-07-25 15:03:00.193681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.411 [2024-07-25 15:03:00.193707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.411 [2024-07-25 15:03:00.193733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.411 [2024-07-25 15:03:00.193758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.411 [2024-07-25 15:03:00.193785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.411 [2024-07-25 15:03:00.193811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.411 [2024-07-25 15:03:00.193857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.411 [2024-07-25 15:03:00.193886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.411 [2024-07-25 15:03:00.193923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.411 [2024-07-25 15:03:00.193948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.411 [2024-07-25 15:03:00.194326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.411 [2024-07-25 15:03:00.194356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.411 [2024-07-25 15:03:00.194391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.411 [2024-07-25 15:03:00.194419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.411 
[2024-07-25 15:03:00.194480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.411 [2024-07-25 15:03:00.194508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.411 [2024-07-25 15:03:00.194554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.411 [2024-07-25 15:03:00.194582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.411 [2024-07-25 15:03:00.194610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.411 [2024-07-25 15:03:00.194637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.411 [2024-07-25 15:03:00.194662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.411 [2024-07-25 15:03:00.194689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.411 [2024-07-25 15:03:00.194719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.411 [2024-07-25 15:03:00.194749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.411 [2024-07-25 15:03:00.194775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.411 [2024-07-25 15:03:00.194801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.411 [2024-07-25 15:03:00.194828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.411 [2024-07-25 15:03:00.194854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.411 [2024-07-25 15:03:00.194879] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.411 [2024-07-25 15:03:00.194906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.411 [2024-07-25 15:03:00.194943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.411 [2024-07-25 15:03:00.194967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.411 [2024-07-25 15:03:00.194994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.411 [2024-07-25 15:03:00.195020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.411 [2024-07-25 15:03:00.195045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.411 [2024-07-25 15:03:00.195071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.411 [2024-07-25 15:03:00.195099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.411 [2024-07-25 15:03:00.195128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.411 [2024-07-25 15:03:00.195155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.411 [2024-07-25 15:03:00.195186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.411 [2024-07-25 15:03:00.195216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.411 [2024-07-25 15:03:00.195244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.411 [2024-07-25 15:03:00.195272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:08.411 [2024-07-25 15:03:00.195301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.411 [2024-07-25 15:03:00.195330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.411 [2024-07-25 15:03:00.195356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.411 [2024-07-25 15:03:00.195384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.411 [2024-07-25 15:03:00.195429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.411 [2024-07-25 15:03:00.195456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.411 [2024-07-25 15:03:00.195484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.411 [2024-07-25 15:03:00.195513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.411 [2024-07-25 15:03:00.195543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.411 [2024-07-25 15:03:00.195572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.412 [2024-07-25 15:03:00.195608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.412 [2024-07-25 15:03:00.195634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.412 [2024-07-25 15:03:00.195663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.412 [2024-07-25 15:03:00.195689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.412 [2024-07-25 15:03:00.195719] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.412 [2024-07-25 15:03:00.195745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.412 [2024-07-25 15:03:00.195771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.412 [2024-07-25 15:03:00.195797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.412 [2024-07-25 15:03:00.195824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.412 [2024-07-25 15:03:00.195847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.412 [2024-07-25 15:03:00.195874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.412 [2024-07-25 15:03:00.195902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.412 [2024-07-25 15:03:00.195928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.412 [2024-07-25 15:03:00.195955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.412 [2024-07-25 15:03:00.195988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.412 [2024-07-25 15:03:00.196016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.412 [2024-07-25 15:03:00.196046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.412 [2024-07-25 15:03:00.196075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.412 [2024-07-25 15:03:00.196102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:08.412 [2024-07-25 15:03:00.196130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
block size 512 > SGL length 1 00:07:08.414 [2024-07-25 15:03:00.206627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.414 [2024-07-25 15:03:00.206659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.414 [2024-07-25 15:03:00.206683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.414 [2024-07-25 15:03:00.206711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.414 [2024-07-25 15:03:00.206739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.414 [2024-07-25 15:03:00.206869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.414 [2024-07-25 15:03:00.206902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.414 [2024-07-25 15:03:00.206927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.414 [2024-07-25 15:03:00.206954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.414 [2024-07-25 15:03:00.206981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.414 [2024-07-25 15:03:00.207008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.414 [2024-07-25 15:03:00.207034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.414 [2024-07-25 15:03:00.207068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.414 [2024-07-25 15:03:00.207095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 [2024-07-25 
15:03:00.207123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 [2024-07-25 15:03:00.207151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 [2024-07-25 15:03:00.207428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 [2024-07-25 15:03:00.207459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 [2024-07-25 15:03:00.207484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 [2024-07-25 15:03:00.207511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 [2024-07-25 15:03:00.207539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 [2024-07-25 15:03:00.207572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 [2024-07-25 15:03:00.207598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 [2024-07-25 15:03:00.207626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 [2024-07-25 15:03:00.207651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 [2024-07-25 15:03:00.207677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 [2024-07-25 15:03:00.207702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 [2024-07-25 15:03:00.207730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 [2024-07-25 15:03:00.207762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 [2024-07-25 15:03:00.207791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 [2024-07-25 15:03:00.207818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 [2024-07-25 15:03:00.207847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 [2024-07-25 15:03:00.207887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 [2024-07-25 15:03:00.207915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 [2024-07-25 15:03:00.207943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 [2024-07-25 15:03:00.207968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 [2024-07-25 15:03:00.207995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 [2024-07-25 15:03:00.208026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 [2024-07-25 15:03:00.208052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 [2024-07-25 15:03:00.208083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 [2024-07-25 15:03:00.208111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 [2024-07-25 15:03:00.208137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 [2024-07-25 15:03:00.208163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 
[2024-07-25 15:03:00.208192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 [2024-07-25 15:03:00.208226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 [2024-07-25 15:03:00.208253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 [2024-07-25 15:03:00.208278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 [2024-07-25 15:03:00.208302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 [2024-07-25 15:03:00.208327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 [2024-07-25 15:03:00.208352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 [2024-07-25 15:03:00.208377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 [2024-07-25 15:03:00.208405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 [2024-07-25 15:03:00.208428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 [2024-07-25 15:03:00.208454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 [2024-07-25 15:03:00.208482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 [2024-07-25 15:03:00.208510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 [2024-07-25 15:03:00.208546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 [2024-07-25 15:03:00.208573] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 [2024-07-25 15:03:00.208600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 [2024-07-25 15:03:00.208627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 [2024-07-25 15:03:00.208652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 [2024-07-25 15:03:00.208678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 [2024-07-25 15:03:00.208704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 [2024-07-25 15:03:00.208727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 [2024-07-25 15:03:00.208757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 [2024-07-25 15:03:00.208785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 [2024-07-25 15:03:00.208810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 [2024-07-25 15:03:00.208840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 [2024-07-25 15:03:00.209229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 [2024-07-25 15:03:00.209263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 [2024-07-25 15:03:00.209290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 [2024-07-25 15:03:00.209322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:08.415 [2024-07-25 15:03:00.209348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 [2024-07-25 15:03:00.209374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 [2024-07-25 15:03:00.209402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 [2024-07-25 15:03:00.209431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 [2024-07-25 15:03:00.209464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 [2024-07-25 15:03:00.209493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 [2024-07-25 15:03:00.209522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 [2024-07-25 15:03:00.209558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 [2024-07-25 15:03:00.209583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 [2024-07-25 15:03:00.209611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 [2024-07-25 15:03:00.209641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 [2024-07-25 15:03:00.209667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 [2024-07-25 15:03:00.209693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 [2024-07-25 15:03:00.209719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 [2024-07-25 15:03:00.209746] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 [2024-07-25 15:03:00.209772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 [2024-07-25 15:03:00.209802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 [2024-07-25 15:03:00.209829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 [2024-07-25 15:03:00.209855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 [2024-07-25 15:03:00.209880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 [2024-07-25 15:03:00.209908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 [2024-07-25 15:03:00.209934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 [2024-07-25 15:03:00.209961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 [2024-07-25 15:03:00.209987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 [2024-07-25 15:03:00.210016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 [2024-07-25 15:03:00.210045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 [2024-07-25 15:03:00.210073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 [2024-07-25 15:03:00.210105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 [2024-07-25 15:03:00.210131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:08.415 [2024-07-25 15:03:00.210166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 [2024-07-25 15:03:00.210192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 [2024-07-25 15:03:00.210223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 [2024-07-25 15:03:00.210251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 [2024-07-25 15:03:00.210276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 [2024-07-25 15:03:00.210300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 [2024-07-25 15:03:00.210327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 [2024-07-25 15:03:00.210353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 [2024-07-25 15:03:00.210381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 [2024-07-25 15:03:00.210409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 [2024-07-25 15:03:00.210438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 [2024-07-25 15:03:00.210469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 [2024-07-25 15:03:00.210497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 [2024-07-25 15:03:00.210528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 [2024-07-25 
15:03:00.210554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 [2024-07-25 15:03:00.210585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 [2024-07-25 15:03:00.210614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 [2024-07-25 15:03:00.210640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 [2024-07-25 15:03:00.210667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 [2024-07-25 15:03:00.210694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 [2024-07-25 15:03:00.210719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 [2024-07-25 15:03:00.210754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 [2024-07-25 15:03:00.210783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 [2024-07-25 15:03:00.210826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 [2024-07-25 15:03:00.210852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 [2024-07-25 15:03:00.210879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 [2024-07-25 15:03:00.210905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 [2024-07-25 15:03:00.210932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 [2024-07-25 15:03:00.210962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 [2024-07-25 15:03:00.210987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 [2024-07-25 15:03:00.211014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 [2024-07-25 15:03:00.211137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 [2024-07-25 15:03:00.211166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 [2024-07-25 15:03:00.211197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 [2024-07-25 15:03:00.211231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 [2024-07-25 15:03:00.211261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 [2024-07-25 15:03:00.211294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 [2024-07-25 15:03:00.211319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 [2024-07-25 15:03:00.211344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 [2024-07-25 15:03:00.211371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 [2024-07-25 15:03:00.211397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 Message suppressed 999 times: [2024-07-25 15:03:00.211428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 Read completed with error (sct=0, sc=15) 00:07:08.415 [2024-07-25 15:03:00.211649] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 [2024-07-25 15:03:00.211681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 [2024-07-25 15:03:00.211709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 [2024-07-25 15:03:00.211743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 [2024-07-25 15:03:00.211771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 [2024-07-25 15:03:00.211799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 [2024-07-25 15:03:00.211826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 [2024-07-25 15:03:00.211852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 [2024-07-25 15:03:00.211880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 [2024-07-25 15:03:00.211908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 [2024-07-25 15:03:00.211935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 [2024-07-25 15:03:00.211966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 [2024-07-25 15:03:00.211993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 [2024-07-25 15:03:00.212015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 [2024-07-25 15:03:00.212045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:08.415 [2024-07-25 15:03:00.212072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 [2024-07-25 15:03:00.212099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 [2024-07-25 15:03:00.212129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 [2024-07-25 15:03:00.212157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 [2024-07-25 15:03:00.212183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 [2024-07-25 15:03:00.212216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 [2024-07-25 15:03:00.212246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 [2024-07-25 15:03:00.212276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 [2024-07-25 15:03:00.212304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 [2024-07-25 15:03:00.212351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 [2024-07-25 15:03:00.212377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 [2024-07-25 15:03:00.212405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 [2024-07-25 15:03:00.212434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 [2024-07-25 15:03:00.212465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 [2024-07-25 15:03:00.212491] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.415 [2024-07-25 15:03:00.212520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.416 [2024-07-25 15:03:00.212549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.416 [2024-07-25 15:03:00.212580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.416 [2024-07-25 15:03:00.212609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.416 [2024-07-25 15:03:00.212641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.416 [2024-07-25 15:03:00.212667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.416 [2024-07-25 15:03:00.212694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.416 [2024-07-25 15:03:00.212720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.416 [2024-07-25 15:03:00.212747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.416 [2024-07-25 15:03:00.212777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.416 [2024-07-25 15:03:00.212802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.416 [2024-07-25 15:03:00.212830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.416 [2024-07-25 15:03:00.212861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.416 [2024-07-25 15:03:00.212892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:08.416 [2024-07-25 15:03:00.212919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.418 [2024-07-25 15:03:00.223500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:08.418 [2024-07-25 15:03:00.223524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.418 [2024-07-25 15:03:00.223548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.418 [2024-07-25 15:03:00.223577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.418 [2024-07-25 15:03:00.223610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.418 [2024-07-25 15:03:00.223640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.418 [2024-07-25 15:03:00.223677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.418 [2024-07-25 15:03:00.223711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.418 [2024-07-25 15:03:00.223736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.418 [2024-07-25 15:03:00.223761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.418 [2024-07-25 15:03:00.223790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.418 [2024-07-25 15:03:00.223818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.418 [2024-07-25 15:03:00.223847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.418 [2024-07-25 15:03:00.223879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.418 [2024-07-25 15:03:00.223910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.418 [2024-07-25 
15:03:00.223937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.418 [2024-07-25 15:03:00.223969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.418 [2024-07-25 15:03:00.223995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.418 [2024-07-25 15:03:00.224032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.418 [2024-07-25 15:03:00.224066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.418 [2024-07-25 15:03:00.224092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.418 [2024-07-25 15:03:00.224119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.418 [2024-07-25 15:03:00.224147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.418 [2024-07-25 15:03:00.224170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.418 [2024-07-25 15:03:00.224197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.418 [2024-07-25 15:03:00.224231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.418 [2024-07-25 15:03:00.224406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.418 [2024-07-25 15:03:00.224439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.418 [2024-07-25 15:03:00.224468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.418 [2024-07-25 15:03:00.224494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.418 [2024-07-25 15:03:00.224520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.418 [2024-07-25 15:03:00.224546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.418 [2024-07-25 15:03:00.224573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.418 [2024-07-25 15:03:00.224601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.418 [2024-07-25 15:03:00.224627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.418 [2024-07-25 15:03:00.224652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.418 [2024-07-25 15:03:00.224881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.418 [2024-07-25 15:03:00.224911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.418 [2024-07-25 15:03:00.224966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.418 [2024-07-25 15:03:00.224994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.418 [2024-07-25 15:03:00.225024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.418 [2024-07-25 15:03:00.225049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.419 [2024-07-25 15:03:00.225076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.419 [2024-07-25 15:03:00.225112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.419 
[2024-07-25 15:03:00.225140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.419 [2024-07-25 15:03:00.225170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.419 [2024-07-25 15:03:00.225195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.419 [2024-07-25 15:03:00.225225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.419 [2024-07-25 15:03:00.225266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.419 [2024-07-25 15:03:00.225293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.419 [2024-07-25 15:03:00.225321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.419 [2024-07-25 15:03:00.225351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.419 [2024-07-25 15:03:00.225379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.419 [2024-07-25 15:03:00.225409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.419 [2024-07-25 15:03:00.225434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.419 [2024-07-25 15:03:00.225471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.419 [2024-07-25 15:03:00.225496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.419 [2024-07-25 15:03:00.225530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.419 [2024-07-25 15:03:00.225558] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.419 [2024-07-25 15:03:00.225587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.419 [2024-07-25 15:03:00.225615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.419 [2024-07-25 15:03:00.225668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.419 [2024-07-25 15:03:00.225694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.419 [2024-07-25 15:03:00.225736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.419 [2024-07-25 15:03:00.225763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.419 [2024-07-25 15:03:00.225790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.419 [2024-07-25 15:03:00.225819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.419 [2024-07-25 15:03:00.225849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.419 [2024-07-25 15:03:00.225873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.419 [2024-07-25 15:03:00.225899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.419 [2024-07-25 15:03:00.225926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.419 [2024-07-25 15:03:00.225955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.419 [2024-07-25 15:03:00.225981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:08.419 [2024-07-25 15:03:00.226006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.419 [2024-07-25 15:03:00.226033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.419 [2024-07-25 15:03:00.226060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.419 [2024-07-25 15:03:00.226085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.419 [2024-07-25 15:03:00.226113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.419 [2024-07-25 15:03:00.226136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.419 [2024-07-25 15:03:00.226167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.419 [2024-07-25 15:03:00.226197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.419 [2024-07-25 15:03:00.226235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.419 [2024-07-25 15:03:00.226266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.419 [2024-07-25 15:03:00.226290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.419 [2024-07-25 15:03:00.226317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.419 [2024-07-25 15:03:00.226345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.419 [2024-07-25 15:03:00.226371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.419 [2024-07-25 15:03:00.226396] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.419 [2024-07-25 15:03:00.226422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.419 [2024-07-25 15:03:00.226780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.419 [2024-07-25 15:03:00.226810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.419 [2024-07-25 15:03:00.226840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.419 [2024-07-25 15:03:00.226869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.419 [2024-07-25 15:03:00.226899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.419 [2024-07-25 15:03:00.226926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.419 [2024-07-25 15:03:00.226953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.419 [2024-07-25 15:03:00.226981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.419 [2024-07-25 15:03:00.227009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.419 [2024-07-25 15:03:00.227036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.419 [2024-07-25 15:03:00.227066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.419 [2024-07-25 15:03:00.227094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.419 [2024-07-25 15:03:00.227125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:08.419 [2024-07-25 15:03:00.227152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.419 [2024-07-25 15:03:00.227179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.419 [2024-07-25 15:03:00.227209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.419 [2024-07-25 15:03:00.227236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.419 [2024-07-25 15:03:00.227264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.419 [2024-07-25 15:03:00.227302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.419 [2024-07-25 15:03:00.227332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.419 [2024-07-25 15:03:00.227365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.419 [2024-07-25 15:03:00.227390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.419 [2024-07-25 15:03:00.227418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.419 [2024-07-25 15:03:00.227467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.419 [2024-07-25 15:03:00.227495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.419 [2024-07-25 15:03:00.227543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.419 [2024-07-25 15:03:00.227569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.419 [2024-07-25 
15:03:00.227597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.419 [2024-07-25 15:03:00.227624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.419 [2024-07-25 15:03:00.227654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.419 [2024-07-25 15:03:00.227685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.419 [2024-07-25 15:03:00.227713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.419 [2024-07-25 15:03:00.227754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.419 [2024-07-25 15:03:00.227782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.419 [2024-07-25 15:03:00.227828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.419 [2024-07-25 15:03:00.227854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.419 [2024-07-25 15:03:00.227880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.419 [2024-07-25 15:03:00.227907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.420 [2024-07-25 15:03:00.227942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.420 [2024-07-25 15:03:00.227967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.420 [2024-07-25 15:03:00.227992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.420 [2024-07-25 15:03:00.228022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.420 [2024-07-25 15:03:00.228049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.420 [2024-07-25 15:03:00.228075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.420 [2024-07-25 15:03:00.228105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.420 [2024-07-25 15:03:00.228132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.420 [2024-07-25 15:03:00.228157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.420 [2024-07-25 15:03:00.228185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.420 [2024-07-25 15:03:00.228216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.420 [2024-07-25 15:03:00.228243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.420 [2024-07-25 15:03:00.228271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.420 [2024-07-25 15:03:00.228298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.420 [2024-07-25 15:03:00.228321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.420 [2024-07-25 15:03:00.228351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.420 [2024-07-25 15:03:00.228382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.420 [2024-07-25 15:03:00.228409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.420 
[2024-07-25 15:03:00.228443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.420 [2024-07-25 15:03:00.228475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.420 [2024-07-25 15:03:00.228502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.420 [2024-07-25 15:03:00.228530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.420 [2024-07-25 15:03:00.228554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.420 [2024-07-25 15:03:00.228582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.420 [2024-07-25 15:03:00.228608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.420 [2024-07-25 15:03:00.228643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.420 [2024-07-25 15:03:00.228825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.420 [2024-07-25 15:03:00.228849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.420 [2024-07-25 15:03:00.228876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.420 [2024-07-25 15:03:00.228903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.420 [2024-07-25 15:03:00.228929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.420 [2024-07-25 15:03:00.228961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.420 [2024-07-25 15:03:00.228992] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.420 [2024-07-25 15:03:00.229020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.420 [2024-07-25 15:03:00.229052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.420 [2024-07-25 15:03:00.229082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.420 [2024-07-25 15:03:00.229332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.420 [2024-07-25 15:03:00.229362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.420 [2024-07-25 15:03:00.229392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.420 [2024-07-25 15:03:00.229419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.420 [2024-07-25 15:03:00.229451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.420 [2024-07-25 15:03:00.229480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.420 [2024-07-25 15:03:00.229513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.420 [2024-07-25 15:03:00.229540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.420 [2024-07-25 15:03:00.229566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.420 [2024-07-25 15:03:00.229593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.420 [2024-07-25 15:03:00.229622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:08.420 [2024-07-25 15:03:00.229648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.420 [2024-07-25 15:03:00.229672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.420 [2024-07-25 15:03:00.229701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.420 [2024-07-25 15:03:00.229729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.420 [2024-07-25 15:03:00.229759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.420 [2024-07-25 15:03:00.229787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.420 [2024-07-25 15:03:00.229811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.420 [2024-07-25 15:03:00.229837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.420 [2024-07-25 15:03:00.229865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.420 [2024-07-25 15:03:00.229891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.420 [2024-07-25 15:03:00.229927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.420 [2024-07-25 15:03:00.229958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.420 [2024-07-25 15:03:00.229982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.420 [2024-07-25 15:03:00.230005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.420 [2024-07-25 15:03:00.230033] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.420 [2024-07-25 15:03:00.230061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.424 (last message repeated for each read command, timestamps 2024-07-25 15:03:00.230086 through 15:03:00.239968)
> SGL length 1 00:07:08.424 [2024-07-25 15:03:00.239997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.424 [2024-07-25 15:03:00.240025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.424 [2024-07-25 15:03:00.240054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.424 [2024-07-25 15:03:00.240084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.424 [2024-07-25 15:03:00.240119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.424 [2024-07-25 15:03:00.240148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.424 [2024-07-25 15:03:00.240177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.424 [2024-07-25 15:03:00.240205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.424 [2024-07-25 15:03:00.240232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.424 [2024-07-25 15:03:00.240257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.424 [2024-07-25 15:03:00.240281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.424 [2024-07-25 15:03:00.240307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.424 [2024-07-25 15:03:00.240330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.424 [2024-07-25 15:03:00.240359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.424 [2024-07-25 15:03:00.240387] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.424 [2024-07-25 15:03:00.240413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.424 [2024-07-25 15:03:00.240439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.424 [2024-07-25 15:03:00.240465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.424 [2024-07-25 15:03:00.240494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.424 [2024-07-25 15:03:00.240527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.424 [2024-07-25 15:03:00.240556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.424 [2024-07-25 15:03:00.240583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.424 [2024-07-25 15:03:00.240614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.424 [2024-07-25 15:03:00.240640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.424 [2024-07-25 15:03:00.240666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.424 [2024-07-25 15:03:00.240693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.424 [2024-07-25 15:03:00.240722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.424 [2024-07-25 15:03:00.240750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.424 [2024-07-25 15:03:00.240780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:08.424 [2024-07-25 15:03:00.240813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.424 [2024-07-25 15:03:00.240841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.424 [2024-07-25 15:03:00.240880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.424 [2024-07-25 15:03:00.240907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.424 [2024-07-25 15:03:00.240942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.424 [2024-07-25 15:03:00.240972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.424 [2024-07-25 15:03:00.240999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.424 [2024-07-25 15:03:00.241027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.424 [2024-07-25 15:03:00.241053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.424 [2024-07-25 15:03:00.241082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.424 [2024-07-25 15:03:00.241114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.424 [2024-07-25 15:03:00.241140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.424 [2024-07-25 15:03:00.241165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.424 [2024-07-25 15:03:00.241192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.424 [2024-07-25 
15:03:00.241226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.424 [2024-07-25 15:03:00.241252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.424 [2024-07-25 15:03:00.241278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.424 [2024-07-25 15:03:00.241306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.424 [2024-07-25 15:03:00.241341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.424 [2024-07-25 15:03:00.241370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.424 [2024-07-25 15:03:00.241403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.424 [2024-07-25 15:03:00.241427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.424 [2024-07-25 15:03:00.241457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.424 [2024-07-25 15:03:00.241485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.424 [2024-07-25 15:03:00.241517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.424 [2024-07-25 15:03:00.241550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.425 [2024-07-25 15:03:00.241675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.425 [2024-07-25 15:03:00.241708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.425 [2024-07-25 15:03:00.241741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.425 [2024-07-25 15:03:00.241767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.425 [2024-07-25 15:03:00.241792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.425 [2024-07-25 15:03:00.241815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.425 [2024-07-25 15:03:00.241843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.425 [2024-07-25 15:03:00.241869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.425 [2024-07-25 15:03:00.241896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.425 [2024-07-25 15:03:00.241924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.425 [2024-07-25 15:03:00.241951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.425 [2024-07-25 15:03:00.241976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.425 [2024-07-25 15:03:00.242001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.425 [2024-07-25 15:03:00.242027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.425 [2024-07-25 15:03:00.242055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.425 [2024-07-25 15:03:00.242084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.425 [2024-07-25 15:03:00.242116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.425 
[2024-07-25 15:03:00.242142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.425 [2024-07-25 15:03:00.242164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.425 [2024-07-25 15:03:00.242191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.425 [2024-07-25 15:03:00.242225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.425 [2024-07-25 15:03:00.242255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.425 [2024-07-25 15:03:00.242281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.425 [2024-07-25 15:03:00.242308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.425 [2024-07-25 15:03:00.242338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.425 [2024-07-25 15:03:00.242364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.425 [2024-07-25 15:03:00.242393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.425 [2024-07-25 15:03:00.242420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.425 [2024-07-25 15:03:00.242452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.425 [2024-07-25 15:03:00.242645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.425 [2024-07-25 15:03:00.242676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.425 [2024-07-25 15:03:00.242700] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.425 [2024-07-25 15:03:00.242724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.425 [2024-07-25 15:03:00.242750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.425 [2024-07-25 15:03:00.242776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.425 [2024-07-25 15:03:00.242805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.425 [2024-07-25 15:03:00.242831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.425 [2024-07-25 15:03:00.242856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.425 [2024-07-25 15:03:00.242886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.425 [2024-07-25 15:03:00.242915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.425 [2024-07-25 15:03:00.242942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.425 [2024-07-25 15:03:00.242970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.425 [2024-07-25 15:03:00.242997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.425 [2024-07-25 15:03:00.243043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.425 [2024-07-25 15:03:00.243068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.425 [2024-07-25 15:03:00.243119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:08.425 [2024-07-25 15:03:00.243145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.425 [2024-07-25 15:03:00.243172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.425 [2024-07-25 15:03:00.243198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.425 [2024-07-25 15:03:00.243229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.425 [2024-07-25 15:03:00.243256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.425 [2024-07-25 15:03:00.243285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.425 [2024-07-25 15:03:00.243312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.425 [2024-07-25 15:03:00.243338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.425 [2024-07-25 15:03:00.243364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.425 [2024-07-25 15:03:00.243394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.425 [2024-07-25 15:03:00.243421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.425 [2024-07-25 15:03:00.243451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.425 [2024-07-25 15:03:00.243477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.425 [2024-07-25 15:03:00.243504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.425 [2024-07-25 15:03:00.243533] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.425 [2024-07-25 15:03:00.243560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.425 [2024-07-25 15:03:00.243604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.425 [2024-07-25 15:03:00.243999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.425 [2024-07-25 15:03:00.244026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.425 [2024-07-25 15:03:00.244055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.425 [2024-07-25 15:03:00.244082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.425 [2024-07-25 15:03:00.244108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.425 [2024-07-25 15:03:00.244142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.425 [2024-07-25 15:03:00.244171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.425 [2024-07-25 15:03:00.244204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.425 [2024-07-25 15:03:00.244229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.425 [2024-07-25 15:03:00.244251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.425 [2024-07-25 15:03:00.244274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.425 [2024-07-25 15:03:00.244296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:08.425 [2024-07-25 15:03:00.244318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.425 [2024-07-25 15:03:00.244341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.425 [2024-07-25 15:03:00.244363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.426 [2024-07-25 15:03:00.244392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.426 [2024-07-25 15:03:00.244419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.426 [2024-07-25 15:03:00.244448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.426 [2024-07-25 15:03:00.244477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.426 [2024-07-25 15:03:00.244504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.426 [2024-07-25 15:03:00.244534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.426 [2024-07-25 15:03:00.244562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.426 [2024-07-25 15:03:00.244593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.426 [2024-07-25 15:03:00.244619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.426 [2024-07-25 15:03:00.244647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.426 [2024-07-25 15:03:00.244674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.426 [2024-07-25 
15:03:00.244704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.426 [2024-07-25 15:03:00.244729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.426 [2024-07-25 15:03:00.244757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.426 [2024-07-25 15:03:00.244792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.426 [2024-07-25 15:03:00.244820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.426 [2024-07-25 15:03:00.244851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.426 [2024-07-25 15:03:00.244880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.426 [2024-07-25 15:03:00.244906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.426 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:07:08.426 [2024-07-25 15:03:00.245112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.426 [2024-07-25 15:03:00.245143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.426 [2024-07-25 15:03:00.245170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.426 [2024-07-25 15:03:00.245198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.426 [2024-07-25 15:03:00.245232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.426 [2024-07-25 15:03:00.245260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 
00:07:08.426 [2024-07-25 15:03:00.245289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.426 [2024-07-25 15:03:00.245327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.426 [2024-07-25 15:03:00.245362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.426 [2024-07-25 15:03:00.245389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.426 [2024-07-25 15:03:00.245415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.426 [2024-07-25 15:03:00.245444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.426 [2024-07-25 15:03:00.245477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.426 [2024-07-25 15:03:00.245514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.426 [2024-07-25 15:03:00.245541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.426 [2024-07-25 15:03:00.245568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.426 [2024-07-25 15:03:00.245592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.426 [2024-07-25 15:03:00.245617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.426 [2024-07-25 15:03:00.245644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.426 [2024-07-25 15:03:00.245667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.426 [2024-07-25 15:03:00.245694] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.426 [2024-07-25 15:03:00.245723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.426 [2024-07-25 15:03:00.245748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.426 [2024-07-25 15:03:00.245776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.426 [2024-07-25 15:03:00.245805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.426 [2024-07-25 15:03:00.245832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.426 [2024-07-25 15:03:00.245858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.426 [2024-07-25 15:03:00.245888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.426 [2024-07-25 15:03:00.245914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.426 [2024-07-25 15:03:00.245943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.426 [2024-07-25 15:03:00.245969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.426 [2024-07-25 15:03:00.245999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.426 [2024-07-25 15:03:00.246025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.426 [2024-07-25 15:03:00.246051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.426 [2024-07-25 15:03:00.246080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:08.426 [2024-07-25 15:03:00.246105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.426 [2024-07-25 15:03:00.246133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.426 [2024-07-25 15:03:00.246160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.426 [2024-07-25 15:03:00.246187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.426 [2024-07-25 15:03:00.246220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.426 [2024-07-25 15:03:00.246249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.426 [2024-07-25 15:03:00.246279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.426 [2024-07-25 15:03:00.246307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.426 [2024-07-25 15:03:00.246339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.426 [2024-07-25 15:03:00.246365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.426 [2024-07-25 15:03:00.246392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.426 [2024-07-25 15:03:00.246419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.426 [2024-07-25 15:03:00.246447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.426 [2024-07-25 15:03:00.246475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.426 [2024-07-25 15:03:00.246503] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[the identical *ERROR* line above repeated several hundred times between 2024-07-25 15:03:00.246533 and 15:03:00.256594 (build clock 00:07:08.426 through 00:07:08.430); repeats elided]
> SGL length 1 00:07:08.430 [2024-07-25 15:03:00.256622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.430 [2024-07-25 15:03:00.256649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.430 [2024-07-25 15:03:00.256676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.430 [2024-07-25 15:03:00.256702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.430 [2024-07-25 15:03:00.256728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.430 [2024-07-25 15:03:00.256755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.430 [2024-07-25 15:03:00.256784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.430 [2024-07-25 15:03:00.256810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.430 [2024-07-25 15:03:00.256861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.430 [2024-07-25 15:03:00.256892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.430 [2024-07-25 15:03:00.256935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.430 [2024-07-25 15:03:00.256966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.430 [2024-07-25 15:03:00.256995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.430 [2024-07-25 15:03:00.257024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.430 [2024-07-25 15:03:00.257052] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.430 [2024-07-25 15:03:00.257080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.430 [2024-07-25 15:03:00.257108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.430 [2024-07-25 15:03:00.257136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.430 [2024-07-25 15:03:00.257165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.430 [2024-07-25 15:03:00.257205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.430 [2024-07-25 15:03:00.257234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.430 [2024-07-25 15:03:00.257258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.430 [2024-07-25 15:03:00.257284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.430 [2024-07-25 15:03:00.257311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.430 [2024-07-25 15:03:00.257339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.430 [2024-07-25 15:03:00.257364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.430 [2024-07-25 15:03:00.257394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.430 [2024-07-25 15:03:00.257424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.430 [2024-07-25 15:03:00.257450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:08.430 [2024-07-25 15:03:00.257480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.430 [2024-07-25 15:03:00.257508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.430 [2024-07-25 15:03:00.257533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.430 [2024-07-25 15:03:00.257561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.430 [2024-07-25 15:03:00.257589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.430 [2024-07-25 15:03:00.257619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.430 [2024-07-25 15:03:00.257647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.430 [2024-07-25 15:03:00.257675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.430 [2024-07-25 15:03:00.257705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.430 [2024-07-25 15:03:00.257733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.430 [2024-07-25 15:03:00.257859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.430 [2024-07-25 15:03:00.257887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.430 [2024-07-25 15:03:00.257926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.430 [2024-07-25 15:03:00.257957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.430 [2024-07-25 
15:03:00.258189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.430 [2024-07-25 15:03:00.258223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.430 [2024-07-25 15:03:00.258248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.430 [2024-07-25 15:03:00.258276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.430 [2024-07-25 15:03:00.258303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.430 [2024-07-25 15:03:00.258329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.430 [2024-07-25 15:03:00.258356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.430 [2024-07-25 15:03:00.258382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.431 [2024-07-25 15:03:00.258407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.431 [2024-07-25 15:03:00.258430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.431 [2024-07-25 15:03:00.258459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.431 [2024-07-25 15:03:00.258488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.431 [2024-07-25 15:03:00.258518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.431 [2024-07-25 15:03:00.258548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.431 [2024-07-25 15:03:00.258578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.431 [2024-07-25 15:03:00.258605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.431 [2024-07-25 15:03:00.258635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.431 [2024-07-25 15:03:00.258663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.431 [2024-07-25 15:03:00.258693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.431 [2024-07-25 15:03:00.258722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.431 [2024-07-25 15:03:00.258752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.431 [2024-07-25 15:03:00.258780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.431 [2024-07-25 15:03:00.258808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.431 [2024-07-25 15:03:00.258836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.431 [2024-07-25 15:03:00.258864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.431 [2024-07-25 15:03:00.258896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.431 [2024-07-25 15:03:00.258925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.431 [2024-07-25 15:03:00.258951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.431 [2024-07-25 15:03:00.258978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.431 
[2024-07-25 15:03:00.259005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.431 [2024-07-25 15:03:00.259034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.431 [2024-07-25 15:03:00.259063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.431 [2024-07-25 15:03:00.259092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.431 [2024-07-25 15:03:00.259122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.431 [2024-07-25 15:03:00.259148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.431 [2024-07-25 15:03:00.259173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.431 [2024-07-25 15:03:00.259206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.431 [2024-07-25 15:03:00.259233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.431 [2024-07-25 15:03:00.259261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.431 [2024-07-25 15:03:00.259291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.431 [2024-07-25 15:03:00.259319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.431 [2024-07-25 15:03:00.259349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.431 [2024-07-25 15:03:00.259379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.431 [2024-07-25 15:03:00.259408] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.431 [2024-07-25 15:03:00.259435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.431 [2024-07-25 15:03:00.259466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.431 [2024-07-25 15:03:00.259494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.431 [2024-07-25 15:03:00.259525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.431 [2024-07-25 15:03:00.259551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.431 [2024-07-25 15:03:00.259579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.431 [2024-07-25 15:03:00.259608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.431 [2024-07-25 15:03:00.259637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.431 [2024-07-25 15:03:00.259665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.431 [2024-07-25 15:03:00.259694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.431 [2024-07-25 15:03:00.259724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.431 [2024-07-25 15:03:00.259753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.431 [2024-07-25 15:03:00.259780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.431 [2024-07-25 15:03:00.259807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:08.431 [2024-07-25 15:03:00.259837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.431 [2024-07-25 15:03:00.260226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.431 [2024-07-25 15:03:00.260261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.431 [2024-07-25 15:03:00.260289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.431 [2024-07-25 15:03:00.260316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.431 [2024-07-25 15:03:00.260344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.431 [2024-07-25 15:03:00.260371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.431 [2024-07-25 15:03:00.260398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.431 [2024-07-25 15:03:00.260424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.431 [2024-07-25 15:03:00.260451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.431 [2024-07-25 15:03:00.260482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.431 [2024-07-25 15:03:00.260523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.431 [2024-07-25 15:03:00.260549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.431 [2024-07-25 15:03:00.260586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.431 [2024-07-25 15:03:00.260614] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.431 [2024-07-25 15:03:00.260644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.431 [2024-07-25 15:03:00.260671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.431 [2024-07-25 15:03:00.260700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.431 [2024-07-25 15:03:00.260730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.431 [2024-07-25 15:03:00.260758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.431 [2024-07-25 15:03:00.260783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.431 [2024-07-25 15:03:00.260812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.431 [2024-07-25 15:03:00.260843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.431 [2024-07-25 15:03:00.260871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.431 [2024-07-25 15:03:00.260920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.431 [2024-07-25 15:03:00.260949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.431 [2024-07-25 15:03:00.260977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.431 [2024-07-25 15:03:00.261003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.431 [2024-07-25 15:03:00.261030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:08.431 [2024-07-25 15:03:00.261055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.431 [2024-07-25 15:03:00.261082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.431 [2024-07-25 15:03:00.261108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.431 [2024-07-25 15:03:00.261134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.431 [2024-07-25 15:03:00.261158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.431 [2024-07-25 15:03:00.261182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.431 [2024-07-25 15:03:00.261209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.432 [2024-07-25 15:03:00.261238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.432 [2024-07-25 15:03:00.261267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.432 [2024-07-25 15:03:00.261293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.432 [2024-07-25 15:03:00.261319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.432 [2024-07-25 15:03:00.261345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.432 [2024-07-25 15:03:00.261373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.432 [2024-07-25 15:03:00.261399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.432 [2024-07-25 
15:03:00.261429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.432 [2024-07-25 15:03:00.261459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.432 [2024-07-25 15:03:00.261489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.432 [2024-07-25 15:03:00.261517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.432 [2024-07-25 15:03:00.261550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.432 [2024-07-25 15:03:00.261578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.432 [2024-07-25 15:03:00.261607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.432 [2024-07-25 15:03:00.261638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.432 [2024-07-25 15:03:00.261666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.432 [2024-07-25 15:03:00.261987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.432 [2024-07-25 15:03:00.262017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.432 [2024-07-25 15:03:00.262043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.432 [2024-07-25 15:03:00.262073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.432 [2024-07-25 15:03:00.262100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.432 [2024-07-25 15:03:00.262129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.432 [2024-07-25 15:03:00.262157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.432 [2024-07-25 15:03:00.262184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.432 [2024-07-25 15:03:00.262212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.432 [2024-07-25 15:03:00.262238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.432 [2024-07-25 15:03:00.262264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.432 [2024-07-25 15:03:00.262292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.432 [2024-07-25 15:03:00.262315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.432 [2024-07-25 15:03:00.262342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.432 [2024-07-25 15:03:00.262369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.432 [2024-07-25 15:03:00.262402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.432 [2024-07-25 15:03:00.262431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.432 [2024-07-25 15:03:00.262461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.432 [2024-07-25 15:03:00.262494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.432 [2024-07-25 15:03:00.262525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.432 
[2024-07-25 15:03:00.262553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.432 [2024-07-25 15:03:00.262581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.432 [2024-07-25 15:03:00.262607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.432 [2024-07-25 15:03:00.262653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.432 [2024-07-25 15:03:00.262683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.432 [2024-07-25 15:03:00.262716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.432 [2024-07-25 15:03:00.262744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.432 [2024-07-25 15:03:00.262773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.432 [2024-07-25 15:03:00.262802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.432 [2024-07-25 15:03:00.262830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.432 [2024-07-25 15:03:00.262859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.432 [2024-07-25 15:03:00.262889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.432 [2024-07-25 15:03:00.262929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.432 [2024-07-25 15:03:00.262957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.432 [2024-07-25 15:03:00.262989] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.432 [2024-07-25 15:03:00.263018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.432 [2024-07-25 15:03:00.263048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.432 [2024-07-25 15:03:00.263077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.432 [2024-07-25 15:03:00.263106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.432 [2024-07-25 15:03:00.263134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.432 [2024-07-25 15:03:00.263164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.432 [2024-07-25 15:03:00.263191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.432 [2024-07-25 15:03:00.263220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.432 [2024-07-25 15:03:00.263248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.432 [2024-07-25 15:03:00.263276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.432 [2024-07-25 15:03:00.263322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.432 [2024-07-25 15:03:00.263353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.432 [2024-07-25 15:03:00.263384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.432 [2024-07-25 15:03:00.263414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:08.432 [2024-07-25 15:03:00.263443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.436 [2024-07-25 15:03:00.274170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512
> SGL length 1 00:07:08.436 [2024-07-25 15:03:00.274199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.436 [2024-07-25 15:03:00.274233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.436 [2024-07-25 15:03:00.274261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.436 [2024-07-25 15:03:00.274289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.436 [2024-07-25 15:03:00.274322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.436 [2024-07-25 15:03:00.274348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.436 [2024-07-25 15:03:00.274375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.436 [2024-07-25 15:03:00.274403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.436 [2024-07-25 15:03:00.274435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.436 [2024-07-25 15:03:00.274464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.436 [2024-07-25 15:03:00.274492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.436 [2024-07-25 15:03:00.274522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.436 [2024-07-25 15:03:00.274562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.436 [2024-07-25 15:03:00.274591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.436 [2024-07-25 15:03:00.274623] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.436 [2024-07-25 15:03:00.274651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.436 [2024-07-25 15:03:00.274705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.436 [2024-07-25 15:03:00.274732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.436 [2024-07-25 15:03:00.274763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.436 [2024-07-25 15:03:00.274791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.436 [2024-07-25 15:03:00.274825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.436 [2024-07-25 15:03:00.274854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.436 [2024-07-25 15:03:00.274882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.436 [2024-07-25 15:03:00.274909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.436 [2024-07-25 15:03:00.274937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.436 [2024-07-25 15:03:00.274965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.436 [2024-07-25 15:03:00.274993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.436 [2024-07-25 15:03:00.275020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.436 [2024-07-25 15:03:00.275045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:08.436 [2024-07-25 15:03:00.275404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.436 [2024-07-25 15:03:00.275439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.436 [2024-07-25 15:03:00.275474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.436 [2024-07-25 15:03:00.275508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.436 [2024-07-25 15:03:00.275536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.436 [2024-07-25 15:03:00.275564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.436 [2024-07-25 15:03:00.275594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.436 [2024-07-25 15:03:00.275623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.436 [2024-07-25 15:03:00.275651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.436 [2024-07-25 15:03:00.275678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.436 [2024-07-25 15:03:00.275706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.436 [2024-07-25 15:03:00.275736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.436 [2024-07-25 15:03:00.275770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.436 [2024-07-25 15:03:00.275805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.436 [2024-07-25 
15:03:00.275832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.436 [2024-07-25 15:03:00.275858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.436 [2024-07-25 15:03:00.275883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.436 [2024-07-25 15:03:00.275913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.436 [2024-07-25 15:03:00.275941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.436 [2024-07-25 15:03:00.275969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.436 [2024-07-25 15:03:00.276000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.436 [2024-07-25 15:03:00.276026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.436 [2024-07-25 15:03:00.276057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.436 [2024-07-25 15:03:00.276087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.436 [2024-07-25 15:03:00.276117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.436 [2024-07-25 15:03:00.276146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.437 [2024-07-25 15:03:00.276171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.437 [2024-07-25 15:03:00.276207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.437 [2024-07-25 15:03:00.276236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.437 [2024-07-25 15:03:00.276262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.437 [2024-07-25 15:03:00.276288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.437 [2024-07-25 15:03:00.276319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.437 [2024-07-25 15:03:00.276348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.437 [2024-07-25 15:03:00.276375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.437 [2024-07-25 15:03:00.276402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.437 [2024-07-25 15:03:00.276431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.437 [2024-07-25 15:03:00.276459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.437 [2024-07-25 15:03:00.276492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.437 [2024-07-25 15:03:00.276521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.437 [2024-07-25 15:03:00.276549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.437 [2024-07-25 15:03:00.276577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.437 [2024-07-25 15:03:00.276614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.437 [2024-07-25 15:03:00.276643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.437 
[2024-07-25 15:03:00.276673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.437 [2024-07-25 15:03:00.276701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.437 [2024-07-25 15:03:00.276729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.437 [2024-07-25 15:03:00.276755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.437 [2024-07-25 15:03:00.276783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.437 [2024-07-25 15:03:00.276813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.437 [2024-07-25 15:03:00.276843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.437 [2024-07-25 15:03:00.276868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.437 [2024-07-25 15:03:00.276896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.437 [2024-07-25 15:03:00.276928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.437 [2024-07-25 15:03:00.276955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.437 [2024-07-25 15:03:00.276980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.437 [2024-07-25 15:03:00.277003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.437 [2024-07-25 15:03:00.277029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.437 [2024-07-25 15:03:00.277058] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.437 [2024-07-25 15:03:00.277088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.437 [2024-07-25 15:03:00.277119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.437 [2024-07-25 15:03:00.277146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.437 [2024-07-25 15:03:00.277174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.437 [2024-07-25 15:03:00.277205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.437 [2024-07-25 15:03:00.277232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.437 [2024-07-25 15:03:00.277352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.437 [2024-07-25 15:03:00.277382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.437 [2024-07-25 15:03:00.277412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.437 [2024-07-25 15:03:00.277443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.437 [2024-07-25 15:03:00.277698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.437 [2024-07-25 15:03:00.277723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.437 [2024-07-25 15:03:00.277746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.437 [2024-07-25 15:03:00.277770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:08.437 [2024-07-25 15:03:00.277794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.437 [2024-07-25 15:03:00.277817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.437 [2024-07-25 15:03:00.277839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.437 [2024-07-25 15:03:00.277862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.437 [2024-07-25 15:03:00.277887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.437 [2024-07-25 15:03:00.277914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.437 [2024-07-25 15:03:00.277961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.437 [2024-07-25 15:03:00.277989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.437 [2024-07-25 15:03:00.278023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.437 [2024-07-25 15:03:00.278052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.437 [2024-07-25 15:03:00.278080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.437 [2024-07-25 15:03:00.278108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.437 [2024-07-25 15:03:00.278135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.437 [2024-07-25 15:03:00.278163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.437 [2024-07-25 15:03:00.278192] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.437 [2024-07-25 15:03:00.278226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.437 [2024-07-25 15:03:00.278254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.437 [2024-07-25 15:03:00.278298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.437 [2024-07-25 15:03:00.278328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.437 [2024-07-25 15:03:00.278359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.437 [2024-07-25 15:03:00.278390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.437 [2024-07-25 15:03:00.278417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.437 [2024-07-25 15:03:00.278445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.437 [2024-07-25 15:03:00.278475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.437 [2024-07-25 15:03:00.278503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.437 [2024-07-25 15:03:00.278531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.437 [2024-07-25 15:03:00.278560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.437 [2024-07-25 15:03:00.278586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.438 [2024-07-25 15:03:00.278614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:08.438 [2024-07-25 15:03:00.278640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.438 [2024-07-25 15:03:00.278671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.438 [2024-07-25 15:03:00.278699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.438 [2024-07-25 15:03:00.278736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.438 [2024-07-25 15:03:00.278766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.438 [2024-07-25 15:03:00.278793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.438 [2024-07-25 15:03:00.278821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.438 [2024-07-25 15:03:00.278850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.438 [2024-07-25 15:03:00.278880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.438 [2024-07-25 15:03:00.278908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.438 [2024-07-25 15:03:00.278937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.438 [2024-07-25 15:03:00.278970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.438 [2024-07-25 15:03:00.278999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.438 [2024-07-25 15:03:00.279029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.438 [2024-07-25 
15:03:00.279056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.438 [2024-07-25 15:03:00.279084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.438 [2024-07-25 15:03:00.279111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.438 [2024-07-25 15:03:00.279143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.438 [2024-07-25 15:03:00.279463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.438 [2024-07-25 15:03:00.279495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.438 [2024-07-25 15:03:00.279521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.438 [2024-07-25 15:03:00.279549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.438 [2024-07-25 15:03:00.279576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.438 [2024-07-25 15:03:00.279603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.438 [2024-07-25 15:03:00.279629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.438 [2024-07-25 15:03:00.279658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.438 [2024-07-25 15:03:00.279683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.438 [2024-07-25 15:03:00.279714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.438 [2024-07-25 15:03:00.279749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.438 [2024-07-25 15:03:00.279782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.438 [2024-07-25 15:03:00.279816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.438 [2024-07-25 15:03:00.279845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.438 [2024-07-25 15:03:00.279872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.438 [2024-07-25 15:03:00.279901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.438 [2024-07-25 15:03:00.279933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.438 [2024-07-25 15:03:00.279960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.438 [2024-07-25 15:03:00.279993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.438 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:07:08.438 [2024-07-25 15:03:00.280021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.438 [2024-07-25 15:03:00.280056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.438 [2024-07-25 15:03:00.280084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.438 [2024-07-25 15:03:00.280115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.438 [2024-07-25 15:03:00.280146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.438 [2024-07-25 15:03:00.280175] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.438 [2024-07-25 15:03:00.280214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.438 [2024-07-25 15:03:00.280243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.438 [2024-07-25 15:03:00.280304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.438 [2024-07-25 15:03:00.280336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.438 [2024-07-25 15:03:00.280363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.438 [2024-07-25 15:03:00.280391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.438 [2024-07-25 15:03:00.280418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.438 [2024-07-25 15:03:00.280447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.438 [2024-07-25 15:03:00.280475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.438 [2024-07-25 15:03:00.280505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.438 [2024-07-25 15:03:00.280532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.438 [2024-07-25 15:03:00.280572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.438 [2024-07-25 15:03:00.280601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.438 [2024-07-25 15:03:00.280638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:08.438 [2024-07-25 15:03:00.280667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.442 (previous message repeated for each subsequent read; last occurrence at [2024-07-25 15:03:00.291069])
> SGL length 1 00:07:08.442 [2024-07-25 15:03:00.291105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.442 [2024-07-25 15:03:00.291132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.442 [2024-07-25 15:03:00.291160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.442 [2024-07-25 15:03:00.291188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.442 [2024-07-25 15:03:00.291220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.442 [2024-07-25 15:03:00.291253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.442 [2024-07-25 15:03:00.291281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.442 [2024-07-25 15:03:00.291309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.442 [2024-07-25 15:03:00.291339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.442 [2024-07-25 15:03:00.291399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.442 [2024-07-25 15:03:00.291427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.442 [2024-07-25 15:03:00.291455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.442 [2024-07-25 15:03:00.291483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.442 [2024-07-25 15:03:00.291509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.442 [2024-07-25 15:03:00.291540] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.442 [2024-07-25 15:03:00.291569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.442 [2024-07-25 15:03:00.291597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.442 [2024-07-25 15:03:00.291624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.442 [2024-07-25 15:03:00.291759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.442 [2024-07-25 15:03:00.291788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.442 [2024-07-25 15:03:00.291817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.442 [2024-07-25 15:03:00.291862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.442 [2024-07-25 15:03:00.292112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.442 [2024-07-25 15:03:00.292140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.442 [2024-07-25 15:03:00.292172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.442 [2024-07-25 15:03:00.292204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.442 [2024-07-25 15:03:00.292240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.442 [2024-07-25 15:03:00.292268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.442 [2024-07-25 15:03:00.292295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:08.442 [2024-07-25 15:03:00.292323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.442 [2024-07-25 15:03:00.292377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.442 [2024-07-25 15:03:00.292406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.442 [2024-07-25 15:03:00.292439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.442 [2024-07-25 15:03:00.292469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.442 [2024-07-25 15:03:00.292504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.442 [2024-07-25 15:03:00.292541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.442 [2024-07-25 15:03:00.292573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.442 [2024-07-25 15:03:00.292614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.442 [2024-07-25 15:03:00.292653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.442 [2024-07-25 15:03:00.292681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.442 [2024-07-25 15:03:00.292715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.442 [2024-07-25 15:03:00.292748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.442 [2024-07-25 15:03:00.292775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.442 [2024-07-25 
15:03:00.292798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.442 [2024-07-25 15:03:00.292827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.442 [2024-07-25 15:03:00.292853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.442 [2024-07-25 15:03:00.292880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.442 [2024-07-25 15:03:00.292907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.442 [2024-07-25 15:03:00.292935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.442 [2024-07-25 15:03:00.292962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.442 [2024-07-25 15:03:00.292988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.442 [2024-07-25 15:03:00.293015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.442 [2024-07-25 15:03:00.293042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.442 [2024-07-25 15:03:00.293069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.442 [2024-07-25 15:03:00.293095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.442 [2024-07-25 15:03:00.293120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.442 [2024-07-25 15:03:00.293149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.442 [2024-07-25 15:03:00.293177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.442 [2024-07-25 15:03:00.293206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.442 [2024-07-25 15:03:00.293232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.442 [2024-07-25 15:03:00.293261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.442 [2024-07-25 15:03:00.293287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.442 [2024-07-25 15:03:00.293315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.442 [2024-07-25 15:03:00.293344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.442 [2024-07-25 15:03:00.293371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.442 [2024-07-25 15:03:00.293398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.442 [2024-07-25 15:03:00.293429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.442 [2024-07-25 15:03:00.293458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.442 [2024-07-25 15:03:00.293489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.443 [2024-07-25 15:03:00.293516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.443 [2024-07-25 15:03:00.293549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.443 [2024-07-25 15:03:00.293576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.443 
[2024-07-25 15:03:00.293604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.443 [2024-07-25 15:03:00.293633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.443 [2024-07-25 15:03:00.293661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.443 [2024-07-25 15:03:00.293689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.443 [2024-07-25 15:03:00.293715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.443 [2024-07-25 15:03:00.293743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.443 [2024-07-25 15:03:00.293770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.443 [2024-07-25 15:03:00.293799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.443 [2024-07-25 15:03:00.293828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.443 [2024-07-25 15:03:00.294191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.443 [2024-07-25 15:03:00.294223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.443 [2024-07-25 15:03:00.294252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.443 [2024-07-25 15:03:00.294280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.443 [2024-07-25 15:03:00.294306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.443 [2024-07-25 15:03:00.294333] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.443 [2024-07-25 15:03:00.294362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.443 [2024-07-25 15:03:00.294388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.443 [2024-07-25 15:03:00.294410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.443 [2024-07-25 15:03:00.294441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.443 [2024-07-25 15:03:00.294468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.443 [2024-07-25 15:03:00.294495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.443 [2024-07-25 15:03:00.294526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.443 [2024-07-25 15:03:00.294554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.443 [2024-07-25 15:03:00.294589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.443 [2024-07-25 15:03:00.294618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.443 [2024-07-25 15:03:00.294646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.443 [2024-07-25 15:03:00.294674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.443 [2024-07-25 15:03:00.294701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.443 [2024-07-25 15:03:00.294731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:08.443 [2024-07-25 15:03:00.294761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.443 [2024-07-25 15:03:00.294789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.443 [2024-07-25 15:03:00.294817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.443 [2024-07-25 15:03:00.294846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.443 [2024-07-25 15:03:00.294875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.443 [2024-07-25 15:03:00.294904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.443 [2024-07-25 15:03:00.294930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.443 [2024-07-25 15:03:00.294959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.443 [2024-07-25 15:03:00.294985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.443 [2024-07-25 15:03:00.295018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.443 [2024-07-25 15:03:00.295047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.443 [2024-07-25 15:03:00.295073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.443 [2024-07-25 15:03:00.295107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.443 [2024-07-25 15:03:00.295135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.443 [2024-07-25 15:03:00.295161] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.443 [2024-07-25 15:03:00.295190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.443 [2024-07-25 15:03:00.295220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.443 [2024-07-25 15:03:00.295252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.443 [2024-07-25 15:03:00.295288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.443 [2024-07-25 15:03:00.295321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.443 [2024-07-25 15:03:00.295355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.443 [2024-07-25 15:03:00.295384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.443 [2024-07-25 15:03:00.295412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.443 [2024-07-25 15:03:00.295439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.443 [2024-07-25 15:03:00.295465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.443 [2024-07-25 15:03:00.295496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.443 [2024-07-25 15:03:00.295522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.443 [2024-07-25 15:03:00.295548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.443 [2024-07-25 15:03:00.295573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:08.443 [2024-07-25 15:03:00.295602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.443 [2024-07-25 15:03:00.295630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.443 [2024-07-25 15:03:00.295659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.443 [2024-07-25 15:03:00.295689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.443 [2024-07-25 15:03:00.295716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.443 [2024-07-25 15:03:00.295743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.443 [2024-07-25 15:03:00.295771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.443 [2024-07-25 15:03:00.295799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.443 [2024-07-25 15:03:00.295829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.443 [2024-07-25 15:03:00.295858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.443 [2024-07-25 15:03:00.295887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.443 [2024-07-25 15:03:00.295914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.443 [2024-07-25 15:03:00.295945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.443 [2024-07-25 15:03:00.295982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.443 [2024-07-25 
15:03:00.296012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.443 [2024-07-25 15:03:00.296141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.443 [2024-07-25 15:03:00.296167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.443 [2024-07-25 15:03:00.296203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.443 [2024-07-25 15:03:00.296233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.443 [2024-07-25 15:03:00.296598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.443 [2024-07-25 15:03:00.296628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.443 [2024-07-25 15:03:00.296655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.443 [2024-07-25 15:03:00.296681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.443 [2024-07-25 15:03:00.296711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.443 [2024-07-25 15:03:00.296738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.443 [2024-07-25 15:03:00.296773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.443 [2024-07-25 15:03:00.296801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.443 [2024-07-25 15:03:00.296832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.443 [2024-07-25 15:03:00.296861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.443 [2024-07-25 15:03:00.296889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.444 [2024-07-25 15:03:00.296916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.444 [2024-07-25 15:03:00.296941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.444 [2024-07-25 15:03:00.296974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.444 [2024-07-25 15:03:00.297008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.444 [2024-07-25 15:03:00.297041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.444 [2024-07-25 15:03:00.297074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.444 [2024-07-25 15:03:00.297102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.444 [2024-07-25 15:03:00.297128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.444 [2024-07-25 15:03:00.297154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.444 [2024-07-25 15:03:00.297177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.444 [2024-07-25 15:03:00.297213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.444 [2024-07-25 15:03:00.297242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.444 [2024-07-25 15:03:00.297273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.444 
[2024-07-25 15:03:00.297302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.444 [2024-07-25 15:03:00.297329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.444 [2024-07-25 15:03:00.297359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.444 [2024-07-25 15:03:00.297389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.444 [2024-07-25 15:03:00.297422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.444 [2024-07-25 15:03:00.297454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.444 [2024-07-25 15:03:00.297487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.444 [2024-07-25 15:03:00.297515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.444 [2024-07-25 15:03:00.297561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.444 [2024-07-25 15:03:00.297588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.444 [2024-07-25 15:03:00.297619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.444 [2024-07-25 15:03:00.297649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.444 [2024-07-25 15:03:00.297681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.444 [2024-07-25 15:03:00.297709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.444 [2024-07-25 15:03:00.297739] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.444 [... same *ERROR* line from ctrlr_bdev.c:309 repeated for timestamps 2024-07-25 15:03:00.297815 through 15:03:00.298349 ...] 00:07:08.444 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:08.444 15:03:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:08.444 [... "Message suppressed 999 times: Read completed with error (sct=0, sc=11)" repeated 5 more times ...] 00:07:08.444 [... same *ERROR* line from ctrlr_bdev.c:309 repeated for timestamps 15:03:00.477546 through 15:03:00.480994 ...] 00:07:08.445 Read completed with error (sct=0, sc=15) 00:07:08.445 [... same *ERROR* line from ctrlr_bdev.c:309 repeated for timestamps 15:03:00.481030 through 15:03:00.486829 ...] 00:07:08.448 [2024-07-25 15:03:00.486856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:08.448 [2024-07-25 15:03:00.486885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.448 [2024-07-25 15:03:00.486914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.448 [2024-07-25 15:03:00.486940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.448 [2024-07-25 15:03:00.486970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.448 [2024-07-25 15:03:00.487003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.448 [2024-07-25 15:03:00.487030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.448 [2024-07-25 15:03:00.487064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.448 [2024-07-25 15:03:00.487093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.448 [2024-07-25 15:03:00.487122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.448 [2024-07-25 15:03:00.487152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.448 [2024-07-25 15:03:00.487507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.448 [2024-07-25 15:03:00.487538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.448 [2024-07-25 15:03:00.487567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.448 [2024-07-25 15:03:00.487600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.448 [2024-07-25 
15:03:00.487629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.448 [2024-07-25 15:03:00.487657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.448 [2024-07-25 15:03:00.487687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.448 [2024-07-25 15:03:00.487718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.448 [2024-07-25 15:03:00.487744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.448 [2024-07-25 15:03:00.487773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.448 [2024-07-25 15:03:00.487802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.448 [2024-07-25 15:03:00.487831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.448 [2024-07-25 15:03:00.487860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.448 [2024-07-25 15:03:00.487889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.448 [2024-07-25 15:03:00.487918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.448 [2024-07-25 15:03:00.487946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.448 [2024-07-25 15:03:00.487974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.448 [2024-07-25 15:03:00.488001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.448 [2024-07-25 15:03:00.488035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.448 [2024-07-25 15:03:00.488064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.448 [2024-07-25 15:03:00.488092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.448 [2024-07-25 15:03:00.488120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.448 [2024-07-25 15:03:00.488149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.448 [2024-07-25 15:03:00.488177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.448 [2024-07-25 15:03:00.488208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.448 [2024-07-25 15:03:00.488237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.448 [2024-07-25 15:03:00.488267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.448 [2024-07-25 15:03:00.488320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.448 [2024-07-25 15:03:00.488350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.448 [2024-07-25 15:03:00.488379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.448 [2024-07-25 15:03:00.488407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.448 [2024-07-25 15:03:00.488444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.448 [2024-07-25 15:03:00.488476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.448 
[2024-07-25 15:03:00.488503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.448 [2024-07-25 15:03:00.488534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.448 [2024-07-25 15:03:00.488562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.448 [2024-07-25 15:03:00.488590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.448 [2024-07-25 15:03:00.488614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.448 [2024-07-25 15:03:00.488643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.448 [2024-07-25 15:03:00.488670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.448 [2024-07-25 15:03:00.488698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.448 [2024-07-25 15:03:00.488733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.448 [2024-07-25 15:03:00.488771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.448 [2024-07-25 15:03:00.488803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.448 [2024-07-25 15:03:00.488834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.448 [2024-07-25 15:03:00.488866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.448 [2024-07-25 15:03:00.488898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.448 [2024-07-25 15:03:00.488927] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.448 [2024-07-25 15:03:00.488957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.448 [2024-07-25 15:03:00.488983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.448 [2024-07-25 15:03:00.489011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.448 [2024-07-25 15:03:00.489040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.448 [2024-07-25 15:03:00.489069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.449 [2024-07-25 15:03:00.489096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.449 [2024-07-25 15:03:00.489123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.449 [2024-07-25 15:03:00.489157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.449 [2024-07-25 15:03:00.489185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.449 [2024-07-25 15:03:00.489219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.449 [2024-07-25 15:03:00.489246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.449 [2024-07-25 15:03:00.489278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.449 [2024-07-25 15:03:00.489307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.449 [2024-07-25 15:03:00.489338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:08.449 [2024-07-25 15:03:00.489369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.449 [2024-07-25 15:03:00.489396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.449 [2024-07-25 15:03:00.489760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.449 [2024-07-25 15:03:00.489792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.449 [2024-07-25 15:03:00.489821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.449 [2024-07-25 15:03:00.489848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.449 [2024-07-25 15:03:00.489874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.449 [2024-07-25 15:03:00.489901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.449 [2024-07-25 15:03:00.489933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.449 [2024-07-25 15:03:00.489973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.449 [2024-07-25 15:03:00.490008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.449 [2024-07-25 15:03:00.490042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.449 [2024-07-25 15:03:00.490072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.449 [2024-07-25 15:03:00.490102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.449 [2024-07-25 15:03:00.490130] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.449 [2024-07-25 15:03:00.490156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.449 [2024-07-25 15:03:00.490181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.449 [2024-07-25 15:03:00.490209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.449 [2024-07-25 15:03:00.490240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.449 [2024-07-25 15:03:00.490280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.449 [2024-07-25 15:03:00.490309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.449 [2024-07-25 15:03:00.490336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.449 [2024-07-25 15:03:00.490365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.449 [2024-07-25 15:03:00.490394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.449 [2024-07-25 15:03:00.490421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.449 [2024-07-25 15:03:00.490448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.449 [2024-07-25 15:03:00.490475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.449 [2024-07-25 15:03:00.490507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.449 [2024-07-25 15:03:00.490537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:08.449 [2024-07-25 15:03:00.490569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.449 [2024-07-25 15:03:00.490595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.449 [2024-07-25 15:03:00.490624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.449 [2024-07-25 15:03:00.490656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.449 [2024-07-25 15:03:00.490681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.449 [2024-07-25 15:03:00.490711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.449 [2024-07-25 15:03:00.490746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.449 [2024-07-25 15:03:00.490775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.449 [2024-07-25 15:03:00.490804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.449 [2024-07-25 15:03:00.490832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.449 [2024-07-25 15:03:00.490859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.449 [2024-07-25 15:03:00.490896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.449 [2024-07-25 15:03:00.490925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.449 [2024-07-25 15:03:00.490954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.449 [2024-07-25 
15:03:00.490983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.449 [2024-07-25 15:03:00.491016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.449 [2024-07-25 15:03:00.491045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.449 [2024-07-25 15:03:00.491074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.449 [2024-07-25 15:03:00.491103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.449 [2024-07-25 15:03:00.491129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.449 [2024-07-25 15:03:00.491158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.449 [2024-07-25 15:03:00.491185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.449 [2024-07-25 15:03:00.491218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.449 [2024-07-25 15:03:00.491247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.449 [2024-07-25 15:03:00.491273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.449 [2024-07-25 15:03:00.491301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.449 [2024-07-25 15:03:00.491327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.449 [2024-07-25 15:03:00.491355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.449 [2024-07-25 15:03:00.491383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.449 [2024-07-25 15:03:00.491409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.449 [2024-07-25 15:03:00.491439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.449 [2024-07-25 15:03:00.491468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.449 [2024-07-25 15:03:00.491498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.449 [2024-07-25 15:03:00.491525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.449 [2024-07-25 15:03:00.491551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.449 [2024-07-25 15:03:00.491578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.449 [2024-07-25 15:03:00.491966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.449 [2024-07-25 15:03:00.491999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.449 [2024-07-25 15:03:00.492026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.449 [2024-07-25 15:03:00.492056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.449 [2024-07-25 15:03:00.492084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.449 [2024-07-25 15:03:00.492132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.449 [2024-07-25 15:03:00.492162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.449 
[2024-07-25 15:03:00.492192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.449 [2024-07-25 15:03:00.492224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.449 [2024-07-25 15:03:00.492252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.449 [2024-07-25 15:03:00.492280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.449 [2024-07-25 15:03:00.492311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.449 [2024-07-25 15:03:00.492334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.449 [2024-07-25 15:03:00.492361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.449 [2024-07-25 15:03:00.492392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.449 [2024-07-25 15:03:00.492431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.449 [2024-07-25 15:03:00.492471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.449 [2024-07-25 15:03:00.492505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.449 [2024-07-25 15:03:00.492530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.449 [2024-07-25 15:03:00.492559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.449 [2024-07-25 15:03:00.492591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.450 [2024-07-25 15:03:00.492622] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.450 [2024-07-25 15:03:00.492648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.450 [2024-07-25 15:03:00.492679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.450 [2024-07-25 15:03:00.492704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.450 [2024-07-25 15:03:00.492732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.450 [2024-07-25 15:03:00.492759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.450 [2024-07-25 15:03:00.492788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.450 [2024-07-25 15:03:00.492815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.450 [2024-07-25 15:03:00.492843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.450 [2024-07-25 15:03:00.492869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.450 [2024-07-25 15:03:00.492897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.450 [2024-07-25 15:03:00.492924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.450 [2024-07-25 15:03:00.492951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.450 [2024-07-25 15:03:00.492978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.450 [2024-07-25 15:03:00.493007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:08.450 [2024-07-25 15:03:00.493034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.450 [2024-07-25 15:03:00.493065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.450 [2024-07-25 15:03:00.493094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.450 [2024-07-25 15:03:00.493123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.450 [2024-07-25 15:03:00.493151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.450 [2024-07-25 15:03:00.493176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.450 [2024-07-25 15:03:00.493209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.450 [2024-07-25 15:03:00.493241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.450 [2024-07-25 15:03:00.493268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.450 [2024-07-25 15:03:00.493298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.450 [2024-07-25 15:03:00.493326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.450 [2024-07-25 15:03:00.493358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.450 [2024-07-25 15:03:00.493387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.450 [2024-07-25 15:03:00.493414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.450 [2024-07-25 15:03:00.493442] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
> SGL length 1 00:07:08.453 [2024-07-25 15:03:00.503888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.453 [2024-07-25 15:03:00.503945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.453 [2024-07-25 15:03:00.503973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.453 [2024-07-25 15:03:00.504005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.453 [2024-07-25 15:03:00.504034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.453 [2024-07-25 15:03:00.504062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.453 [2024-07-25 15:03:00.504094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.453 [2024-07-25 15:03:00.504121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.453 [2024-07-25 15:03:00.504159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.453 [2024-07-25 15:03:00.504187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.453 [2024-07-25 15:03:00.504221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.453 [2024-07-25 15:03:00.504252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.453 [2024-07-25 15:03:00.504280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.453 [2024-07-25 15:03:00.504309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.453 [2024-07-25 15:03:00.504338] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.453 [2024-07-25 15:03:00.504371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.453 [2024-07-25 15:03:00.504400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.453 [2024-07-25 15:03:00.504435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.453 [2024-07-25 15:03:00.504463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.453 [2024-07-25 15:03:00.504495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.453 [2024-07-25 15:03:00.504521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.453 [2024-07-25 15:03:00.504548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.453 [2024-07-25 15:03:00.504577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.453 [2024-07-25 15:03:00.504604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.453 [2024-07-25 15:03:00.504633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.453 [2024-07-25 15:03:00.504673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.453 [2024-07-25 15:03:00.504703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.453 [2024-07-25 15:03:00.504730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.453 [2024-07-25 15:03:00.504758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:08.453 [2024-07-25 15:03:00.504789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.453 [2024-07-25 15:03:00.504813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.453 [2024-07-25 15:03:00.504840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.453 [2024-07-25 15:03:00.504869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.453 [2024-07-25 15:03:00.504898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.453 [2024-07-25 15:03:00.504925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.453 [2024-07-25 15:03:00.504956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.453 [2024-07-25 15:03:00.504990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.453 [2024-07-25 15:03:00.505026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.453 [2024-07-25 15:03:00.505059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.453 [2024-07-25 15:03:00.505087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.453 [2024-07-25 15:03:00.505110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.453 [2024-07-25 15:03:00.505139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.453 [2024-07-25 15:03:00.505478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.453 [2024-07-25 
15:03:00.505508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.453 [2024-07-25 15:03:00.505536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.453 [2024-07-25 15:03:00.505563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.453 [2024-07-25 15:03:00.505592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.453 [2024-07-25 15:03:00.505621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.453 [2024-07-25 15:03:00.505649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.453 [2024-07-25 15:03:00.505677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.453 [2024-07-25 15:03:00.505704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.453 15:03:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:07:08.453 [2024-07-25 15:03:00.505734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.453 [2024-07-25 15:03:00.505763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.453 [2024-07-25 15:03:00.505789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.453 [2024-07-25 15:03:00.505820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.453 [2024-07-25 15:03:00.505848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.453 [2024-07-25 15:03:00.505877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 
* block size 512 > SGL length 1 00:07:08.453 [2024-07-25 15:03:00.505906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.453 [2024-07-25 15:03:00.505938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.453 [2024-07-25 15:03:00.505967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.453 [2024-07-25 15:03:00.505995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.453 [2024-07-25 15:03:00.506024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.453 [2024-07-25 15:03:00.506053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.454 [2024-07-25 15:03:00.506083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.454 15:03:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:07:08.454 [2024-07-25 15:03:00.506109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.454 [2024-07-25 15:03:00.506139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.454 [2024-07-25 15:03:00.506164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.454 [2024-07-25 15:03:00.506192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.454 [2024-07-25 15:03:00.506227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.454 [2024-07-25 15:03:00.506262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:08.454 [2024-07-25 15:03:00.506298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.454 [2024-07-25 15:03:00.506327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.454 [2024-07-25 15:03:00.506355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.454 [2024-07-25 15:03:00.506384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.454 [2024-07-25 15:03:00.506407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.454 [2024-07-25 15:03:00.506436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.454 [2024-07-25 15:03:00.506465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.454 [2024-07-25 15:03:00.506495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.454 [2024-07-25 15:03:00.506522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.454 [2024-07-25 15:03:00.506567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.454 [2024-07-25 15:03:00.506596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.454 [2024-07-25 15:03:00.506623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.454 [2024-07-25 15:03:00.506653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.454 [2024-07-25 15:03:00.506680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.454 [2024-07-25 
15:03:00.506708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.454 [2024-07-25 15:03:00.506735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.454 [2024-07-25 15:03:00.506766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.454 [2024-07-25 15:03:00.506795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.454 [2024-07-25 15:03:00.506831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.454 [2024-07-25 15:03:00.506860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.454 [2024-07-25 15:03:00.506914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.454 [2024-07-25 15:03:00.506942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.454 [2024-07-25 15:03:00.506970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.454 [2024-07-25 15:03:00.506997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.454 [2024-07-25 15:03:00.507027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.454 [2024-07-25 15:03:00.507057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.454 [2024-07-25 15:03:00.507088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.454 [2024-07-25 15:03:00.507125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.454 [2024-07-25 15:03:00.507161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.454 [2024-07-25 15:03:00.507196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.454 [2024-07-25 15:03:00.507227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.454 [2024-07-25 15:03:00.507255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.454 [2024-07-25 15:03:00.507283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.454 [2024-07-25 15:03:00.507309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.454 [2024-07-25 15:03:00.507339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.454 [2024-07-25 15:03:00.507365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.454 [2024-07-25 15:03:00.507923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.454 [2024-07-25 15:03:00.507951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.454 [2024-07-25 15:03:00.507981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.454 [2024-07-25 15:03:00.508013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.454 [2024-07-25 15:03:00.508041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.454 [2024-07-25 15:03:00.508072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.454 [2024-07-25 15:03:00.508101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.454 
[2024-07-25 15:03:00.508131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.454 [2024-07-25 15:03:00.508159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.454 [2024-07-25 15:03:00.508185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.454 [2024-07-25 15:03:00.508215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.454 [2024-07-25 15:03:00.508245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.454 [2024-07-25 15:03:00.508273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.454 [2024-07-25 15:03:00.508302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.454 [2024-07-25 15:03:00.508334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.454 [2024-07-25 15:03:00.508369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.454 [2024-07-25 15:03:00.508397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.454 [2024-07-25 15:03:00.508426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.454 [2024-07-25 15:03:00.508454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.454 [2024-07-25 15:03:00.508501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.454 [2024-07-25 15:03:00.508535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.454 [2024-07-25 15:03:00.508566] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.454 [2024-07-25 15:03:00.508596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.454 [2024-07-25 15:03:00.508624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.454 [2024-07-25 15:03:00.508661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.454 [2024-07-25 15:03:00.508691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.454 [2024-07-25 15:03:00.508718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.454 [2024-07-25 15:03:00.508746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.454 [2024-07-25 15:03:00.508774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.454 [2024-07-25 15:03:00.508805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.454 [2024-07-25 15:03:00.508833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.454 [2024-07-25 15:03:00.508862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.454 [2024-07-25 15:03:00.508892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.454 [2024-07-25 15:03:00.508921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.454 [2024-07-25 15:03:00.508949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.454 [2024-07-25 15:03:00.508994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:08.454 [2024-07-25 15:03:00.509021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.454 [2024-07-25 15:03:00.509067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.454 [2024-07-25 15:03:00.509096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.454 [2024-07-25 15:03:00.509124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.454 [2024-07-25 15:03:00.509151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.454 [2024-07-25 15:03:00.509181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.454 [2024-07-25 15:03:00.509214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.454 [2024-07-25 15:03:00.509240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.454 [2024-07-25 15:03:00.509270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.454 [2024-07-25 15:03:00.509300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.454 [2024-07-25 15:03:00.509329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.454 [2024-07-25 15:03:00.509358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.454 [2024-07-25 15:03:00.509386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.454 [2024-07-25 15:03:00.509413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.454 [2024-07-25 15:03:00.509441] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.454 [2024-07-25 15:03:00.509474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.454 [2024-07-25 15:03:00.509508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.454 [2024-07-25 15:03:00.509542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.454 [2024-07-25 15:03:00.509573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.454 [2024-07-25 15:03:00.509602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.454 [2024-07-25 15:03:00.509634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.454 [2024-07-25 15:03:00.509661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.454 [2024-07-25 15:03:00.509693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.454 [2024-07-25 15:03:00.509716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.454 [2024-07-25 15:03:00.509749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.454 [2024-07-25 15:03:00.509778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.454 [2024-07-25 15:03:00.509806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.454 [2024-07-25 15:03:00.510223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.455 [2024-07-25 15:03:00.510258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:08.455 [2024-07-25 15:03:00.510289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.455 [2024-07-25 15:03:00.510319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.455 [2024-07-25 15:03:00.510348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.455 [2024-07-25 15:03:00.510375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.455 [2024-07-25 15:03:00.510401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.455 [2024-07-25 15:03:00.510431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.455 [2024-07-25 15:03:00.510460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.455 [2024-07-25 15:03:00.510489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.455 [2024-07-25 15:03:00.510521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.455 [2024-07-25 15:03:00.510549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.455 [2024-07-25 15:03:00.510600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.455 [2024-07-25 15:03:00.510632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.455 [2024-07-25 15:03:00.510662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.455 [2024-07-25 15:03:00.510693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.455 [2024-07-25 
15:03:00.510722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.455 [2024-07-25 15:03:00.510752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.455 [2024-07-25 15:03:00.510779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.455 [2024-07-25 15:03:00.510807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.455 [2024-07-25 15:03:00.510836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.455 [2024-07-25 15:03:00.510870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.455 [2024-07-25 15:03:00.510899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.455 [2024-07-25 15:03:00.510929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.455 [2024-07-25 15:03:00.510959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.455 [2024-07-25 15:03:00.510988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.455 [2024-07-25 15:03:00.511021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.455 [2024-07-25 15:03:00.511051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.455 [2024-07-25 15:03:00.511081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.455 [2024-07-25 15:03:00.511109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.455 [2024-07-25 15:03:00.511148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.455 
Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:07:08.457 
[2024-07-25 15:03:00.521794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.458 [2024-07-25 15:03:00.521827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.458 [2024-07-25 15:03:00.521858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.459 [2024-07-25 15:03:00.521884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.459 [2024-07-25 15:03:00.521916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.459 [2024-07-25 15:03:00.521945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.459 [2024-07-25 15:03:00.521974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.459 [2024-07-25 15:03:00.522000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.459 [2024-07-25 15:03:00.522037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.459 [2024-07-25 15:03:00.522064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.459 [2024-07-25 15:03:00.522092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.459 [2024-07-25 15:03:00.522131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.459 [2024-07-25 15:03:00.522158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.459 [2024-07-25 15:03:00.522185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.459 [2024-07-25 15:03:00.522216] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.459 [2024-07-25 15:03:00.522244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.459 [2024-07-25 15:03:00.522270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.459 [2024-07-25 15:03:00.522310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.459 [2024-07-25 15:03:00.522350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.459 [2024-07-25 15:03:00.522379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.459 [2024-07-25 15:03:00.522405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.459 [2024-07-25 15:03:00.522433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.459 [2024-07-25 15:03:00.522462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.459 [2024-07-25 15:03:00.522488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.459 [2024-07-25 15:03:00.522515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.459 [2024-07-25 15:03:00.522548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.459 [2024-07-25 15:03:00.522575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.459 [2024-07-25 15:03:00.522606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.459 [2024-07-25 15:03:00.522636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:08.459 [2024-07-25 15:03:00.522664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.459 [2024-07-25 15:03:00.522694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.459 [2024-07-25 15:03:00.522724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.459 [2024-07-25 15:03:00.522755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.459 [2024-07-25 15:03:00.522782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.459 [2024-07-25 15:03:00.522842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.459 [2024-07-25 15:03:00.522871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.459 [2024-07-25 15:03:00.522898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.459 [2024-07-25 15:03:00.522927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.459 [2024-07-25 15:03:00.522956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.459 [2024-07-25 15:03:00.522986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.459 [2024-07-25 15:03:00.523013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.459 [2024-07-25 15:03:00.523058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.459 [2024-07-25 15:03:00.523085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.459 [2024-07-25 15:03:00.523116] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.459 [2024-07-25 15:03:00.523146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.459 [2024-07-25 15:03:00.523175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.459 [2024-07-25 15:03:00.523206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.459 [2024-07-25 15:03:00.523235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.459 [2024-07-25 15:03:00.523263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.459 [2024-07-25 15:03:00.523292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.459 [2024-07-25 15:03:00.523428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.459 [2024-07-25 15:03:00.523457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.459 [2024-07-25 15:03:00.523487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.459 [2024-07-25 15:03:00.523523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.459 [2024-07-25 15:03:00.523554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.459 [2024-07-25 15:03:00.523585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.459 [2024-07-25 15:03:00.523613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.459 [2024-07-25 15:03:00.523639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:08.459 [2024-07-25 15:03:00.523665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.459 [2024-07-25 15:03:00.523694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.459 [2024-07-25 15:03:00.523721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.459 [2024-07-25 15:03:00.523750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.459 [2024-07-25 15:03:00.523779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.459 [2024-07-25 15:03:00.523807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.459 [2024-07-25 15:03:00.523833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.459 [2024-07-25 15:03:00.524106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.459 [2024-07-25 15:03:00.524135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.459 [2024-07-25 15:03:00.524168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.459 [2024-07-25 15:03:00.524195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.459 [2024-07-25 15:03:00.524228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.459 [2024-07-25 15:03:00.524255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.459 [2024-07-25 15:03:00.524295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.459 [2024-07-25 
15:03:00.524324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.459 [2024-07-25 15:03:00.524354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.459 [2024-07-25 15:03:00.524382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.459 [2024-07-25 15:03:00.524415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.459 [2024-07-25 15:03:00.524443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.459 [2024-07-25 15:03:00.524471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.459 [2024-07-25 15:03:00.524502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.459 [2024-07-25 15:03:00.524533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.459 [2024-07-25 15:03:00.524576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.459 [2024-07-25 15:03:00.524608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.459 [2024-07-25 15:03:00.524647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.459 [2024-07-25 15:03:00.524674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.459 [2024-07-25 15:03:00.524707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.459 [2024-07-25 15:03:00.524738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.459 [2024-07-25 15:03:00.524767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.459 [2024-07-25 15:03:00.524802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.459 [2024-07-25 15:03:00.524831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.459 [2024-07-25 15:03:00.524862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.459 [2024-07-25 15:03:00.524891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.460 [2024-07-25 15:03:00.524920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.460 [2024-07-25 15:03:00.524946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.460 [2024-07-25 15:03:00.524974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.460 [2024-07-25 15:03:00.525002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.460 [2024-07-25 15:03:00.525028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.460 [2024-07-25 15:03:00.525056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.460 [2024-07-25 15:03:00.525082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.460 [2024-07-25 15:03:00.525111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.460 [2024-07-25 15:03:00.525141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.460 [2024-07-25 15:03:00.525169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.460 
[2024-07-25 15:03:00.525197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.460 [2024-07-25 15:03:00.525231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.460 [2024-07-25 15:03:00.525258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.460 [2024-07-25 15:03:00.525285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.460 [2024-07-25 15:03:00.525311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.460 [2024-07-25 15:03:00.525338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.460 [2024-07-25 15:03:00.525365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.460 [2024-07-25 15:03:00.525394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.460 [2024-07-25 15:03:00.525428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.460 [2024-07-25 15:03:00.525456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.460 [2024-07-25 15:03:00.525486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.460 [2024-07-25 15:03:00.525512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.460 [2024-07-25 15:03:00.525546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.460 [2024-07-25 15:03:00.525921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.460 [2024-07-25 15:03:00.525955] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.460 [2024-07-25 15:03:00.525982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.460 [2024-07-25 15:03:00.526009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.460 [2024-07-25 15:03:00.526038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.460 [2024-07-25 15:03:00.526061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.460 [2024-07-25 15:03:00.526090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.460 [2024-07-25 15:03:00.526119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.460 [2024-07-25 15:03:00.526145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.460 [2024-07-25 15:03:00.526177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.460 [2024-07-25 15:03:00.526212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.460 [2024-07-25 15:03:00.526241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.460 [2024-07-25 15:03:00.526270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.460 [2024-07-25 15:03:00.526299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.460 [2024-07-25 15:03:00.526331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.460 [2024-07-25 15:03:00.526360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:08.460 [2024-07-25 15:03:00.526394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.460 [2024-07-25 15:03:00.526421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.460 [2024-07-25 15:03:00.526448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.460 [2024-07-25 15:03:00.526476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.460 [2024-07-25 15:03:00.526504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.460 [2024-07-25 15:03:00.526532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.460 [2024-07-25 15:03:00.526563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.460 [2024-07-25 15:03:00.526592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.460 [2024-07-25 15:03:00.526619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.460 [2024-07-25 15:03:00.526647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.460 [2024-07-25 15:03:00.526676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.460 [2024-07-25 15:03:00.526702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.460 [2024-07-25 15:03:00.526731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.460 [2024-07-25 15:03:00.526790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.460 [2024-07-25 15:03:00.526818] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.460 [2024-07-25 15:03:00.526849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.460 [2024-07-25 15:03:00.526878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.460 [2024-07-25 15:03:00.526907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.460 [2024-07-25 15:03:00.526935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.460 [2024-07-25 15:03:00.526966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.460 [2024-07-25 15:03:00.527001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.460 [2024-07-25 15:03:00.527028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.460 [2024-07-25 15:03:00.527060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.460 [2024-07-25 15:03:00.527096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.460 [2024-07-25 15:03:00.527126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.460 [2024-07-25 15:03:00.527150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.460 [2024-07-25 15:03:00.527178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.460 [2024-07-25 15:03:00.527209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.460 [2024-07-25 15:03:00.527237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:08.460 [2024-07-25 15:03:00.527263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.460 [2024-07-25 15:03:00.527288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.460 [2024-07-25 15:03:00.527318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.460 [2024-07-25 15:03:00.527347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.460 [2024-07-25 15:03:00.527377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.460 [2024-07-25 15:03:00.527400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.460 [2024-07-25 15:03:00.527429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.460 [2024-07-25 15:03:00.527458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.460 [2024-07-25 15:03:00.527488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.460 [2024-07-25 15:03:00.527515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.460 [2024-07-25 15:03:00.527542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.460 [2024-07-25 15:03:00.527570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.460 [2024-07-25 15:03:00.527596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.460 [2024-07-25 15:03:00.527626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.460 [2024-07-25 
15:03:00.527662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.460 [2024-07-25 15:03:00.527690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.460 [2024-07-25 15:03:00.527721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.460 [2024-07-25 15:03:00.527752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.460 [2024-07-25 15:03:00.527779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.460 [2024-07-25 15:03:00.527913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.460 [2024-07-25 15:03:00.527945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.460 [2024-07-25 15:03:00.527977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.460 [2024-07-25 15:03:00.528004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.460 [2024-07-25 15:03:00.528049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.461 [2024-07-25 15:03:00.528078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.461 [2024-07-25 15:03:00.528109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.461 [2024-07-25 15:03:00.528139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.461 [2024-07-25 15:03:00.528167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.461 [2024-07-25 15:03:00.528196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.461 [2024-07-25 15:03:00.528227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.461 
[same ctrlr_bdev.c:309 nvmf_bdev_ctrlr_read_cmd error repeated many times, 15:03:00.528258 through 15:03:00.538617]
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.464 [2024-07-25 15:03:00.538646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.464 [2024-07-25 15:03:00.538672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.464 [2024-07-25 15:03:00.538701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.464 [2024-07-25 15:03:00.538746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.464 [2024-07-25 15:03:00.538774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.464 [2024-07-25 15:03:00.539107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.464 [2024-07-25 15:03:00.539137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.464 [2024-07-25 15:03:00.539165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.464 [2024-07-25 15:03:00.539193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.464 [2024-07-25 15:03:00.539222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.464 [2024-07-25 15:03:00.539250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.464 [2024-07-25 15:03:00.539284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.464 [2024-07-25 15:03:00.539316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.464 [2024-07-25 15:03:00.539344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.464 
[2024-07-25 15:03:00.539373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.464 [2024-07-25 15:03:00.539401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.464 [2024-07-25 15:03:00.539428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.464 [2024-07-25 15:03:00.539454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.464 [2024-07-25 15:03:00.539483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.464 [2024-07-25 15:03:00.539512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.464 [2024-07-25 15:03:00.539536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.464 [2024-07-25 15:03:00.539566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.464 [2024-07-25 15:03:00.539852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.464 [2024-07-25 15:03:00.539882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.464 [2024-07-25 15:03:00.539911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.464 [2024-07-25 15:03:00.539939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.464 [2024-07-25 15:03:00.539966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.464 [2024-07-25 15:03:00.540003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.464 [2024-07-25 15:03:00.540033] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.464 [2024-07-25 15:03:00.540079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.464 [2024-07-25 15:03:00.540108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.464 [2024-07-25 15:03:00.540138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.464 [2024-07-25 15:03:00.540167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.464 [2024-07-25 15:03:00.540196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.464 [2024-07-25 15:03:00.540226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.464 [2024-07-25 15:03:00.540255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.464 [2024-07-25 15:03:00.540283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.464 [2024-07-25 15:03:00.540311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.464 [2024-07-25 15:03:00.540343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.464 [2024-07-25 15:03:00.540370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.464 [2024-07-25 15:03:00.540402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.464 [2024-07-25 15:03:00.540432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.464 [2024-07-25 15:03:00.540458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:08.464 [2024-07-25 15:03:00.540486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.464 [2024-07-25 15:03:00.540515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.464 [2024-07-25 15:03:00.540551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.464 [2024-07-25 15:03:00.540582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.464 [2024-07-25 15:03:00.540610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.464 [2024-07-25 15:03:00.540641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.464 [2024-07-25 15:03:00.540671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.465 [2024-07-25 15:03:00.540709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.465 [2024-07-25 15:03:00.540740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.465 [2024-07-25 15:03:00.540767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.465 [2024-07-25 15:03:00.540795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.465 [2024-07-25 15:03:00.540821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.465 [2024-07-25 15:03:00.540847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.465 [2024-07-25 15:03:00.540875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.465 [2024-07-25 15:03:00.540900] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.465 [2024-07-25 15:03:00.540935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.465 [2024-07-25 15:03:00.540962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.465 [2024-07-25 15:03:00.540989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.465 [2024-07-25 15:03:00.541018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.465 [2024-07-25 15:03:00.541045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.465 [2024-07-25 15:03:00.541072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.465 [2024-07-25 15:03:00.541100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.465 [2024-07-25 15:03:00.541126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.465 [2024-07-25 15:03:00.541152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.465 [2024-07-25 15:03:00.541183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.465 [2024-07-25 15:03:00.541211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.465 [2024-07-25 15:03:00.541381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.465 [2024-07-25 15:03:00.541413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.465 [2024-07-25 15:03:00.541446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:08.465 [2024-07-25 15:03:00.541486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.465 [2024-07-25 15:03:00.541524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.465 [2024-07-25 15:03:00.541562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.465 [2024-07-25 15:03:00.541594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.465 [2024-07-25 15:03:00.541624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.465 [2024-07-25 15:03:00.541657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.465 [2024-07-25 15:03:00.541686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.465 [2024-07-25 15:03:00.541716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.465 [2024-07-25 15:03:00.541744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.465 [2024-07-25 15:03:00.541773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.465 [2024-07-25 15:03:00.541803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.465 [2024-07-25 15:03:00.541830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.465 [2024-07-25 15:03:00.541856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.465 [2024-07-25 15:03:00.541892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.465 [2024-07-25 
15:03:00.541921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.465 [2024-07-25 15:03:00.541953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.465 [2024-07-25 15:03:00.541981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.465 [2024-07-25 15:03:00.542009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.465 [2024-07-25 15:03:00.542036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.465 [2024-07-25 15:03:00.542069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.465 [2024-07-25 15:03:00.542097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.465 [2024-07-25 15:03:00.542125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.465 [2024-07-25 15:03:00.542157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.465 [2024-07-25 15:03:00.542185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.465 [2024-07-25 15:03:00.542218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.465 [2024-07-25 15:03:00.542245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.465 [2024-07-25 15:03:00.542277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.465 [2024-07-25 15:03:00.542305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.465 [2024-07-25 15:03:00.542339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.465 [2024-07-25 15:03:00.542369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.465 [2024-07-25 15:03:00.542401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.465 [2024-07-25 15:03:00.542433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.465 [2024-07-25 15:03:00.542456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.465 [2024-07-25 15:03:00.542488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.465 [2024-07-25 15:03:00.542522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.465 [2024-07-25 15:03:00.542551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.465 [2024-07-25 15:03:00.542578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.465 [2024-07-25 15:03:00.542606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.465 [2024-07-25 15:03:00.542636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.465 [2024-07-25 15:03:00.542665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.465 [2024-07-25 15:03:00.542700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.465 [2024-07-25 15:03:00.542733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.465 [2024-07-25 15:03:00.542760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.465 
[2024-07-25 15:03:00.542788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.465 [2024-07-25 15:03:00.542812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.465 [2024-07-25 15:03:00.542841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.465 [2024-07-25 15:03:00.542868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.465 [2024-07-25 15:03:00.542897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.465 [2024-07-25 15:03:00.542926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.465 [2024-07-25 15:03:00.542953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.465 [2024-07-25 15:03:00.542985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.465 [2024-07-25 15:03:00.543015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.465 [2024-07-25 15:03:00.543043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.465 [2024-07-25 15:03:00.543072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.465 [2024-07-25 15:03:00.543103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.465 [2024-07-25 15:03:00.543134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.465 [2024-07-25 15:03:00.543158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.465 [2024-07-25 15:03:00.543187] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.465 [2024-07-25 15:03:00.543219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.465 [2024-07-25 15:03:00.543270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.465 [2024-07-25 15:03:00.543300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.465 [2024-07-25 15:03:00.543434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.465 [2024-07-25 15:03:00.543462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.465 [2024-07-25 15:03:00.543493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.465 [2024-07-25 15:03:00.543521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.465 [2024-07-25 15:03:00.543547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.465 [2024-07-25 15:03:00.543578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.465 [2024-07-25 15:03:00.543619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.465 [2024-07-25 15:03:00.543649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.465 [2024-07-25 15:03:00.543678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.465 [2024-07-25 15:03:00.543706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.465 [2024-07-25 15:03:00.543733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:08.466 [2024-07-25 15:03:00.543759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.466 [2024-07-25 15:03:00.543789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.466 [2024-07-25 15:03:00.543816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.466 [2024-07-25 15:03:00.543843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.466 [2024-07-25 15:03:00.543871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.466 [2024-07-25 15:03:00.544106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.466 [2024-07-25 15:03:00.544139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.466 [2024-07-25 15:03:00.544167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.466 [2024-07-25 15:03:00.544216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.466 [2024-07-25 15:03:00.544246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.466 [2024-07-25 15:03:00.544285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.466 [2024-07-25 15:03:00.544314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.466 [2024-07-25 15:03:00.544376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.466 [2024-07-25 15:03:00.544405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.466 [2024-07-25 15:03:00.544434] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.466 [2024-07-25 15:03:00.544459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.466 [2024-07-25 15:03:00.544486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.466 [2024-07-25 15:03:00.544516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.466 [2024-07-25 15:03:00.544547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.466 [2024-07-25 15:03:00.544577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.466 [2024-07-25 15:03:00.544603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.466 [2024-07-25 15:03:00.544633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.466 [2024-07-25 15:03:00.544664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.466 [2024-07-25 15:03:00.544693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.466 [2024-07-25 15:03:00.544722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.466 [2024-07-25 15:03:00.544756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.466 [2024-07-25 15:03:00.544783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.466 [2024-07-25 15:03:00.544814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.466 [2024-07-25 15:03:00.544842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:08.466 [2024-07-25 15:03:00.544868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.466 [2024-07-25 15:03:00.544895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.466 [2024-07-25 15:03:00.544934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.466 [2024-07-25 15:03:00.544964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.466 [2024-07-25 15:03:00.544992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.466 [2024-07-25 15:03:00.545016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.466 [2024-07-25 15:03:00.545049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.466 [2024-07-25 15:03:00.545081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.466 [2024-07-25 15:03:00.545114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.466 [2024-07-25 15:03:00.545148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.466 [2024-07-25 15:03:00.545180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.466 [2024-07-25 15:03:00.545213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.466 [2024-07-25 15:03:00.545239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.466 [2024-07-25 15:03:00.545267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.466 [2024-07-25 
15:03:00.545294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.466 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:07:08.467 [2024-07-25 15:03:00.556091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd:
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.468 [2024-07-25 15:03:00.556122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.468 [2024-07-25 15:03:00.556152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.468 [2024-07-25 15:03:00.556180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.468 [2024-07-25 15:03:00.556214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.468 [2024-07-25 15:03:00.556242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.468 [2024-07-25 15:03:00.556274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.468 [2024-07-25 15:03:00.556303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.468 [2024-07-25 15:03:00.556331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.468 [2024-07-25 15:03:00.556371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.468 [2024-07-25 15:03:00.556400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.468 [2024-07-25 15:03:00.556431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.468 [2024-07-25 15:03:00.556460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.468 [2024-07-25 15:03:00.556489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.468 [2024-07-25 15:03:00.556522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.468 
[2024-07-25 15:03:00.556551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.468 [2024-07-25 15:03:00.556578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.468 [2024-07-25 15:03:00.556606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.468 [2024-07-25 15:03:00.556639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.468 [2024-07-25 15:03:00.556669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.468 [2024-07-25 15:03:00.556697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.468 [2024-07-25 15:03:00.556725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.468 [2024-07-25 15:03:00.556759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.468 [2024-07-25 15:03:00.556788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.468 [2024-07-25 15:03:00.556820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.468 [2024-07-25 15:03:00.556846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.468 [2024-07-25 15:03:00.556968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.468 [2024-07-25 15:03:00.557000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.468 [2024-07-25 15:03:00.557029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.468 [2024-07-25 15:03:00.557058] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.468 [2024-07-25 15:03:00.557082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.468 [2024-07-25 15:03:00.557112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.468 [2024-07-25 15:03:00.557140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.468 [2024-07-25 15:03:00.557166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.468 [2024-07-25 15:03:00.557194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.468 [2024-07-25 15:03:00.557226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.468 [2024-07-25 15:03:00.557252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.468 [2024-07-25 15:03:00.557281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.468 [2024-07-25 15:03:00.557311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.468 [2024-07-25 15:03:00.557337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.468 [2024-07-25 15:03:00.557367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.468 [2024-07-25 15:03:00.557643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.468 [2024-07-25 15:03:00.557677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.468 [2024-07-25 15:03:00.557706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:08.468 [2024-07-25 15:03:00.557739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.468 [2024-07-25 15:03:00.557768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.468 [2024-07-25 15:03:00.557800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.468 [2024-07-25 15:03:00.557852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.468 [2024-07-25 15:03:00.557882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.468 [2024-07-25 15:03:00.557916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.468 [2024-07-25 15:03:00.557943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.468 [2024-07-25 15:03:00.557971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.468 [2024-07-25 15:03:00.558000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.468 [2024-07-25 15:03:00.558029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.468 [2024-07-25 15:03:00.558058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.468 [2024-07-25 15:03:00.558088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.468 [2024-07-25 15:03:00.558117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.468 [2024-07-25 15:03:00.558145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.468 [2024-07-25 15:03:00.558174] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.468 [2024-07-25 15:03:00.558207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.469 [2024-07-25 15:03:00.558236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.469 [2024-07-25 15:03:00.558272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.469 [2024-07-25 15:03:00.558299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.469 [2024-07-25 15:03:00.558347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.469 [2024-07-25 15:03:00.558376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.469 [2024-07-25 15:03:00.558404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.469 [2024-07-25 15:03:00.558437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.469 [2024-07-25 15:03:00.558467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.469 [2024-07-25 15:03:00.558496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.469 [2024-07-25 15:03:00.558525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.469 [2024-07-25 15:03:00.558554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.469 [2024-07-25 15:03:00.558582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.469 [2024-07-25 15:03:00.558614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:08.469 [2024-07-25 15:03:00.558644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.469 [2024-07-25 15:03:00.558673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.469 [2024-07-25 15:03:00.558701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.469 [2024-07-25 15:03:00.558731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.469 [2024-07-25 15:03:00.558759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.469 [2024-07-25 15:03:00.558788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.469 [2024-07-25 15:03:00.558818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.469 [2024-07-25 15:03:00.558846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.469 [2024-07-25 15:03:00.558874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.469 [2024-07-25 15:03:00.558902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.469 [2024-07-25 15:03:00.558930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.469 [2024-07-25 15:03:00.558958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.469 [2024-07-25 15:03:00.558984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.469 [2024-07-25 15:03:00.559009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.469 [2024-07-25 
15:03:00.559040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.469 [2024-07-25 15:03:00.559072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.469 [2024-07-25 15:03:00.559099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.469 [2024-07-25 15:03:00.559476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.469 [2024-07-25 15:03:00.559506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.469 [2024-07-25 15:03:00.559534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.469 [2024-07-25 15:03:00.559563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.469 [2024-07-25 15:03:00.559595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.469 [2024-07-25 15:03:00.559624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.469 [2024-07-25 15:03:00.559652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.469 [2024-07-25 15:03:00.559686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.469 [2024-07-25 15:03:00.559714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.469 [2024-07-25 15:03:00.559744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.469 [2024-07-25 15:03:00.559776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.469 [2024-07-25 15:03:00.559803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.469 [2024-07-25 15:03:00.559829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.469 [2024-07-25 15:03:00.559855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.469 [2024-07-25 15:03:00.559885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.469 [2024-07-25 15:03:00.559914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.469 [2024-07-25 15:03:00.559945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.469 [2024-07-25 15:03:00.559981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.469 [2024-07-25 15:03:00.560010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.469 [2024-07-25 15:03:00.560038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.469 [2024-07-25 15:03:00.560075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.469 [2024-07-25 15:03:00.560106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.469 [2024-07-25 15:03:00.560137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.469 [2024-07-25 15:03:00.560167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.469 [2024-07-25 15:03:00.560194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.469 [2024-07-25 15:03:00.560221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.469 
[2024-07-25 15:03:00.560251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.469 [2024-07-25 15:03:00.560283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.469 [2024-07-25 15:03:00.560310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.469 [2024-07-25 15:03:00.560337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.469 [2024-07-25 15:03:00.560363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.469 [2024-07-25 15:03:00.560393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.469 [2024-07-25 15:03:00.560425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.469 [2024-07-25 15:03:00.560458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.469 [2024-07-25 15:03:00.560489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.469 [2024-07-25 15:03:00.560512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.469 [2024-07-25 15:03:00.560535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.469 [2024-07-25 15:03:00.560558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.469 [2024-07-25 15:03:00.560582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.469 [2024-07-25 15:03:00.560605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.469 [2024-07-25 15:03:00.560628] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.469 [2024-07-25 15:03:00.560651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.469 [2024-07-25 15:03:00.560674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.469 [2024-07-25 15:03:00.560704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.469 [2024-07-25 15:03:00.560732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.469 [2024-07-25 15:03:00.560763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.469 [2024-07-25 15:03:00.560794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.469 [2024-07-25 15:03:00.560823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.469 [2024-07-25 15:03:00.560850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.469 [2024-07-25 15:03:00.560880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.469 [2024-07-25 15:03:00.560905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.469 [2024-07-25 15:03:00.560933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.469 [2024-07-25 15:03:00.560965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.469 [2024-07-25 15:03:00.560997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.469 [2024-07-25 15:03:00.561026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:08.469 [2024-07-25 15:03:00.561055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.469 [2024-07-25 15:03:00.561082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.469 [2024-07-25 15:03:00.561112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.469 [2024-07-25 15:03:00.561139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.469 [2024-07-25 15:03:00.561163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.469 [2024-07-25 15:03:00.561186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.469 [2024-07-25 15:03:00.561213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.469 [2024-07-25 15:03:00.561236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.469 [2024-07-25 15:03:00.561259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.469 [2024-07-25 15:03:00.561363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.469 [2024-07-25 15:03:00.561387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.469 [2024-07-25 15:03:00.561411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.469 [2024-07-25 15:03:00.561438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.469 [2024-07-25 15:03:00.561462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.469 [2024-07-25 15:03:00.561485] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.469 [2024-07-25 15:03:00.561509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.469 [2024-07-25 15:03:00.561533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.469 [2024-07-25 15:03:00.561556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.469 [2024-07-25 15:03:00.561580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.469 [2024-07-25 15:03:00.561604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.469 [2024-07-25 15:03:00.561632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.469 [2024-07-25 15:03:00.561662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.469 [2024-07-25 15:03:00.561691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.469 [2024-07-25 15:03:00.561924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.469 [2024-07-25 15:03:00.561953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.469 [2024-07-25 15:03:00.561982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.469 [2024-07-25 15:03:00.562014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.469 [2024-07-25 15:03:00.562043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.469 [2024-07-25 15:03:00.562070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:08.469 [2024-07-25 15:03:00.562100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.469 [2024-07-25 15:03:00.562135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.469 [2024-07-25 15:03:00.562164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.469 [2024-07-25 15:03:00.562194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.469 [2024-07-25 15:03:00.562227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.469 [2024-07-25 15:03:00.562257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.469 [2024-07-25 15:03:00.562302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.469 [2024-07-25 15:03:00.562329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.469 [2024-07-25 15:03:00.562360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.469 [2024-07-25 15:03:00.562388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.469 [2024-07-25 15:03:00.562418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.469 [2024-07-25 15:03:00.562447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.469 [2024-07-25 15:03:00.562474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.469 [2024-07-25 15:03:00.562500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.469 [2024-07-25 
15:03:00.562527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.760 [2024-07-25
15:03:00.573272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.760 [2024-07-25 15:03:00.573295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.760 [2024-07-25 15:03:00.573318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.760 [2024-07-25 15:03:00.573341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.760 [2024-07-25 15:03:00.573365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.760 [2024-07-25 15:03:00.573389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.760 [2024-07-25 15:03:00.573411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.760 [2024-07-25 15:03:00.573434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.760 [2024-07-25 15:03:00.573458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.760 [2024-07-25 15:03:00.573486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.760 [2024-07-25 15:03:00.573517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.760 [2024-07-25 15:03:00.573546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.760 [2024-07-25 15:03:00.573575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.760 [2024-07-25 15:03:00.573606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.760 [2024-07-25 15:03:00.573636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.760 [2024-07-25 15:03:00.573668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.760 [2024-07-25 15:03:00.573697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.760 [2024-07-25 15:03:00.573725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.760 [2024-07-25 15:03:00.573750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.760 [2024-07-25 15:03:00.573774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.760 [2024-07-25 15:03:00.573797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.760 [2024-07-25 15:03:00.573821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.760 [2024-07-25 15:03:00.573844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.760 [2024-07-25 15:03:00.573868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.760 [2024-07-25 15:03:00.573893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.760 [2024-07-25 15:03:00.573916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.760 [2024-07-25 15:03:00.573940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.760 [2024-07-25 15:03:00.573963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.760 [2024-07-25 15:03:00.573987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.760 
[2024-07-25 15:03:00.574011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.760 [2024-07-25 15:03:00.574034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.760 [2024-07-25 15:03:00.574057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.760 [2024-07-25 15:03:00.574081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.760 [2024-07-25 15:03:00.574104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.760 [2024-07-25 15:03:00.574127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.760 [2024-07-25 15:03:00.574158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.760 [2024-07-25 15:03:00.574189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.760 [2024-07-25 15:03:00.574224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.760 [2024-07-25 15:03:00.574252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.760 [2024-07-25 15:03:00.574281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.760 [2024-07-25 15:03:00.574310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.760 [2024-07-25 15:03:00.574340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.760 [2024-07-25 15:03:00.574367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.760 [2024-07-25 15:03:00.574398] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.760 [2024-07-25 15:03:00.574722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.760 [2024-07-25 15:03:00.574754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.760 [2024-07-25 15:03:00.574783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.760 [2024-07-25 15:03:00.574812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.760 [2024-07-25 15:03:00.574838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.760 [2024-07-25 15:03:00.574865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.760 [2024-07-25 15:03:00.574897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.760 [2024-07-25 15:03:00.574935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.760 [2024-07-25 15:03:00.574962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.760 [2024-07-25 15:03:00.574987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.760 [2024-07-25 15:03:00.575013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.760 [2024-07-25 15:03:00.575036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.760 [2024-07-25 15:03:00.575060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.760 [2024-07-25 15:03:00.575083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:08.760 [2024-07-25 15:03:00.575106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.760 [2024-07-25 15:03:00.575129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.760 [2024-07-25 15:03:00.575152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.760 [2024-07-25 15:03:00.575445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.760 [2024-07-25 15:03:00.575474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.760 [2024-07-25 15:03:00.575500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.760 [2024-07-25 15:03:00.575528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.760 [2024-07-25 15:03:00.575557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.760 [2024-07-25 15:03:00.575587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.760 [2024-07-25 15:03:00.575618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.760 [2024-07-25 15:03:00.575646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.760 [2024-07-25 15:03:00.575680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.760 [2024-07-25 15:03:00.575709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.760 [2024-07-25 15:03:00.575739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.760 [2024-07-25 15:03:00.575768] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.760 [2024-07-25 15:03:00.575810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.760 [2024-07-25 15:03:00.575838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.760 [2024-07-25 15:03:00.575870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.760 [2024-07-25 15:03:00.575898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.760 [2024-07-25 15:03:00.575927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.760 [2024-07-25 15:03:00.575957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.760 [2024-07-25 15:03:00.575984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.760 [2024-07-25 15:03:00.576010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.760 [2024-07-25 15:03:00.576038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.760 [2024-07-25 15:03:00.576066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.760 [2024-07-25 15:03:00.576094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.760 [2024-07-25 15:03:00.576128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.760 [2024-07-25 15:03:00.576162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.760 [2024-07-25 15:03:00.576204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:08.760 [2024-07-25 15:03:00.576233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.760 [2024-07-25 15:03:00.576261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.761 [2024-07-25 15:03:00.576287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.761 [2024-07-25 15:03:00.576315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.761 [2024-07-25 15:03:00.576342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.761 [2024-07-25 15:03:00.576370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.761 [2024-07-25 15:03:00.576398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.761 [2024-07-25 15:03:00.576429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.761 [2024-07-25 15:03:00.576457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.761 [2024-07-25 15:03:00.576483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.761 [2024-07-25 15:03:00.576509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.761 [2024-07-25 15:03:00.576553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.761 [2024-07-25 15:03:00.576581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.761 [2024-07-25 15:03:00.576618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.761 [2024-07-25 
15:03:00.576647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.761 [2024-07-25 15:03:00.576676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.761 [2024-07-25 15:03:00.576707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.761 [2024-07-25 15:03:00.576735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.761 [2024-07-25 15:03:00.576764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.761 [2024-07-25 15:03:00.576794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.761 [2024-07-25 15:03:00.576824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.761 [2024-07-25 15:03:00.577002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.761 [2024-07-25 15:03:00.577032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.761 [2024-07-25 15:03:00.577059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.761 [2024-07-25 15:03:00.577089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.761 [2024-07-25 15:03:00.577119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.761 [2024-07-25 15:03:00.577147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.761 [2024-07-25 15:03:00.577176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.761 [2024-07-25 15:03:00.577208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.761 [2024-07-25 15:03:00.577238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.761 [2024-07-25 15:03:00.577266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.761 [2024-07-25 15:03:00.577302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.761 [2024-07-25 15:03:00.577331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.761 [2024-07-25 15:03:00.577363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.761 [2024-07-25 15:03:00.577392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.761 [2024-07-25 15:03:00.577420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.761 [2024-07-25 15:03:00.577450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.761 [2024-07-25 15:03:00.577478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.761 [2024-07-25 15:03:00.577509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.761 [2024-07-25 15:03:00.577535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.761 [2024-07-25 15:03:00.577571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.761 [2024-07-25 15:03:00.577601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.761 [2024-07-25 15:03:00.577631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.761 
[2024-07-25 15:03:00.577659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.761 [2024-07-25 15:03:00.577690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.761 [2024-07-25 15:03:00.577717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.761 [2024-07-25 15:03:00.577745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.761 [2024-07-25 15:03:00.577782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.761 [2024-07-25 15:03:00.577819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.761 [2024-07-25 15:03:00.577852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.761 [2024-07-25 15:03:00.577881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.761 [2024-07-25 15:03:00.577909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.761 [2024-07-25 15:03:00.577937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.761 [2024-07-25 15:03:00.577960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.761 [2024-07-25 15:03:00.577991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.761 [2024-07-25 15:03:00.578025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.761 [2024-07-25 15:03:00.578054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.761 [2024-07-25 15:03:00.578082] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.761 [2024-07-25 15:03:00.578117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.761 [2024-07-25 15:03:00.578147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.761 [2024-07-25 15:03:00.578176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.761 [2024-07-25 15:03:00.578211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.761 [2024-07-25 15:03:00.578242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.761 [2024-07-25 15:03:00.578276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.761 [2024-07-25 15:03:00.578313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.761 [2024-07-25 15:03:00.578345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.761 [2024-07-25 15:03:00.578370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.761 [2024-07-25 15:03:00.578399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.761 [2024-07-25 15:03:00.578425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.761 [2024-07-25 15:03:00.578452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.761 [2024-07-25 15:03:00.578484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.761 [2024-07-25 15:03:00.578518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:08.761 [2024-07-25 15:03:00.578550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.761 [2024-07-25 15:03:00.578575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.761 [2024-07-25 15:03:00.578602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.761 [2024-07-25 15:03:00.578629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.761 [2024-07-25 15:03:00.578659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.761 [2024-07-25 15:03:00.578691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.761 [2024-07-25 15:03:00.578719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.761 [2024-07-25 15:03:00.578748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.761 [2024-07-25 15:03:00.578771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.761 [2024-07-25 15:03:00.578794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.761 [2024-07-25 15:03:00.578819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.761 [2024-07-25 15:03:00.578847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.761 [2024-07-25 15:03:00.578878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.761 [2024-07-25 15:03:00.579012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.761 [2024-07-25 15:03:00.579047] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.761 [2024-07-25 15:03:00.579082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.761 [2024-07-25 15:03:00.579111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.761 [2024-07-25 15:03:00.579140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.761 [2024-07-25 15:03:00.579169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.761 [2024-07-25 15:03:00.579199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.761 [2024-07-25 15:03:00.579230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.761 [2024-07-25 15:03:00.579259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.761 [2024-07-25 15:03:00.579289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.762 [2024-07-25 15:03:00.579318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.762 [2024-07-25 15:03:00.579346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.762 [2024-07-25 15:03:00.579374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.762 [2024-07-25 15:03:00.579400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.762 [2024-07-25 15:03:00.579428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.762 [2024-07-25 15:03:00.579457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:08.762 [2024-07-25 15:03:00.579681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.763 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:07:08.765 [2024-07-25 15:03:00.589005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512
> SGL length 1 00:07:08.765 [2024-07-25 15:03:00.589033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.765 [2024-07-25 15:03:00.589061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.765 [2024-07-25 15:03:00.589090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.765 [2024-07-25 15:03:00.589120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.765 [2024-07-25 15:03:00.589146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.765 [2024-07-25 15:03:00.589171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.765 [2024-07-25 15:03:00.589205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.765 [2024-07-25 15:03:00.589238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.765 [2024-07-25 15:03:00.589269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.765 [2024-07-25 15:03:00.589299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.765 [2024-07-25 15:03:00.589333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.765 [2024-07-25 15:03:00.589364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.765 [2024-07-25 15:03:00.589394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.765 [2024-07-25 15:03:00.589424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.765 [2024-07-25 15:03:00.589452] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.765 [2024-07-25 15:03:00.589822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.765 [2024-07-25 15:03:00.589852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.765 [2024-07-25 15:03:00.589881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.765 [2024-07-25 15:03:00.589909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.765 [2024-07-25 15:03:00.589943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.765 [2024-07-25 15:03:00.589972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.765 [2024-07-25 15:03:00.590004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.765 [2024-07-25 15:03:00.590035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.765 [2024-07-25 15:03:00.590064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.765 [2024-07-25 15:03:00.590092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.765 [2024-07-25 15:03:00.590126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.765 [2024-07-25 15:03:00.590155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.765 [2024-07-25 15:03:00.590185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.765 [2024-07-25 15:03:00.590238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:08.765 [2024-07-25 15:03:00.590269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.765 [2024-07-25 15:03:00.590298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.765 [2024-07-25 15:03:00.590327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.765 [2024-07-25 15:03:00.590357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.765 [2024-07-25 15:03:00.590387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.765 [2024-07-25 15:03:00.590416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.765 [2024-07-25 15:03:00.590453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.765 [2024-07-25 15:03:00.590482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.765 [2024-07-25 15:03:00.590511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.765 [2024-07-25 15:03:00.590542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.765 [2024-07-25 15:03:00.590574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.765 [2024-07-25 15:03:00.590610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.765 [2024-07-25 15:03:00.590645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.765 [2024-07-25 15:03:00.590687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.765 [2024-07-25 
15:03:00.590726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.765 [2024-07-25 15:03:00.590758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.765 [2024-07-25 15:03:00.590786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.765 [2024-07-25 15:03:00.590813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.765 [2024-07-25 15:03:00.590840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.765 [2024-07-25 15:03:00.590865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.765 [2024-07-25 15:03:00.590893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.765 [2024-07-25 15:03:00.590926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.765 [2024-07-25 15:03:00.590958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.765 [2024-07-25 15:03:00.590993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.765 [2024-07-25 15:03:00.591028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.765 [2024-07-25 15:03:00.591057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.765 [2024-07-25 15:03:00.591085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.765 [2024-07-25 15:03:00.591110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.765 [2024-07-25 15:03:00.591142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.765 [2024-07-25 15:03:00.591173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.765 [2024-07-25 15:03:00.591204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.765 [2024-07-25 15:03:00.591234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.765 [2024-07-25 15:03:00.591264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.766 [2024-07-25 15:03:00.591297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.766 [2024-07-25 15:03:00.591323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.766 [2024-07-25 15:03:00.591363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.766 [2024-07-25 15:03:00.591391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.766 [2024-07-25 15:03:00.591436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.766 [2024-07-25 15:03:00.591464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.766 [2024-07-25 15:03:00.591513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.766 [2024-07-25 15:03:00.591542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.766 [2024-07-25 15:03:00.591571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.766 [2024-07-25 15:03:00.591600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.766 
[2024-07-25 15:03:00.591632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.766 [2024-07-25 15:03:00.591661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.766 [2024-07-25 15:03:00.591692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.766 [2024-07-25 15:03:00.591722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.766 [2024-07-25 15:03:00.591749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.766 [2024-07-25 15:03:00.591779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.766 [2024-07-25 15:03:00.591807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.766 [2024-07-25 15:03:00.591930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.766 [2024-07-25 15:03:00.591958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.766 [2024-07-25 15:03:00.591986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.766 [2024-07-25 15:03:00.592017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.766 [2024-07-25 15:03:00.592046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.766 [2024-07-25 15:03:00.592076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.766 [2024-07-25 15:03:00.592099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.766 [2024-07-25 15:03:00.592125] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.766 [2024-07-25 15:03:00.592148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.766 [2024-07-25 15:03:00.592172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.766 [2024-07-25 15:03:00.592194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.766 [2024-07-25 15:03:00.592220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.766 [2024-07-25 15:03:00.592244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.766 [2024-07-25 15:03:00.592267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.766 [2024-07-25 15:03:00.592290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.766 [2024-07-25 15:03:00.592522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.766 [2024-07-25 15:03:00.592547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.766 [2024-07-25 15:03:00.592570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.766 [2024-07-25 15:03:00.592592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.766 [2024-07-25 15:03:00.592615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.766 [2024-07-25 15:03:00.592638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.766 [2024-07-25 15:03:00.592662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:08.766 [2024-07-25 15:03:00.592685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.766 [2024-07-25 15:03:00.592708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.766 [2024-07-25 15:03:00.592731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.766 [2024-07-25 15:03:00.592754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.766 [2024-07-25 15:03:00.592778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.766 [2024-07-25 15:03:00.592801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.766 [2024-07-25 15:03:00.592824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.766 [2024-07-25 15:03:00.592847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.766 [2024-07-25 15:03:00.592870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.766 [2024-07-25 15:03:00.592893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.766 [2024-07-25 15:03:00.592923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.766 [2024-07-25 15:03:00.592951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.766 [2024-07-25 15:03:00.592983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.766 [2024-07-25 15:03:00.593012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.766 [2024-07-25 15:03:00.593043] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.766 [2024-07-25 15:03:00.593075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.766 [2024-07-25 15:03:00.593105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.766 [2024-07-25 15:03:00.593130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.766 [2024-07-25 15:03:00.593153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.766 [2024-07-25 15:03:00.593176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.766 [2024-07-25 15:03:00.593229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.766 [2024-07-25 15:03:00.593258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.766 [2024-07-25 15:03:00.593290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.766 [2024-07-25 15:03:00.593320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.766 [2024-07-25 15:03:00.593347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.766 [2024-07-25 15:03:00.593381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.766 [2024-07-25 15:03:00.593412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.766 [2024-07-25 15:03:00.593443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.766 [2024-07-25 15:03:00.593473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:08.766 [2024-07-25 15:03:00.593503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.766 [2024-07-25 15:03:00.593532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.766 [2024-07-25 15:03:00.593570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.766 [2024-07-25 15:03:00.593608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.766 [2024-07-25 15:03:00.593639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.766 [2024-07-25 15:03:00.593669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.766 [2024-07-25 15:03:00.593696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.766 [2024-07-25 15:03:00.593723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.766 [2024-07-25 15:03:00.593751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.766 [2024-07-25 15:03:00.593779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.766 [2024-07-25 15:03:00.593805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.766 [2024-07-25 15:03:00.593828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.766 [2024-07-25 15:03:00.594123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.766 [2024-07-25 15:03:00.594153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.766 [2024-07-25 
15:03:00.594183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.766 [2024-07-25 15:03:00.594215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.766 [2024-07-25 15:03:00.594245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.766 [2024-07-25 15:03:00.594274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.766 [2024-07-25 15:03:00.594302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.766 [2024-07-25 15:03:00.594329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.766 [2024-07-25 15:03:00.594357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.766 [2024-07-25 15:03:00.594413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.766 [2024-07-25 15:03:00.594442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.766 [2024-07-25 15:03:00.594470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.766 [2024-07-25 15:03:00.594500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.766 [2024-07-25 15:03:00.594529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.767 [2024-07-25 15:03:00.594556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.767 [2024-07-25 15:03:00.594585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.767 [2024-07-25 15:03:00.594614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.767 [2024-07-25 15:03:00.594641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.767 [2024-07-25 15:03:00.594678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.767 [2024-07-25 15:03:00.594707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.767 [2024-07-25 15:03:00.594733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.767 [2024-07-25 15:03:00.594762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.767 [2024-07-25 15:03:00.594791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.767 [2024-07-25 15:03:00.594819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.767 [2024-07-25 15:03:00.594846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.767 [2024-07-25 15:03:00.594885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.767 [2024-07-25 15:03:00.594916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.767 [2024-07-25 15:03:00.594944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.767 [2024-07-25 15:03:00.594972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.767 [2024-07-25 15:03:00.595004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.767 [2024-07-25 15:03:00.595033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.767 
[2024-07-25 15:03:00.595064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.767 [2024-07-25 15:03:00.595093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.767 [2024-07-25 15:03:00.595121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.767 [2024-07-25 15:03:00.595152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.767 [2024-07-25 15:03:00.595180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.767 [2024-07-25 15:03:00.595215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.767 [2024-07-25 15:03:00.595245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.767 [2024-07-25 15:03:00.595272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.767 [2024-07-25 15:03:00.595299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.767 [2024-07-25 15:03:00.595326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.767 [2024-07-25 15:03:00.595353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.767 [2024-07-25 15:03:00.595388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.767 [2024-07-25 15:03:00.595425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.767 [2024-07-25 15:03:00.595457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.767 [2024-07-25 15:03:00.595486] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.767 [2024-07-25 15:03:00.595514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.767 [2024-07-25 15:03:00.595538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.767 [2024-07-25 15:03:00.595570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.767 [2024-07-25 15:03:00.595601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.767 [2024-07-25 15:03:00.595630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.767 [2024-07-25 15:03:00.595925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.767 [2024-07-25 15:03:00.595956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.767 [2024-07-25 15:03:00.596001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.767 [2024-07-25 15:03:00.596028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.767 [2024-07-25 15:03:00.596056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.767 [2024-07-25 15:03:00.596085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.767 [2024-07-25 15:03:00.596117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.767 [2024-07-25 15:03:00.596143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.767 [2024-07-25 15:03:00.596170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:08.767 [2024-07-25 15:03:00.596203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512
[previous message repeated verbatim with successive timestamps through 2024-07-25 15:03:00.606957]
> SGL length 1 00:07:08.770 [2024-07-25 15:03:00.606985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.770 [2024-07-25 15:03:00.607014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.770 [2024-07-25 15:03:00.607044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.770 [2024-07-25 15:03:00.607076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.770 [2024-07-25 15:03:00.607106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.770 [2024-07-25 15:03:00.607135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.770 [2024-07-25 15:03:00.607163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.770 [2024-07-25 15:03:00.607192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.770 [2024-07-25 15:03:00.607236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.771 [2024-07-25 15:03:00.607264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.771 [2024-07-25 15:03:00.607295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.771 [2024-07-25 15:03:00.607333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.771 [2024-07-25 15:03:00.607360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.771 [2024-07-25 15:03:00.607408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.771 [2024-07-25 15:03:00.607439] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.771 [2024-07-25 15:03:00.607471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.771 [2024-07-25 15:03:00.607500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.771 [2024-07-25 15:03:00.607528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.771 [2024-07-25 15:03:00.607562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.771 [2024-07-25 15:03:00.607590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.771 [2024-07-25 15:03:00.607620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.771 [2024-07-25 15:03:00.607655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.771 [2024-07-25 15:03:00.607687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.771 [2024-07-25 15:03:00.607717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.771 [2024-07-25 15:03:00.607746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.771 [2024-07-25 15:03:00.607776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.771 [2024-07-25 15:03:00.607805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.771 [2024-07-25 15:03:00.607840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.771 [2024-07-25 15:03:00.607872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:08.771 [2024-07-25 15:03:00.607901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.771 [2024-07-25 15:03:00.607929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.771 [2024-07-25 15:03:00.607954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.771 [2024-07-25 15:03:00.607984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.771 [2024-07-25 15:03:00.608012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.771 [2024-07-25 15:03:00.608040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.771 [2024-07-25 15:03:00.608067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.771 [2024-07-25 15:03:00.608099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.771 [2024-07-25 15:03:00.608128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.771 [2024-07-25 15:03:00.608158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.771 [2024-07-25 15:03:00.608190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.771 [2024-07-25 15:03:00.608224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.771 [2024-07-25 15:03:00.608254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.771 [2024-07-25 15:03:00.608283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.771 [2024-07-25 
15:03:00.608310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.771 [2024-07-25 15:03:00.608340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.771 [2024-07-25 15:03:00.608369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.771 [2024-07-25 15:03:00.608400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.771 [2024-07-25 15:03:00.608427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.771 [2024-07-25 15:03:00.608473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.771 [2024-07-25 15:03:00.608503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.771 [2024-07-25 15:03:00.608532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.771 [2024-07-25 15:03:00.608561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.771 [2024-07-25 15:03:00.608591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.771 [2024-07-25 15:03:00.608620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.771 [2024-07-25 15:03:00.608919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.771 [2024-07-25 15:03:00.608951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.771 [2024-07-25 15:03:00.608978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.771 [2024-07-25 15:03:00.609009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.771 [2024-07-25 15:03:00.609037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.771 [2024-07-25 15:03:00.609068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.771 [2024-07-25 15:03:00.609103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.771 [2024-07-25 15:03:00.609132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.771 [2024-07-25 15:03:00.609159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.771 [2024-07-25 15:03:00.609189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.771 [2024-07-25 15:03:00.609220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.771 [2024-07-25 15:03:00.609247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.771 [2024-07-25 15:03:00.609271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.771 [2024-07-25 15:03:00.609304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.771 [2024-07-25 15:03:00.609335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.771 [2024-07-25 15:03:00.609364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.771 [2024-07-25 15:03:00.609393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.771 [2024-07-25 15:03:00.609423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.771 
[2024-07-25 15:03:00.609454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.771 [2024-07-25 15:03:00.609495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.771 [2024-07-25 15:03:00.609525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.771 [2024-07-25 15:03:00.609553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.771 [2024-07-25 15:03:00.609581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.771 [2024-07-25 15:03:00.609607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.771 [2024-07-25 15:03:00.609637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.771 [2024-07-25 15:03:00.609664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.771 [2024-07-25 15:03:00.609691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.771 [2024-07-25 15:03:00.609722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.771 [2024-07-25 15:03:00.609749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.771 [2024-07-25 15:03:00.609778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.771 [2024-07-25 15:03:00.609802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.771 [2024-07-25 15:03:00.609826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.771 [2024-07-25 15:03:00.609850] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.771 [2024-07-25 15:03:00.609881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.771 [2024-07-25 15:03:00.609911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.771 [2024-07-25 15:03:00.609940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.771 [2024-07-25 15:03:00.609971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.771 [2024-07-25 15:03:00.610001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.771 [2024-07-25 15:03:00.610032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.771 [2024-07-25 15:03:00.610061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.771 [2024-07-25 15:03:00.610087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.771 [2024-07-25 15:03:00.610111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.771 [2024-07-25 15:03:00.610135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.771 [2024-07-25 15:03:00.610158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.771 [2024-07-25 15:03:00.610183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.771 [2024-07-25 15:03:00.610210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.771 [2024-07-25 15:03:00.610233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:08.771 [2024-07-25 15:03:00.610256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.771 [2024-07-25 15:03:00.610280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.772 [2024-07-25 15:03:00.610304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.772 [2024-07-25 15:03:00.610328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.772 [2024-07-25 15:03:00.610356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.772 [2024-07-25 15:03:00.610387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.772 [2024-07-25 15:03:00.610416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.772 [2024-07-25 15:03:00.610450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.772 [2024-07-25 15:03:00.610473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.772 [2024-07-25 15:03:00.610498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.772 [2024-07-25 15:03:00.610522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.772 [2024-07-25 15:03:00.610545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.772 [2024-07-25 15:03:00.610570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.772 [2024-07-25 15:03:00.610595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.772 [2024-07-25 15:03:00.610619] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.772 [2024-07-25 15:03:00.610643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.772 [2024-07-25 15:03:00.610996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.772 [2024-07-25 15:03:00.611032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.772 [2024-07-25 15:03:00.611060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.772 [2024-07-25 15:03:00.611104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.772 [2024-07-25 15:03:00.611133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.772 [2024-07-25 15:03:00.611162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.772 [2024-07-25 15:03:00.611191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.772 [2024-07-25 15:03:00.611224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.772 [2024-07-25 15:03:00.611253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.772 [2024-07-25 15:03:00.611283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.772 [2024-07-25 15:03:00.611328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.772 [2024-07-25 15:03:00.611359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.772 [2024-07-25 15:03:00.611393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:08.772 [2024-07-25 15:03:00.611424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.772 [2024-07-25 15:03:00.611456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.772 [2024-07-25 15:03:00.611483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.772 [2024-07-25 15:03:00.611510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.772 [2024-07-25 15:03:00.611536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.772 [2024-07-25 15:03:00.611564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.772 [2024-07-25 15:03:00.611597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.772 [2024-07-25 15:03:00.611628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.772 [2024-07-25 15:03:00.611658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.772 [2024-07-25 15:03:00.611687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.772 [2024-07-25 15:03:00.611714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.772 [2024-07-25 15:03:00.611743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.772 [2024-07-25 15:03:00.611785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.772 [2024-07-25 15:03:00.611821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.772 [2024-07-25 
15:03:00.611850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.772 [2024-07-25 15:03:00.611878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.772 [2024-07-25 15:03:00.611909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.772 [2024-07-25 15:03:00.611937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.772 [2024-07-25 15:03:00.611967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.772 [2024-07-25 15:03:00.611997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.772 [2024-07-25 15:03:00.612021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.772 [2024-07-25 15:03:00.612051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.772 [2024-07-25 15:03:00.612080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.772 [2024-07-25 15:03:00.612109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.772 [2024-07-25 15:03:00.612141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.772 [2024-07-25 15:03:00.612169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.772 [2024-07-25 15:03:00.612202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.772 [2024-07-25 15:03:00.612234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.772 [2024-07-25 15:03:00.612265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.772 [2024-07-25 15:03:00.612299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.772 [2024-07-25 15:03:00.612328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.772 [2024-07-25 15:03:00.612357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.772 [2024-07-25 15:03:00.612387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.772 [2024-07-25 15:03:00.612420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.772 [2024-07-25 15:03:00.612448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.772 [2024-07-25 15:03:00.612476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.772 [2024-07-25 15:03:00.612502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.772 [2024-07-25 15:03:00.612530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.772 [2024-07-25 15:03:00.612560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.772 [2024-07-25 15:03:00.612591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.772 [2024-07-25 15:03:00.612632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.772 [2024-07-25 15:03:00.612661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.772 [2024-07-25 15:03:00.612691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.772 
[2024-07-25 15:03:00.612720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.772 [2024-07-25 15:03:00.612748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.772 [2024-07-25 15:03:00.612780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.772 [2024-07-25 15:03:00.612811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.772 [2024-07-25 15:03:00.612842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.772 [2024-07-25 15:03:00.612870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.772 [2024-07-25 15:03:00.612901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.772 [2024-07-25 15:03:00.612929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.772 [2024-07-25 15:03:00.613280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.772 [2024-07-25 15:03:00.613317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.772 [2024-07-25 15:03:00.613347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.772 [2024-07-25 15:03:00.613376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.772 [2024-07-25 15:03:00.613405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.772 [2024-07-25 15:03:00.613434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.772 [2024-07-25 15:03:00.613462] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.772 [2024-07-25 15:03:00.613490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.775 Message suppressed 999 times: [2024-07-25 15:03:00.620544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.775 Read completed with error (sct=0, sc=15) 00:07:08.776 
[2024-07-25 15:03:00.624083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.776 [2024-07-25 15:03:00.624106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.776 [2024-07-25 15:03:00.624130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.776 [2024-07-25 15:03:00.624154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.776 [2024-07-25 15:03:00.624196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.776 [2024-07-25 15:03:00.624228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.776 [2024-07-25 15:03:00.624257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.776 [2024-07-25 15:03:00.624287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.776 [2024-07-25 15:03:00.624676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.776 [2024-07-25 15:03:00.624705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.776 [2024-07-25 15:03:00.624736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.776 [2024-07-25 15:03:00.624765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.776 [2024-07-25 15:03:00.624794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.776 [2024-07-25 15:03:00.624833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.776 [2024-07-25 15:03:00.624863] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.776 [2024-07-25 15:03:00.624893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.776 [2024-07-25 15:03:00.624927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.776 [2024-07-25 15:03:00.624958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.776 [2024-07-25 15:03:00.624987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.776 [2024-07-25 15:03:00.625012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.776 [2024-07-25 15:03:00.625041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.776 [2024-07-25 15:03:00.625072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.776 [2024-07-25 15:03:00.625105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.776 [2024-07-25 15:03:00.625135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.776 [2024-07-25 15:03:00.625164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.776 [2024-07-25 15:03:00.625192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.776 [2024-07-25 15:03:00.625226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.776 [2024-07-25 15:03:00.625256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.776 [2024-07-25 15:03:00.625284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:08.776 [2024-07-25 15:03:00.625312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.776 [2024-07-25 15:03:00.625343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.776 [2024-07-25 15:03:00.625372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.776 [2024-07-25 15:03:00.625402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.776 [2024-07-25 15:03:00.625427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.776 [2024-07-25 15:03:00.625457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.776 [2024-07-25 15:03:00.625486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.776 [2024-07-25 15:03:00.625516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.776 [2024-07-25 15:03:00.625546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.776 [2024-07-25 15:03:00.625573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.776 [2024-07-25 15:03:00.625601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.776 [2024-07-25 15:03:00.625634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.776 [2024-07-25 15:03:00.625664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.776 [2024-07-25 15:03:00.625969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.776 [2024-07-25 15:03:00.626004] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.776 [2024-07-25 15:03:00.626034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.776 [2024-07-25 15:03:00.626063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.776 [2024-07-25 15:03:00.626092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.776 [2024-07-25 15:03:00.626142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.776 [2024-07-25 15:03:00.626170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.776 [2024-07-25 15:03:00.626203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.776 [2024-07-25 15:03:00.626232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.776 [2024-07-25 15:03:00.626262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.776 [2024-07-25 15:03:00.626297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.776 [2024-07-25 15:03:00.626324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.776 [2024-07-25 15:03:00.626359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.776 [2024-07-25 15:03:00.626389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.776 [2024-07-25 15:03:00.626418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.776 [2024-07-25 15:03:00.626449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:08.776 [2024-07-25 15:03:00.626478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.776 [2024-07-25 15:03:00.626505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.776 [2024-07-25 15:03:00.626537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.777 [2024-07-25 15:03:00.626565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.777 [2024-07-25 15:03:00.626610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.777 [2024-07-25 15:03:00.626638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.777 [2024-07-25 15:03:00.626671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.777 [2024-07-25 15:03:00.626711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.777 [2024-07-25 15:03:00.626743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.777 [2024-07-25 15:03:00.626774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.777 [2024-07-25 15:03:00.626801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.777 [2024-07-25 15:03:00.626830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.777 [2024-07-25 15:03:00.626858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.777 [2024-07-25 15:03:00.626887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.777 [2024-07-25 
15:03:00.626916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.777 [2024-07-25 15:03:00.626948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.777 [2024-07-25 15:03:00.626975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.777 [2024-07-25 15:03:00.627002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.777 [2024-07-25 15:03:00.627030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.777 [2024-07-25 15:03:00.627058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.777 [2024-07-25 15:03:00.627086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.777 [2024-07-25 15:03:00.627121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.777 [2024-07-25 15:03:00.627152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.777 [2024-07-25 15:03:00.627183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.777 [2024-07-25 15:03:00.627215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.777 [2024-07-25 15:03:00.627243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.777 [2024-07-25 15:03:00.627273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.777 [2024-07-25 15:03:00.627299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.777 [2024-07-25 15:03:00.627328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.777 [2024-07-25 15:03:00.627361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.777 [2024-07-25 15:03:00.627389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.777 [2024-07-25 15:03:00.627420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.777 [2024-07-25 15:03:00.627448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.777 [2024-07-25 15:03:00.627478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.777 [2024-07-25 15:03:00.627507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.777 [2024-07-25 15:03:00.627533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.777 [2024-07-25 15:03:00.627565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.777 [2024-07-25 15:03:00.627594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.777 [2024-07-25 15:03:00.627628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.777 [2024-07-25 15:03:00.627657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.777 [2024-07-25 15:03:00.627684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.777 [2024-07-25 15:03:00.627714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.777 [2024-07-25 15:03:00.627741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.777 
[2024-07-25 15:03:00.627764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.777 [2024-07-25 15:03:00.627791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.777 [2024-07-25 15:03:00.627823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.777 [2024-07-25 15:03:00.627853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.777 [2024-07-25 15:03:00.627892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.777 [2024-07-25 15:03:00.628211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.777 [2024-07-25 15:03:00.628239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.777 [2024-07-25 15:03:00.628267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.777 [2024-07-25 15:03:00.628300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.777 [2024-07-25 15:03:00.628330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.777 [2024-07-25 15:03:00.628359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.777 [2024-07-25 15:03:00.628387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.777 [2024-07-25 15:03:00.628417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.777 [2024-07-25 15:03:00.628461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.777 [2024-07-25 15:03:00.628493] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.777 [2024-07-25 15:03:00.628522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.777 [2024-07-25 15:03:00.628557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.777 [2024-07-25 15:03:00.628585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.777 [2024-07-25 15:03:00.628615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.777 [2024-07-25 15:03:00.628644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.777 [2024-07-25 15:03:00.628670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.777 [2024-07-25 15:03:00.628701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.777 [2024-07-25 15:03:00.628733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.777 [2024-07-25 15:03:00.628760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.777 [2024-07-25 15:03:00.628787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.777 [2024-07-25 15:03:00.628822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.777 [2024-07-25 15:03:00.628846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.777 [2024-07-25 15:03:00.628877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.777 [2024-07-25 15:03:00.628906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:08.777 [2024-07-25 15:03:00.628936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.777 [2024-07-25 15:03:00.628963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.777 [2024-07-25 15:03:00.628994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.777 [2024-07-25 15:03:00.629020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.777 [2024-07-25 15:03:00.629077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.777 [2024-07-25 15:03:00.629106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.777 [2024-07-25 15:03:00.629136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.777 [2024-07-25 15:03:00.629165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.777 [2024-07-25 15:03:00.629193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.777 [2024-07-25 15:03:00.629220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.777 [2024-07-25 15:03:00.629248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.777 [2024-07-25 15:03:00.629281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.777 [2024-07-25 15:03:00.629306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.777 [2024-07-25 15:03:00.629329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.777 [2024-07-25 15:03:00.629353] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.777 [2024-07-25 15:03:00.629382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.777 [2024-07-25 15:03:00.629410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.777 [2024-07-25 15:03:00.629441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.777 [2024-07-25 15:03:00.629471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.777 [2024-07-25 15:03:00.629501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.777 [2024-07-25 15:03:00.629532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.778 [2024-07-25 15:03:00.629560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.778 [2024-07-25 15:03:00.629586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.778 [2024-07-25 15:03:00.629613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.778 [2024-07-25 15:03:00.629641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.778 [2024-07-25 15:03:00.629671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.778 [2024-07-25 15:03:00.629700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.778 [2024-07-25 15:03:00.629728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.778 [2024-07-25 15:03:00.629757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:08.778 [2024-07-25 15:03:00.629786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.778 [2024-07-25 15:03:00.629816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.778 [2024-07-25 15:03:00.629843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.778 [2024-07-25 15:03:00.629875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.778 [2024-07-25 15:03:00.629903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.778 [2024-07-25 15:03:00.629935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.778 [2024-07-25 15:03:00.629963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.778 [2024-07-25 15:03:00.629990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.778 [2024-07-25 15:03:00.630039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.778 [2024-07-25 15:03:00.630068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.778 [2024-07-25 15:03:00.630413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.778 [2024-07-25 15:03:00.630446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.778 [2024-07-25 15:03:00.630478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.778 [2024-07-25 15:03:00.630506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.778 [2024-07-25 
15:03:00.630535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.778 [2024-07-25 15:03:00.630559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.778 [2024-07-25 15:03:00.630589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.778 [2024-07-25 15:03:00.630619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.778 [2024-07-25 15:03:00.630649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.778 [2024-07-25 15:03:00.630678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.778 [2024-07-25 15:03:00.630712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.778 [2024-07-25 15:03:00.630741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.778 [2024-07-25 15:03:00.630770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.778 [2024-07-25 15:03:00.630800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.778 [2024-07-25 15:03:00.630828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.778 [2024-07-25 15:03:00.630855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.778 [2024-07-25 15:03:00.630885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.778 [2024-07-25 15:03:00.630915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.778 [2024-07-25 15:03:00.630943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.778 [2024-07-25 15:03:00.630977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.781 [2024-07-25 15:03:00.641591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd:
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.781 [2024-07-25 15:03:00.641622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.781 [2024-07-25 15:03:00.641652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.781 [2024-07-25 15:03:00.641681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.781 [2024-07-25 15:03:00.641713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.781 [2024-07-25 15:03:00.641744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.781 [2024-07-25 15:03:00.641777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.781 [2024-07-25 15:03:00.641802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.781 [2024-07-25 15:03:00.641825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.781 [2024-07-25 15:03:00.641850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.781 [2024-07-25 15:03:00.641873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.781 [2024-07-25 15:03:00.641896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.781 [2024-07-25 15:03:00.641922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.781 [2024-07-25 15:03:00.641954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.781 [2024-07-25 15:03:00.641985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.781 
[2024-07-25 15:03:00.642017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.781 [2024-07-25 15:03:00.642047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.781 [2024-07-25 15:03:00.642071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.781 [2024-07-25 15:03:00.642096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.781 [2024-07-25 15:03:00.642120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.781 [2024-07-25 15:03:00.642143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.782 [2024-07-25 15:03:00.642167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.782 [2024-07-25 15:03:00.642191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.782 [2024-07-25 15:03:00.642219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.782 [2024-07-25 15:03:00.642244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.782 [2024-07-25 15:03:00.642267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.782 [2024-07-25 15:03:00.642291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.782 [2024-07-25 15:03:00.642314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.782 [2024-07-25 15:03:00.642338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.782 [2024-07-25 15:03:00.642361] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.782 [2024-07-25 15:03:00.642385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.782 [2024-07-25 15:03:00.642409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.782 [2024-07-25 15:03:00.642433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.782 [2024-07-25 15:03:00.642457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.782 [2024-07-25 15:03:00.642480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.782 [2024-07-25 15:03:00.642504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.782 [2024-07-25 15:03:00.642528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.782 [2024-07-25 15:03:00.642558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.782 [2024-07-25 15:03:00.642588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.782 [2024-07-25 15:03:00.642615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.782 [2024-07-25 15:03:00.642643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.782 [2024-07-25 15:03:00.642672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.782 [2024-07-25 15:03:00.642703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.782 [2024-07-25 15:03:00.642732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:08.782 [2024-07-25 15:03:00.642767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.782 [2024-07-25 15:03:00.642799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.782 [2024-07-25 15:03:00.642828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.782 [2024-07-25 15:03:00.642872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.782 [2024-07-25 15:03:00.642899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.782 [2024-07-25 15:03:00.642930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.782 [2024-07-25 15:03:00.642963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.782 [2024-07-25 15:03:00.642991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.782 [2024-07-25 15:03:00.643023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.782 [2024-07-25 15:03:00.643050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.782 [2024-07-25 15:03:00.643097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.782 [2024-07-25 15:03:00.643128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.782 [2024-07-25 15:03:00.643162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.782 [2024-07-25 15:03:00.643190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.782 [2024-07-25 15:03:00.643221] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.782 [2024-07-25 15:03:00.643261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.782 [2024-07-25 15:03:00.643612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.782 [2024-07-25 15:03:00.643644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.782 [2024-07-25 15:03:00.643703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.782 [2024-07-25 15:03:00.643731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.782 [2024-07-25 15:03:00.643759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.782 [2024-07-25 15:03:00.643796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.782 [2024-07-25 15:03:00.643836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.782 [2024-07-25 15:03:00.643865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.782 [2024-07-25 15:03:00.643895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.782 [2024-07-25 15:03:00.643925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.782 [2024-07-25 15:03:00.643964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.782 [2024-07-25 15:03:00.644001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.782 [2024-07-25 15:03:00.644032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:08.782 [2024-07-25 15:03:00.644061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.782 [2024-07-25 15:03:00.644089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.782 [2024-07-25 15:03:00.644116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.782 [2024-07-25 15:03:00.644144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.782 [2024-07-25 15:03:00.644179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.782 [2024-07-25 15:03:00.644220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.782 [2024-07-25 15:03:00.644258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.782 [2024-07-25 15:03:00.644290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.782 [2024-07-25 15:03:00.644319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.782 [2024-07-25 15:03:00.644348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.782 [2024-07-25 15:03:00.644378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.782 [2024-07-25 15:03:00.644405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.782 [2024-07-25 15:03:00.644428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.782 [2024-07-25 15:03:00.644459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.782 [2024-07-25 
15:03:00.644494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.782 [2024-07-25 15:03:00.644521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.782 [2024-07-25 15:03:00.644554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.782 [2024-07-25 15:03:00.644583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.782 [2024-07-25 15:03:00.644613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.782 [2024-07-25 15:03:00.644643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.782 [2024-07-25 15:03:00.644672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.782 [2024-07-25 15:03:00.644703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.782 [2024-07-25 15:03:00.644730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.782 [2024-07-25 15:03:00.644764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.782 [2024-07-25 15:03:00.644792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.782 [2024-07-25 15:03:00.644822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.782 [2024-07-25 15:03:00.644852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.782 [2024-07-25 15:03:00.644881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.783 [2024-07-25 15:03:00.644910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.783 [2024-07-25 15:03:00.644940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.783 [2024-07-25 15:03:00.644978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.783 [2024-07-25 15:03:00.645007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.783 [2024-07-25 15:03:00.645037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.783 [2024-07-25 15:03:00.645066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.783 [2024-07-25 15:03:00.645094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.783 [2024-07-25 15:03:00.645124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.783 [2024-07-25 15:03:00.645150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.783 [2024-07-25 15:03:00.645194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.783 [2024-07-25 15:03:00.645226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.783 [2024-07-25 15:03:00.645284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.783 [2024-07-25 15:03:00.645314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.783 [2024-07-25 15:03:00.645343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.783 [2024-07-25 15:03:00.645373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.783 
[2024-07-25 15:03:00.645402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.783 [2024-07-25 15:03:00.645432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.783 [2024-07-25 15:03:00.645460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.783 [2024-07-25 15:03:00.645499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.783 [2024-07-25 15:03:00.645526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.783 [2024-07-25 15:03:00.645574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.783 [2024-07-25 15:03:00.645603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.783 [2024-07-25 15:03:00.645630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.783 [2024-07-25 15:03:00.645973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.783 [2024-07-25 15:03:00.646004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.783 [2024-07-25 15:03:00.646031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.783 [2024-07-25 15:03:00.646062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.783 [2024-07-25 15:03:00.646091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.783 [2024-07-25 15:03:00.646120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.783 [2024-07-25 15:03:00.646148] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.783 [2024-07-25 15:03:00.646172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.783 [2024-07-25 15:03:00.646204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.783 [2024-07-25 15:03:00.646234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.783 [2024-07-25 15:03:00.646263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.783 [2024-07-25 15:03:00.646290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.783 [2024-07-25 15:03:00.646317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.783 [2024-07-25 15:03:00.646349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.783 [2024-07-25 15:03:00.646375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.783 [2024-07-25 15:03:00.646406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.783 [2024-07-25 15:03:00.646433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.783 [2024-07-25 15:03:00.646461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.783 [2024-07-25 15:03:00.646488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.783 [2024-07-25 15:03:00.646514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.783 [2024-07-25 15:03:00.646547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:08.783 [2024-07-25 15:03:00.646574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.783 [2024-07-25 15:03:00.646609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.783 [2024-07-25 15:03:00.646640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.783 [2024-07-25 15:03:00.646669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.783 [2024-07-25 15:03:00.646701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.783 [2024-07-25 15:03:00.646731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.783 [2024-07-25 15:03:00.646761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.783 [2024-07-25 15:03:00.646784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.783 [2024-07-25 15:03:00.646815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.783 [2024-07-25 15:03:00.646848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.783 [2024-07-25 15:03:00.646879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.783 [2024-07-25 15:03:00.646917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.783 [2024-07-25 15:03:00.646947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.783 [2024-07-25 15:03:00.646975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.783 [2024-07-25 15:03:00.647000] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.783 [2024-07-25 15:03:00.647028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.783 [2024-07-25 15:03:00.647057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.783 [2024-07-25 15:03:00.647085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.783 [2024-07-25 15:03:00.647118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.783 [2024-07-25 15:03:00.647142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.783 [2024-07-25 15:03:00.647166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.783 [2024-07-25 15:03:00.647189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.783 [2024-07-25 15:03:00.647215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.783 [2024-07-25 15:03:00.647238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.783 [2024-07-25 15:03:00.647260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.783 [2024-07-25 15:03:00.647288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.783 [2024-07-25 15:03:00.647317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.783 [2024-07-25 15:03:00.647346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.783 [2024-07-25 15:03:00.647377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:08.783 [2024-07-25 15:03:00.647401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.783 [2024-07-25 15:03:00.647424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.783 [2024-07-25 15:03:00.647447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.783 [2024-07-25 15:03:00.647470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.783 [2024-07-25 15:03:00.647493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.783 [2024-07-25 15:03:00.647516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.783 [2024-07-25 15:03:00.647539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.783 [2024-07-25 15:03:00.647563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.783 [2024-07-25 15:03:00.647595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.783 [2024-07-25 15:03:00.647624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.783 [2024-07-25 15:03:00.647653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.783 [2024-07-25 15:03:00.647683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.783 [2024-07-25 15:03:00.647712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.783 [2024-07-25 15:03:00.648112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.783 [2024-07-25 
15:03:00.648143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.783 [... same *ERROR* line repeated with successive timestamps 15:03:00.648174 through 15:03:00.653670 ...] true 00:07:08.785 [... same *ERROR* line repeated with successive timestamps 15:03:00.653697 through 15:03:00.655336 ...] Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:07:08.786 [... same *ERROR* line repeated with successive timestamps 15:03:00.655369 through 15:03:00.658351 ...] [2024-07-25 15:03:00.658379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd:
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.787 [2024-07-25 15:03:00.658410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.787 [2024-07-25 15:03:00.658433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.787 [2024-07-25 15:03:00.658456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.787 [2024-07-25 15:03:00.658479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.787 [2024-07-25 15:03:00.658508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.787 [2024-07-25 15:03:00.658535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.787 [2024-07-25 15:03:00.658564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.787 [2024-07-25 15:03:00.658595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.787 [2024-07-25 15:03:00.658623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.787 [2024-07-25 15:03:00.658651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.787 [2024-07-25 15:03:00.658679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.787 [2024-07-25 15:03:00.658708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.787 [2024-07-25 15:03:00.658736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.787 [2024-07-25 15:03:00.658762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.787 
[2024-07-25 15:03:00.658910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.787 [2024-07-25 15:03:00.658942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.787 [2024-07-25 15:03:00.658969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.787 [2024-07-25 15:03:00.659002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.787 [2024-07-25 15:03:00.659032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.787 [2024-07-25 15:03:00.659059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.787 [2024-07-25 15:03:00.659088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.787 [2024-07-25 15:03:00.659115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.787 [2024-07-25 15:03:00.659145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.787 [2024-07-25 15:03:00.659173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.787 [2024-07-25 15:03:00.659204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.787 [2024-07-25 15:03:00.659234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.787 [2024-07-25 15:03:00.659268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.787 [2024-07-25 15:03:00.659293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.787 [2024-07-25 15:03:00.659320] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.787 [2024-07-25 15:03:00.659346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.787 [2024-07-25 15:03:00.659374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.787 [2024-07-25 15:03:00.659596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.787 [2024-07-25 15:03:00.659626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.787 [2024-07-25 15:03:00.659657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.787 [2024-07-25 15:03:00.659682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.787 [2024-07-25 15:03:00.659714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.787 [2024-07-25 15:03:00.659745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.787 [2024-07-25 15:03:00.659775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.787 [2024-07-25 15:03:00.659805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.787 [2024-07-25 15:03:00.659834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.787 [2024-07-25 15:03:00.659860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.787 [2024-07-25 15:03:00.659889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.787 [2024-07-25 15:03:00.659916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:08.787 [2024-07-25 15:03:00.659947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.787 [2024-07-25 15:03:00.659977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.787 [2024-07-25 15:03:00.660003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.787 [2024-07-25 15:03:00.660031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.787 [2024-07-25 15:03:00.660079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.787 [2024-07-25 15:03:00.660106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.787 [2024-07-25 15:03:00.660138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.787 [2024-07-25 15:03:00.660168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.787 [2024-07-25 15:03:00.660198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.787 [2024-07-25 15:03:00.660233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.787 [2024-07-25 15:03:00.660263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.787 [2024-07-25 15:03:00.660290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.788 [2024-07-25 15:03:00.660318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.788 [2024-07-25 15:03:00.660345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.788 [2024-07-25 15:03:00.660369] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.788 [2024-07-25 15:03:00.660400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.788 [2024-07-25 15:03:00.660430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.788 [2024-07-25 15:03:00.660457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.788 [2024-07-25 15:03:00.660485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.788 [2024-07-25 15:03:00.660514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.788 [2024-07-25 15:03:00.660543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.788 [2024-07-25 15:03:00.660572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.788 [2024-07-25 15:03:00.660599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.788 [2024-07-25 15:03:00.660627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.788 [2024-07-25 15:03:00.660655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.788 [2024-07-25 15:03:00.660685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.788 [2024-07-25 15:03:00.660714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.788 [2024-07-25 15:03:00.660742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.788 [2024-07-25 15:03:00.660772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:08.788 [2024-07-25 15:03:00.660802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.788 [2024-07-25 15:03:00.660837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.788 [2024-07-25 15:03:00.660864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.788 [2024-07-25 15:03:00.660893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.788 [2024-07-25 15:03:00.660921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.788 [2024-07-25 15:03:00.661309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.788 [2024-07-25 15:03:00.661340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.788 [2024-07-25 15:03:00.661367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.788 [2024-07-25 15:03:00.661397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.788 [2024-07-25 15:03:00.661427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.788 [2024-07-25 15:03:00.661454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.788 [2024-07-25 15:03:00.661479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.788 [2024-07-25 15:03:00.661508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.788 [2024-07-25 15:03:00.661543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.788 [2024-07-25 
15:03:00.661575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.788 [2024-07-25 15:03:00.661617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.788 [2024-07-25 15:03:00.661656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.788 [2024-07-25 15:03:00.661692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.788 [2024-07-25 15:03:00.661725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.788 [2024-07-25 15:03:00.661755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.788 [2024-07-25 15:03:00.661785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.788 [2024-07-25 15:03:00.661815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.788 [2024-07-25 15:03:00.661843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.788 [2024-07-25 15:03:00.661874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.788 [2024-07-25 15:03:00.661910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.788 [2024-07-25 15:03:00.661937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.788 [2024-07-25 15:03:00.661964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.788 [2024-07-25 15:03:00.661990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.788 [2024-07-25 15:03:00.662022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.788 [2024-07-25 15:03:00.662050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.788 [2024-07-25 15:03:00.662088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.788 [2024-07-25 15:03:00.662117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.788 [2024-07-25 15:03:00.662147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.788 [2024-07-25 15:03:00.662176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.788 [2024-07-25 15:03:00.662207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.788 [2024-07-25 15:03:00.662235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.788 [2024-07-25 15:03:00.662262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.788 [2024-07-25 15:03:00.662293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.788 [2024-07-25 15:03:00.662322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.788 [2024-07-25 15:03:00.662417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.788 [2024-07-25 15:03:00.662446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.788 [2024-07-25 15:03:00.662479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.788 [2024-07-25 15:03:00.662506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.788 
[2024-07-25 15:03:00.662533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.788 [2024-07-25 15:03:00.662561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.788 [2024-07-25 15:03:00.662591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.788 [2024-07-25 15:03:00.662617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.788 [2024-07-25 15:03:00.662645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.788 [2024-07-25 15:03:00.662672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.788 [2024-07-25 15:03:00.662699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.788 [2024-07-25 15:03:00.662726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.788 [2024-07-25 15:03:00.662752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.788 [2024-07-25 15:03:00.662779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.788 [2024-07-25 15:03:00.662805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.788 [2024-07-25 15:03:00.662834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.788 [2024-07-25 15:03:00.662862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.788 [2024-07-25 15:03:00.662891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.788 [2024-07-25 15:03:00.662925] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.788 [2024-07-25 15:03:00.662951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.788 [2024-07-25 15:03:00.662978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.788 [2024-07-25 15:03:00.663006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.788 [2024-07-25 15:03:00.663036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.788 [2024-07-25 15:03:00.663068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.788 [2024-07-25 15:03:00.663096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.788 [2024-07-25 15:03:00.663125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.788 [2024-07-25 15:03:00.663151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.788 [2024-07-25 15:03:00.663179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.788 [2024-07-25 15:03:00.663211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.788 [2024-07-25 15:03:00.663240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.788 [2024-07-25 15:03:00.663268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.788 [2024-07-25 15:03:00.663329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.788 [2024-07-25 15:03:00.663358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:08.788 [2024-07-25 15:03:00.663388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.788 [2024-07-25 15:03:00.663414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.788 [2024-07-25 15:03:00.663442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.789 [2024-07-25 15:03:00.663469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.789 [2024-07-25 15:03:00.663496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.789 [2024-07-25 15:03:00.663525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.789 [2024-07-25 15:03:00.663551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.789 [2024-07-25 15:03:00.663580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.789 [2024-07-25 15:03:00.663607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.789 [2024-07-25 15:03:00.663638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.789 [2024-07-25 15:03:00.663665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.789 [2024-07-25 15:03:00.663694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.789 [2024-07-25 15:03:00.663723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.789 [2024-07-25 15:03:00.663753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.789 [2024-07-25 15:03:00.664123] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.789 [2024-07-25 15:03:00.664152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.789 [2024-07-25 15:03:00.664179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.789 [2024-07-25 15:03:00.664215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.789 [2024-07-25 15:03:00.664255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.789 [2024-07-25 15:03:00.664287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.789 [2024-07-25 15:03:00.664320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.789 [2024-07-25 15:03:00.664348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.789 [2024-07-25 15:03:00.664378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.789 [2024-07-25 15:03:00.664408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.789 [2024-07-25 15:03:00.664435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.789 [2024-07-25 15:03:00.664461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.789 [2024-07-25 15:03:00.664489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.789 [2024-07-25 15:03:00.664515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.789 [2024-07-25 15:03:00.664544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:08.789 [2024-07-25 15:03:00.664574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.789 [2024-07-25 15:03:00.664603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.789 [2024-07-25 15:03:00.664631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.789 [2024-07-25 15:03:00.664659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.789 [2024-07-25 15:03:00.664686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.789 [2024-07-25 15:03:00.664714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.789 [2024-07-25 15:03:00.664742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.789 [2024-07-25 15:03:00.664771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.789 [2024-07-25 15:03:00.664803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.789 [2024-07-25 15:03:00.664832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.789 [2024-07-25 15:03:00.664860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.789 [2024-07-25 15:03:00.664888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.789 [2024-07-25 15:03:00.664916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.789 [2024-07-25 15:03:00.664947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.789 [2024-07-25 
15:03:00.664975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.789 [message repeated at timestamps 15:03:00.665002 through 15:03:00.675294] 00:07:08.792 [2024-07-25 
15:03:00.675325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.792 [2024-07-25 15:03:00.675355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.792 [2024-07-25 15:03:00.675760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.792 [2024-07-25 15:03:00.675792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.792 [2024-07-25 15:03:00.675823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.792 [2024-07-25 15:03:00.675852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.792 [2024-07-25 15:03:00.675879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.792 [2024-07-25 15:03:00.675907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.792 [2024-07-25 15:03:00.675940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.792 [2024-07-25 15:03:00.675968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.792 [2024-07-25 15:03:00.675997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.792 [2024-07-25 15:03:00.676025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.792 [2024-07-25 15:03:00.676070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.792 [2024-07-25 15:03:00.676100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.792 [2024-07-25 15:03:00.676130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.792 [2024-07-25 15:03:00.676158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.793 [2024-07-25 15:03:00.676187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.793 [2024-07-25 15:03:00.676220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.793 [2024-07-25 15:03:00.676249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.793 [2024-07-25 15:03:00.676279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.793 [2024-07-25 15:03:00.676311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.793 [2024-07-25 15:03:00.676339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.793 [2024-07-25 15:03:00.676368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.793 [2024-07-25 15:03:00.676395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.793 [2024-07-25 15:03:00.676420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.793 [2024-07-25 15:03:00.676450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.793 [2024-07-25 15:03:00.676478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.793 [2024-07-25 15:03:00.676505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.793 [2024-07-25 15:03:00.676533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.793 
[2024-07-25 15:03:00.676557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.793 [2024-07-25 15:03:00.676587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.793 [2024-07-25 15:03:00.676614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.793 [2024-07-25 15:03:00.676640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.793 [2024-07-25 15:03:00.676666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.793 [2024-07-25 15:03:00.676694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.793 [2024-07-25 15:03:00.676723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.793 [2024-07-25 15:03:00.676751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.793 [2024-07-25 15:03:00.676779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.793 [2024-07-25 15:03:00.676804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.793 [2024-07-25 15:03:00.676832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.793 [2024-07-25 15:03:00.676861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.793 [2024-07-25 15:03:00.676890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.793 [2024-07-25 15:03:00.676918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.793 [2024-07-25 15:03:00.676944] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.793 [2024-07-25 15:03:00.676971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.793 [2024-07-25 15:03:00.677000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.793 [2024-07-25 15:03:00.677028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.793 [2024-07-25 15:03:00.677055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.793 [2024-07-25 15:03:00.677083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.793 [2024-07-25 15:03:00.677113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.793 [2024-07-25 15:03:00.677140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.793 [2024-07-25 15:03:00.677166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.793 [2024-07-25 15:03:00.677196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.793 [2024-07-25 15:03:00.677227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.793 [2024-07-25 15:03:00.677261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.793 [2024-07-25 15:03:00.677289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.793 [2024-07-25 15:03:00.677319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.793 [2024-07-25 15:03:00.677351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:08.793 [2024-07-25 15:03:00.677381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.793 [2024-07-25 15:03:00.677415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.793 [2024-07-25 15:03:00.677445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.793 [2024-07-25 15:03:00.677473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.793 [2024-07-25 15:03:00.677504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.793 [2024-07-25 15:03:00.677532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.793 [2024-07-25 15:03:00.677562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.793 [2024-07-25 15:03:00.677591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.793 [2024-07-25 15:03:00.677944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.793 [2024-07-25 15:03:00.677973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.793 [2024-07-25 15:03:00.678002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.793 [2024-07-25 15:03:00.678029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.793 [2024-07-25 15:03:00.678056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.793 [2024-07-25 15:03:00.678087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.793 [2024-07-25 15:03:00.678117] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.793 [2024-07-25 15:03:00.678146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.793 [2024-07-25 15:03:00.678178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.793 [2024-07-25 15:03:00.678207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.793 [2024-07-25 15:03:00.678236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.793 [2024-07-25 15:03:00.678266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.793 [2024-07-25 15:03:00.678298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.793 [2024-07-25 15:03:00.678327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.793 [2024-07-25 15:03:00.678356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.793 [2024-07-25 15:03:00.678385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.793 [2024-07-25 15:03:00.678414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.793 [2024-07-25 15:03:00.678440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.793 [2024-07-25 15:03:00.678468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.793 [2024-07-25 15:03:00.678499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.793 [2024-07-25 15:03:00.678556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:08.793 [2024-07-25 15:03:00.678583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.793 [2024-07-25 15:03:00.678611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.793 [2024-07-25 15:03:00.678639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.793 [2024-07-25 15:03:00.678667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.793 [2024-07-25 15:03:00.678698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.793 [2024-07-25 15:03:00.678727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.793 [2024-07-25 15:03:00.678755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.793 [2024-07-25 15:03:00.678783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.793 [2024-07-25 15:03:00.678833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.793 [2024-07-25 15:03:00.678860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.793 [2024-07-25 15:03:00.678892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.793 [2024-07-25 15:03:00.678922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.793 [2024-07-25 15:03:00.678952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.793 [2024-07-25 15:03:00.678981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.793 [2024-07-25 
15:03:00.679013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.793 [2024-07-25 15:03:00.679044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.793 [2024-07-25 15:03:00.679072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.793 [2024-07-25 15:03:00.679100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.793 [2024-07-25 15:03:00.679130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.793 [2024-07-25 15:03:00.679160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.793 [2024-07-25 15:03:00.679184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.793 [2024-07-25 15:03:00.679218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.794 [2024-07-25 15:03:00.679250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.794 [2024-07-25 15:03:00.679279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.794 [2024-07-25 15:03:00.679307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.794 [2024-07-25 15:03:00.679334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.794 [2024-07-25 15:03:00.679362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.794 [2024-07-25 15:03:00.679390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.794 [2024-07-25 15:03:00.679417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.794 [2024-07-25 15:03:00.679441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.794 [2024-07-25 15:03:00.679470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.794 [2024-07-25 15:03:00.679499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.794 [2024-07-25 15:03:00.679529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.794 [2024-07-25 15:03:00.679559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.794 [2024-07-25 15:03:00.679588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.794 15:03:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 59262 00:07:08.794 [2024-07-25 15:03:00.679617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.794 [2024-07-25 15:03:00.679653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.794 [2024-07-25 15:03:00.679682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.794 [2024-07-25 15:03:00.679737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.794 [2024-07-25 15:03:00.679764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.794 [2024-07-25 15:03:00.679794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.794 [2024-07-25 15:03:00.679824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.794 15:03:00 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:08.794 [2024-07-25 15:03:00.680183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.794 [2024-07-25 15:03:00.680213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.794 [2024-07-25 15:03:00.680248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.794 [2024-07-25 15:03:00.680289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.794 [2024-07-25 15:03:00.680313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.794 [2024-07-25 15:03:00.680342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.794 [2024-07-25 15:03:00.680371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.794 [2024-07-25 15:03:00.680401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.794 [2024-07-25 15:03:00.680429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.794 [2024-07-25 15:03:00.680457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.794 [2024-07-25 15:03:00.680482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.794 [2024-07-25 15:03:00.680513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.794 [2024-07-25 15:03:00.680543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 
1 00:07:08.794 [2024-07-25 15:03:00.680575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.794 [2024-07-25 15:03:00.680602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.794 [2024-07-25 15:03:00.680631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.794 [2024-07-25 15:03:00.680663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.794 [2024-07-25 15:03:00.680692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.794 [2024-07-25 15:03:00.680720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.794 [2024-07-25 15:03:00.680751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.794 [2024-07-25 15:03:00.680780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.794 [2024-07-25 15:03:00.680807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.794 [2024-07-25 15:03:00.680836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.794 [2024-07-25 15:03:00.680864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.794 [2024-07-25 15:03:00.680892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.794 [2024-07-25 15:03:00.680921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.794 [2024-07-25 15:03:00.680950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.794 [2024-07-25 15:03:00.680977] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.794 [2024-07-25 15:03:00.681004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.794 [2024-07-25 15:03:00.681040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.794 [2024-07-25 15:03:00.681070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.794 [2024-07-25 15:03:00.681100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.794 [2024-07-25 15:03:00.681132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.794 [2024-07-25 15:03:00.681159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.794 [2024-07-25 15:03:00.681188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.794 [2024-07-25 15:03:00.681219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.794 [2024-07-25 15:03:00.681248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.794 [2024-07-25 15:03:00.681276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.794 [2024-07-25 15:03:00.681310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.794 [2024-07-25 15:03:00.681337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.794 [2024-07-25 15:03:00.681370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.794 [2024-07-25 15:03:00.681397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:08.794 [2024-07-25 15:03:00.681424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.794 [2024-07-25 15:03:00.681453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.794 [2024-07-25 15:03:00.681480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.794 [2024-07-25 15:03:00.681505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.794 [2024-07-25 15:03:00.681536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.794 [2024-07-25 15:03:00.681569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.794 [2024-07-25 15:03:00.681595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.794 [2024-07-25 15:03:00.681622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.794 [2024-07-25 15:03:00.681650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.794 [2024-07-25 15:03:00.681692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.794 [2024-07-25 15:03:00.681730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.794 [2024-07-25 15:03:00.681758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.794 [2024-07-25 15:03:00.681784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.794 [2024-07-25 15:03:00.681813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.794 [2024-07-25 15:03:00.681843] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.794 [... identical nvmf_bdev_ctrlr_read_cmd errors repeated from 15:03:00.681876 through 15:03:00.692455; duplicate lines omitted ...] 00:07:08.797 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:07:08.798 [2024-07-25 15:03:00.692455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:08.798 [2024-07-25 15:03:00.692480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.798 [2024-07-25 15:03:00.692507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.798 [2024-07-25 15:03:00.692539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.798 [2024-07-25 15:03:00.692563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.798 [2024-07-25 15:03:00.692590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.798 [2024-07-25 15:03:00.692618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.798 [2024-07-25 15:03:00.692646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.798 [2024-07-25 15:03:00.692674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.798 [2024-07-25 15:03:00.692703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.798 [2024-07-25 15:03:00.692731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.798 [2024-07-25 15:03:00.692758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.798 [2024-07-25 15:03:00.692786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.798 [2024-07-25 15:03:00.692812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.798 [2024-07-25 15:03:00.692844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.798 [2024-07-25 
15:03:00.692872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.798 [2024-07-25 15:03:00.692905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.798 [2024-07-25 15:03:00.692933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.798 [2024-07-25 15:03:00.692960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.798 [2024-07-25 15:03:00.692989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.798 [2024-07-25 15:03:00.693018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.798 [2024-07-25 15:03:00.693048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.798 [2024-07-25 15:03:00.693076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.798 [2024-07-25 15:03:00.693442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.798 [2024-07-25 15:03:00.693476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.798 [2024-07-25 15:03:00.693504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.798 [2024-07-25 15:03:00.693530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.798 [2024-07-25 15:03:00.693561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.798 [2024-07-25 15:03:00.693591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.798 [2024-07-25 15:03:00.693618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.798 [2024-07-25 15:03:00.693646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.798 [2024-07-25 15:03:00.693674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.798 [2024-07-25 15:03:00.693704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.798 [2024-07-25 15:03:00.693732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.798 [2024-07-25 15:03:00.693758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.798 [2024-07-25 15:03:00.693784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.798 [2024-07-25 15:03:00.693813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.798 [2024-07-25 15:03:00.693840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.798 [2024-07-25 15:03:00.693870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.798 [2024-07-25 15:03:00.693905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.798 [2024-07-25 15:03:00.693935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.798 [2024-07-25 15:03:00.693965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.798 [2024-07-25 15:03:00.693994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.798 [2024-07-25 15:03:00.694027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.798 
[2024-07-25 15:03:00.694056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.798 [2024-07-25 15:03:00.694083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.798 [2024-07-25 15:03:00.694112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.798 [2024-07-25 15:03:00.694140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.798 [2024-07-25 15:03:00.694168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.798 [2024-07-25 15:03:00.694195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.798 [2024-07-25 15:03:00.694226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.798 [2024-07-25 15:03:00.694253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.798 [2024-07-25 15:03:00.694282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.798 [2024-07-25 15:03:00.694313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.798 [2024-07-25 15:03:00.694365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.798 [2024-07-25 15:03:00.694395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.798 [2024-07-25 15:03:00.694424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.798 [2024-07-25 15:03:00.694453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.798 [2024-07-25 15:03:00.694480] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.798 [2024-07-25 15:03:00.694509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.798 [2024-07-25 15:03:00.694536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.798 [2024-07-25 15:03:00.694571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.798 [2024-07-25 15:03:00.694601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.798 [2024-07-25 15:03:00.694631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.798 [2024-07-25 15:03:00.694660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.798 [2024-07-25 15:03:00.694688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.798 [2024-07-25 15:03:00.694717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.798 [2024-07-25 15:03:00.694746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.798 [2024-07-25 15:03:00.694775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.798 [2024-07-25 15:03:00.694803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.798 [2024-07-25 15:03:00.694829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.798 [2024-07-25 15:03:00.694853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.798 [2024-07-25 15:03:00.694882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:08.798 [2024-07-25 15:03:00.694911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.798 [2024-07-25 15:03:00.694938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.799 [2024-07-25 15:03:00.694967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.799 [2024-07-25 15:03:00.694995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.799 [2024-07-25 15:03:00.695024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.799 [2024-07-25 15:03:00.695057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.799 [2024-07-25 15:03:00.695088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.799 [2024-07-25 15:03:00.695117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.799 [2024-07-25 15:03:00.695145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.799 [2024-07-25 15:03:00.695176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.799 [2024-07-25 15:03:00.695206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.799 [2024-07-25 15:03:00.695233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.799 [2024-07-25 15:03:00.695264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.799 [2024-07-25 15:03:00.695292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.799 [2024-07-25 15:03:00.695651] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.799 [2024-07-25 15:03:00.695681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.799 [2024-07-25 15:03:00.695708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.799 [2024-07-25 15:03:00.695734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.799 [2024-07-25 15:03:00.695762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.799 [2024-07-25 15:03:00.695790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.799 [2024-07-25 15:03:00.695819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.799 [2024-07-25 15:03:00.695851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.799 [2024-07-25 15:03:00.695879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.799 [2024-07-25 15:03:00.695922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.799 [2024-07-25 15:03:00.695950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.799 [2024-07-25 15:03:00.695979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.799 [2024-07-25 15:03:00.696010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.799 [2024-07-25 15:03:00.696039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.799 [2024-07-25 15:03:00.696078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:08.799 [2024-07-25 15:03:00.696109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.799 [2024-07-25 15:03:00.696138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.799 [2024-07-25 15:03:00.696166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.799 [2024-07-25 15:03:00.696203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.799 [2024-07-25 15:03:00.696234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.799 [2024-07-25 15:03:00.696265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.799 [2024-07-25 15:03:00.696302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.799 [2024-07-25 15:03:00.696333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.799 [2024-07-25 15:03:00.696362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.799 [2024-07-25 15:03:00.696394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.799 [2024-07-25 15:03:00.696421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.799 [2024-07-25 15:03:00.696449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.799 [2024-07-25 15:03:00.696478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.799 [2024-07-25 15:03:00.696506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.799 [2024-07-25 
15:03:00.696532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.799 [2024-07-25 15:03:00.696560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.799 [2024-07-25 15:03:00.696589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.799 [2024-07-25 15:03:00.696617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.799 [2024-07-25 15:03:00.696646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.799 [2024-07-25 15:03:00.696677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.799 [2024-07-25 15:03:00.696709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.799 [2024-07-25 15:03:00.696736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.799 [2024-07-25 15:03:00.696763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.799 [2024-07-25 15:03:00.696793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.799 [2024-07-25 15:03:00.696819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.799 [2024-07-25 15:03:00.696847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.799 [2024-07-25 15:03:00.696876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.799 [2024-07-25 15:03:00.696913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.799 [2024-07-25 15:03:00.696940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.799 [2024-07-25 15:03:00.696970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.799 [2024-07-25 15:03:00.696999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.799 [2024-07-25 15:03:00.697044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.799 [2024-07-25 15:03:00.697074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.799 [2024-07-25 15:03:00.697101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.799 [2024-07-25 15:03:00.697129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.799 [2024-07-25 15:03:00.697157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.799 [2024-07-25 15:03:00.697186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.799 [2024-07-25 15:03:00.697216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.799 [2024-07-25 15:03:00.697248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.799 [2024-07-25 15:03:00.697285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.799 [2024-07-25 15:03:00.697321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.799 [2024-07-25 15:03:00.697349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.799 [2024-07-25 15:03:00.697376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.799 
[2024-07-25 15:03:00.697408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.799 [2024-07-25 15:03:00.697434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.799 [2024-07-25 15:03:00.697459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.799 [2024-07-25 15:03:00.697488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.799 [2024-07-25 15:03:00.697521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.799 [2024-07-25 15:03:00.697895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.799 [2024-07-25 15:03:00.697926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.799 [2024-07-25 15:03:00.697954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.799 [2024-07-25 15:03:00.697980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.799 [2024-07-25 15:03:00.698011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.799 [2024-07-25 15:03:00.698041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.799 [2024-07-25 15:03:00.698069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.799 [2024-07-25 15:03:00.698097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.799 [2024-07-25 15:03:00.698128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.799 [2024-07-25 15:03:00.698158] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.799 [2024-07-25 15:03:00.698198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.799 [2024-07-25 15:03:00.698231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.799 [2024-07-25 15:03:00.698260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.799 [2024-07-25 15:03:00.698289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.799 [2024-07-25 15:03:00.698317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.799 [2024-07-25 15:03:00.698344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.799 [2024-07-25 15:03:00.698371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.800 [2024-07-25 15:03:00.698397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.800 [2024-07-25 15:03:00.698420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.800 [2024-07-25 15:03:00.698449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.800 [2024-07-25 15:03:00.698479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.800 [2024-07-25 15:03:00.698509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.800 [2024-07-25 15:03:00.698537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.800 [2024-07-25 15:03:00.698566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:08.800 [2024-07-25 15:03:00.698595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.800 [2024-07-25 15:03:00.698621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.800 [2024-07-25 15:03:00.698650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.800 [2024-07-25 15:03:00.698674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.800 [2024-07-25 15:03:00.698705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.800 [2024-07-25 15:03:00.698735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.800 [2024-07-25 15:03:00.698764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.800 [2024-07-25 15:03:00.698791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.800 [2024-07-25 15:03:00.698821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.800 [2024-07-25 15:03:00.698849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.800 [2024-07-25 15:03:00.698878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.800 [2024-07-25 15:03:00.698907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.800 [2024-07-25 15:03:00.698943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.800 [2024-07-25 15:03:00.698971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.800 [2024-07-25 15:03:00.698999] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.800 [2024-07-25 15:03:00.699028] (identical error repeated for timestamps 15:03:00.699028 through 15:03:00.709533)
> SGL length 1 00:07:08.803 [2024-07-25 15:03:00.709562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.803 [2024-07-25 15:03:00.709607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.803 [2024-07-25 15:03:00.709637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.803 [2024-07-25 15:03:00.709665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.803 [2024-07-25 15:03:00.709695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.803 [2024-07-25 15:03:00.709723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.803 [2024-07-25 15:03:00.709752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.803 [2024-07-25 15:03:00.709779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.803 [2024-07-25 15:03:00.709806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.803 [2024-07-25 15:03:00.709835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.803 [2024-07-25 15:03:00.709863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.803 [2024-07-25 15:03:00.709892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.803 [2024-07-25 15:03:00.709929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.803 [2024-07-25 15:03:00.709958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.803 [2024-07-25 15:03:00.709988] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.803 [2024-07-25 15:03:00.710016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.803 [2024-07-25 15:03:00.710046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.803 [2024-07-25 15:03:00.710073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.803 [2024-07-25 15:03:00.710102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.803 [2024-07-25 15:03:00.710130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.803 [2024-07-25 15:03:00.710157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.803 [2024-07-25 15:03:00.710185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.803 [2024-07-25 15:03:00.710217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.803 [2024-07-25 15:03:00.710247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.803 [2024-07-25 15:03:00.710276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.803 [2024-07-25 15:03:00.710309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.803 [2024-07-25 15:03:00.710341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.803 [2024-07-25 15:03:00.710370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.803 [2024-07-25 15:03:00.710399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:08.803 [2024-07-25 15:03:00.710428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.803 [2024-07-25 15:03:00.710457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.803 [2024-07-25 15:03:00.710484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.803 [2024-07-25 15:03:00.710510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.803 [2024-07-25 15:03:00.710537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.803 [2024-07-25 15:03:00.710565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.803 [2024-07-25 15:03:00.710594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.803 [2024-07-25 15:03:00.710642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.803 [2024-07-25 15:03:00.710670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.803 [2024-07-25 15:03:00.710700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.803 [2024-07-25 15:03:00.710729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.803 [2024-07-25 15:03:00.710755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.803 [2024-07-25 15:03:00.710782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.803 [2024-07-25 15:03:00.710811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.803 [2024-07-25 
15:03:00.710841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.803 [2024-07-25 15:03:00.710868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.803 [2024-07-25 15:03:00.710899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.803 [2024-07-25 15:03:00.710930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.803 [2024-07-25 15:03:00.710962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.803 [2024-07-25 15:03:00.710990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.803 [2024-07-25 15:03:00.711018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.803 [2024-07-25 15:03:00.711049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.804 [2024-07-25 15:03:00.711077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.804 [2024-07-25 15:03:00.711105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.804 [2024-07-25 15:03:00.711137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.804 [2024-07-25 15:03:00.711479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.804 [2024-07-25 15:03:00.711513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.804 [2024-07-25 15:03:00.711554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.804 [2024-07-25 15:03:00.711580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.804 [2024-07-25 15:03:00.711611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.804 [2024-07-25 15:03:00.711640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.804 [2024-07-25 15:03:00.711666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.804 [2024-07-25 15:03:00.711694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.804 [2024-07-25 15:03:00.711726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.804 [2024-07-25 15:03:00.711753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.804 [2024-07-25 15:03:00.711782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.804 [2024-07-25 15:03:00.711810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.804 [2024-07-25 15:03:00.711839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.804 [2024-07-25 15:03:00.711869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.804 [2024-07-25 15:03:00.711900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.804 [2024-07-25 15:03:00.711931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.804 [2024-07-25 15:03:00.711959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.804 [2024-07-25 15:03:00.712001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.804 
[2024-07-25 15:03:00.712030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.804 [2024-07-25 15:03:00.712062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.804 [2024-07-25 15:03:00.712091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.804 [2024-07-25 15:03:00.712120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.804 [2024-07-25 15:03:00.712150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.804 [2024-07-25 15:03:00.712178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.804 [2024-07-25 15:03:00.712210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.804 [2024-07-25 15:03:00.712236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.804 [2024-07-25 15:03:00.712277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.804 [2024-07-25 15:03:00.712308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.804 [2024-07-25 15:03:00.712342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.804 [2024-07-25 15:03:00.712374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.804 [2024-07-25 15:03:00.712412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.804 [2024-07-25 15:03:00.712442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.804 [2024-07-25 15:03:00.712470] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.804 [2024-07-25 15:03:00.712499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.804 [2024-07-25 15:03:00.712530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.804 [2024-07-25 15:03:00.712557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.804 [2024-07-25 15:03:00.712585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.804 [2024-07-25 15:03:00.712611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.804 [2024-07-25 15:03:00.712639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.804 [2024-07-25 15:03:00.712666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.804 [2024-07-25 15:03:00.712695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.804 [2024-07-25 15:03:00.712724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.804 [2024-07-25 15:03:00.712752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.804 [2024-07-25 15:03:00.712785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.804 [2024-07-25 15:03:00.712810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.804 [2024-07-25 15:03:00.712839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.804 [2024-07-25 15:03:00.712869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:08.804 [2024-07-25 15:03:00.712900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.804 [2024-07-25 15:03:00.712927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.804 [2024-07-25 15:03:00.712959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.804 [2024-07-25 15:03:00.712987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.804 [2024-07-25 15:03:00.713015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.804 [2024-07-25 15:03:00.713045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.804 [2024-07-25 15:03:00.713076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.804 [2024-07-25 15:03:00.713105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.804 [2024-07-25 15:03:00.713132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.804 [2024-07-25 15:03:00.713157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.804 [2024-07-25 15:03:00.713187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.804 [2024-07-25 15:03:00.713219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.804 [2024-07-25 15:03:00.713246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.804 [2024-07-25 15:03:00.713277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.804 [2024-07-25 15:03:00.713305] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.804 [2024-07-25 15:03:00.713333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.804 [2024-07-25 15:03:00.713362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.804 [2024-07-25 15:03:00.713719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.804 [2024-07-25 15:03:00.713750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.804 [2024-07-25 15:03:00.713777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.804 [2024-07-25 15:03:00.713808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.804 [2024-07-25 15:03:00.713838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.804 [2024-07-25 15:03:00.713867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.804 [2024-07-25 15:03:00.713913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.804 [2024-07-25 15:03:00.713942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.804 [2024-07-25 15:03:00.713979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.804 [2024-07-25 15:03:00.714007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.804 [2024-07-25 15:03:00.714036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.804 [2024-07-25 15:03:00.714065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:08.804 [2024-07-25 15:03:00.714092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.804 [2024-07-25 15:03:00.714119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.804 [2024-07-25 15:03:00.714148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.804 [2024-07-25 15:03:00.714181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.804 [2024-07-25 15:03:00.714212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.804 [2024-07-25 15:03:00.714240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.804 [2024-07-25 15:03:00.714268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.804 [2024-07-25 15:03:00.714328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.804 [2024-07-25 15:03:00.714359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.804 [2024-07-25 15:03:00.714388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.804 [2024-07-25 15:03:00.714419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.804 [2024-07-25 15:03:00.714448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.804 [2024-07-25 15:03:00.714478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.804 [2024-07-25 15:03:00.714505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.805 [2024-07-25 
15:03:00.714535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.805 [2024-07-25 15:03:00.714562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.805 [2024-07-25 15:03:00.714590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.805 [2024-07-25 15:03:00.714617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.805 [2024-07-25 15:03:00.714645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.805 [2024-07-25 15:03:00.714671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.805 [2024-07-25 15:03:00.714697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.805 [2024-07-25 15:03:00.714728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.805 [2024-07-25 15:03:00.714755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.805 [2024-07-25 15:03:00.714783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.805 [2024-07-25 15:03:00.714810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.805 [2024-07-25 15:03:00.714836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.805 [2024-07-25 15:03:00.714863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.805 [2024-07-25 15:03:00.714896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.805 [2024-07-25 15:03:00.714930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.805 [2024-07-25 15:03:00.714968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.805 [2024-07-25 15:03:00.715009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.805 [2024-07-25 15:03:00.715042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.805 [2024-07-25 15:03:00.715071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.805 [2024-07-25 15:03:00.715095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.805 [2024-07-25 15:03:00.715128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.805 [2024-07-25 15:03:00.715156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.805 [2024-07-25 15:03:00.715186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.805 [2024-07-25 15:03:00.715218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.805 [2024-07-25 15:03:00.715248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.805 [2024-07-25 15:03:00.715273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.805 [2024-07-25 15:03:00.715317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.805 [2024-07-25 15:03:00.715345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.805 [2024-07-25 15:03:00.715378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.805 
[2024-07-25 15:03:00.715406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.805 [2024-07-25 15:03:00.715436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.805 [2024-07-25 15:03:00.715469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.805 [2024-07-25 15:03:00.715495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.805 [2024-07-25 15:03:00.715526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.805 [2024-07-25 15:03:00.715556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.805 [2024-07-25 15:03:00.715586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.805 [2024-07-25 15:03:00.715615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.805 [2024-07-25 15:03:00.715974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.805 [2024-07-25 15:03:00.716004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.805 [2024-07-25 15:03:00.716032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.805 [2024-07-25 15:03:00.716060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.805 [2024-07-25 15:03:00.716089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.805 [2024-07-25 15:03:00.716117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.805 [2024-07-25 15:03:00.716142] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.805 [2024-07-25 15:03:00.716168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.805 [2024-07-25 15:03:00.716199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.805 [2024-07-25 15:03:00.716242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.805 [2024-07-25 15:03:00.716274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.805 [2024-07-25 15:03:00.716306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.805 [2024-07-25 15:03:00.716332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.805 [2024-07-25 15:03:00.716363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.805 [2024-07-25 15:03:00.716387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.805 [2024-07-25 15:03:00.716420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.805 [2024-07-25 15:03:00.716450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.805 [2024-07-25 15:03:00.716477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.805 [2024-07-25 15:03:00.716513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.805 [2024-07-25 15:03:00.716543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.805 [2024-07-25 15:03:00.716571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:08.805 [2024-07-25 15:03:00.716600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.808 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:07:08.809 [2024-07-25 15:03:00.726688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512
> SGL length 1 00:07:08.809 [2024-07-25 15:03:00.726717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.809 [2024-07-25 15:03:00.726744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.809 [2024-07-25 15:03:00.726781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.809 [2024-07-25 15:03:00.726812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.809 [2024-07-25 15:03:00.726839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.809 [2024-07-25 15:03:00.726868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.809 [2024-07-25 15:03:00.726897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.809 [2024-07-25 15:03:00.726926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.809 [2024-07-25 15:03:00.726953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.809 [2024-07-25 15:03:00.726986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.809 [2024-07-25 15:03:00.727014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.809 [2024-07-25 15:03:00.727070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.809 [2024-07-25 15:03:00.727099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.809 [2024-07-25 15:03:00.727129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.809 [2024-07-25 15:03:00.727155] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.809 [2024-07-25 15:03:00.727184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.809 [2024-07-25 15:03:00.727216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.809 [2024-07-25 15:03:00.727245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.809 [2024-07-25 15:03:00.727275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.809 [2024-07-25 15:03:00.727304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.809 [2024-07-25 15:03:00.727334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.809 [2024-07-25 15:03:00.727362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.809 [2024-07-25 15:03:00.727390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.809 [2024-07-25 15:03:00.727418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.809 [2024-07-25 15:03:00.727447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.809 [2024-07-25 15:03:00.727474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.809 [2024-07-25 15:03:00.727500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.809 [2024-07-25 15:03:00.727532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.809 [2024-07-25 15:03:00.727566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:08.809 [2024-07-25 15:03:00.727597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.809 [2024-07-25 15:03:00.727624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.809 [2024-07-25 15:03:00.727653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.809 [2024-07-25 15:03:00.727676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.809 [2024-07-25 15:03:00.727704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.809 [2024-07-25 15:03:00.727731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.809 [2024-07-25 15:03:00.727767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.809 [2024-07-25 15:03:00.727799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.809 [2024-07-25 15:03:00.727829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.809 [2024-07-25 15:03:00.727867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.809 [2024-07-25 15:03:00.727898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.809 [2024-07-25 15:03:00.727927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.809 [2024-07-25 15:03:00.727953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.809 [2024-07-25 15:03:00.727981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.809 [2024-07-25 
15:03:00.728010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.809 [2024-07-25 15:03:00.728041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.809 [2024-07-25 15:03:00.728069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.809 [2024-07-25 15:03:00.728099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.809 [2024-07-25 15:03:00.728128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.809 [2024-07-25 15:03:00.728158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.809 [2024-07-25 15:03:00.728187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.809 [2024-07-25 15:03:00.728216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.809 [2024-07-25 15:03:00.728246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.809 [2024-07-25 15:03:00.728274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.809 [2024-07-25 15:03:00.728305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.809 [2024-07-25 15:03:00.728332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.809 [2024-07-25 15:03:00.728359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.809 [2024-07-25 15:03:00.728388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.809 [2024-07-25 15:03:00.728418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.809 [2024-07-25 15:03:00.728447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.809 [2024-07-25 15:03:00.728476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.809 [2024-07-25 15:03:00.728605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.809 [2024-07-25 15:03:00.728634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.809 [2024-07-25 15:03:00.728663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.809 [2024-07-25 15:03:00.728693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.809 [2024-07-25 15:03:00.728722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.809 [2024-07-25 15:03:00.728751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.809 [2024-07-25 15:03:00.728776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.809 [2024-07-25 15:03:00.728807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.809 [2024-07-25 15:03:00.728838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.809 [2024-07-25 15:03:00.728869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.809 [2024-07-25 15:03:00.728896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.809 [2024-07-25 15:03:00.728924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.809 
[2024-07-25 15:03:00.729177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.810 [2024-07-25 15:03:00.729212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.810 [2024-07-25 15:03:00.729242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.810 [2024-07-25 15:03:00.729265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.810 [2024-07-25 15:03:00.729291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.810 [2024-07-25 15:03:00.729314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.810 [2024-07-25 15:03:00.729337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.810 [2024-07-25 15:03:00.729360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.810 [2024-07-25 15:03:00.729393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.810 [2024-07-25 15:03:00.729418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.810 [2024-07-25 15:03:00.729441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.810 [2024-07-25 15:03:00.729464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.810 [2024-07-25 15:03:00.729487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.810 [2024-07-25 15:03:00.729511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.810 [2024-07-25 15:03:00.729535] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.810 [2024-07-25 15:03:00.729558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.810 [2024-07-25 15:03:00.729582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.810 [2024-07-25 15:03:00.729605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.810 [2024-07-25 15:03:00.729634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.810 [2024-07-25 15:03:00.729663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.810 [2024-07-25 15:03:00.729691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.810 [2024-07-25 15:03:00.729723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.810 [2024-07-25 15:03:00.729753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.810 [2024-07-25 15:03:00.729780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.810 [2024-07-25 15:03:00.729806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.810 [2024-07-25 15:03:00.729834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.810 [2024-07-25 15:03:00.729863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.810 [2024-07-25 15:03:00.729893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.810 [2024-07-25 15:03:00.729923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:08.810 [2024-07-25 15:03:00.729948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.810 [2024-07-25 15:03:00.729972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.810 [2024-07-25 15:03:00.729995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.810 [2024-07-25 15:03:00.730019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.810 [2024-07-25 15:03:00.730042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.810 [2024-07-25 15:03:00.730065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.810 [2024-07-25 15:03:00.730089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.810 [2024-07-25 15:03:00.730112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.810 [2024-07-25 15:03:00.730135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.810 [2024-07-25 15:03:00.730159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.810 [2024-07-25 15:03:00.730182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.810 [2024-07-25 15:03:00.730209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.810 [2024-07-25 15:03:00.730233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.810 [2024-07-25 15:03:00.730256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.810 [2024-07-25 15:03:00.730280] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.810 [2024-07-25 15:03:00.730305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.810 [2024-07-25 15:03:00.730338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.810 [2024-07-25 15:03:00.730367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.810 [2024-07-25 15:03:00.730395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.810 [2024-07-25 15:03:00.730425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.810 [2024-07-25 15:03:00.730455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.810 [2024-07-25 15:03:00.730480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.810 [2024-07-25 15:03:00.730503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.810 [2024-07-25 15:03:00.730875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.810 [2024-07-25 15:03:00.730905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.810 [2024-07-25 15:03:00.730932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.810 [2024-07-25 15:03:00.730961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.810 [2024-07-25 15:03:00.730990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.810 [2024-07-25 15:03:00.731025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:08.810 [2024-07-25 15:03:00.731052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.810 [2024-07-25 15:03:00.731095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.810 [2024-07-25 15:03:00.731123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.810 [2024-07-25 15:03:00.731152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.810 [2024-07-25 15:03:00.731185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.810 [2024-07-25 15:03:00.731217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.810 [2024-07-25 15:03:00.731246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.810 [2024-07-25 15:03:00.731276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.810 [2024-07-25 15:03:00.731307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.810 [2024-07-25 15:03:00.731336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.810 [2024-07-25 15:03:00.731382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.810 [2024-07-25 15:03:00.731411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.810 [2024-07-25 15:03:00.731439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.810 [2024-07-25 15:03:00.731468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.810 [2024-07-25 
15:03:00.731498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.810 [2024-07-25 15:03:00.731531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.810 [2024-07-25 15:03:00.731567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.810 [2024-07-25 15:03:00.731601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.810 [2024-07-25 15:03:00.731630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.810 [2024-07-25 15:03:00.731661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.810 [2024-07-25 15:03:00.731694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.810 [2024-07-25 15:03:00.731727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.810 [2024-07-25 15:03:00.731762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.810 [2024-07-25 15:03:00.731790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.810 [2024-07-25 15:03:00.731818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.810 [2024-07-25 15:03:00.731842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.810 [2024-07-25 15:03:00.731875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.810 [2024-07-25 15:03:00.732140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.810 [2024-07-25 15:03:00.732172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.810 [2024-07-25 15:03:00.732205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.810 [2024-07-25 15:03:00.732235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.810 [2024-07-25 15:03:00.732262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.810 [2024-07-25 15:03:00.732290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.810 [2024-07-25 15:03:00.732319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.810 [2024-07-25 15:03:00.732365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.811 [2024-07-25 15:03:00.732394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.811 [2024-07-25 15:03:00.732426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.811 [2024-07-25 15:03:00.732462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.811 [2024-07-25 15:03:00.732492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.811 [2024-07-25 15:03:00.732532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.811 [2024-07-25 15:03:00.732562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.811 [2024-07-25 15:03:00.732596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.811 [2024-07-25 15:03:00.732629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.811 
[2024-07-25 15:03:00.732657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.811 [2024-07-25 15:03:00.732700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.811 [2024-07-25 15:03:00.732727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.811 [2024-07-25 15:03:00.732762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.811 [2024-07-25 15:03:00.732791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.811 [2024-07-25 15:03:00.732818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.811 [2024-07-25 15:03:00.732856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.811 [2024-07-25 15:03:00.732886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.811 [2024-07-25 15:03:00.732929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.811 [2024-07-25 15:03:00.732958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.811 [2024-07-25 15:03:00.732984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.811 [2024-07-25 15:03:00.733013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.811 [2024-07-25 15:03:00.733041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.811 [2024-07-25 15:03:00.733069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.811 [2024-07-25 15:03:00.733096] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.811 [2024-07-25 15:03:00.733128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.811 [2024-07-25 15:03:00.733156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.811 [2024-07-25 15:03:00.733188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.811 [2024-07-25 15:03:00.733217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.811 [2024-07-25 15:03:00.733265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.811 [2024-07-25 15:03:00.733293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.811 [2024-07-25 15:03:00.733322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.811 [2024-07-25 15:03:00.733351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.811 [2024-07-25 15:03:00.733380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.811 [2024-07-25 15:03:00.733412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.811 [2024-07-25 15:03:00.733441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.811 [2024-07-25 15:03:00.733653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.811 [2024-07-25 15:03:00.733680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.811 [2024-07-25 15:03:00.733712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:08.811 [2024-07-25 15:03:00.733740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... same "Read NLB 1 * block size 512 > SGL length 1" error repeated several hundred times between 15:03:00.733740 and 15:03:00.744085; duplicate log entries elided ...]
> SGL length 1 00:07:08.814 [2024-07-25 15:03:00.744112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.814 [2024-07-25 15:03:00.744139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.814 [2024-07-25 15:03:00.744167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.814 [2024-07-25 15:03:00.744194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.814 [2024-07-25 15:03:00.744225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.814 [2024-07-25 15:03:00.744268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.814 [2024-07-25 15:03:00.744296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.814 [2024-07-25 15:03:00.744324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.814 [2024-07-25 15:03:00.744352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.814 [2024-07-25 15:03:00.744382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.814 [2024-07-25 15:03:00.744410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.814 [2024-07-25 15:03:00.744436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.814 [2024-07-25 15:03:00.744462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.814 [2024-07-25 15:03:00.744490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.814 [2024-07-25 15:03:00.744516] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.814 [2024-07-25 15:03:00.744546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.814 [2024-07-25 15:03:00.744575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.814 [2024-07-25 15:03:00.744604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.814 [2024-07-25 15:03:00.744628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.815 [2024-07-25 15:03:00.744658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.815 [2024-07-25 15:03:00.744685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.815 [2024-07-25 15:03:00.744713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.815 [2024-07-25 15:03:00.744744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.815 [2024-07-25 15:03:00.744774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.815 [2024-07-25 15:03:00.744806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.815 [2024-07-25 15:03:00.744847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.815 [2024-07-25 15:03:00.744882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.815 [2024-07-25 15:03:00.744913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.815 [2024-07-25 15:03:00.744938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:08.815 [2024-07-25 15:03:00.745070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.815 [2024-07-25 15:03:00.745104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.815 [2024-07-25 15:03:00.745139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.815 [2024-07-25 15:03:00.745168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.815 [2024-07-25 15:03:00.745418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.815 [2024-07-25 15:03:00.745450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.815 [2024-07-25 15:03:00.745480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.815 [2024-07-25 15:03:00.745510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.815 [2024-07-25 15:03:00.745539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.815 [2024-07-25 15:03:00.745565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.815 [2024-07-25 15:03:00.745593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.815 [2024-07-25 15:03:00.745623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.815 [2024-07-25 15:03:00.745651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.815 [2024-07-25 15:03:00.745686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.815 [2024-07-25 
15:03:00.745722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.815 [2024-07-25 15:03:00.745754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.815 [2024-07-25 15:03:00.745783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.815 [2024-07-25 15:03:00.745807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.815 [2024-07-25 15:03:00.745838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.815 [2024-07-25 15:03:00.745864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.815 [2024-07-25 15:03:00.745891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.815 [2024-07-25 15:03:00.745917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.815 [2024-07-25 15:03:00.745943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.815 [2024-07-25 15:03:00.745972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.815 [2024-07-25 15:03:00.746001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.815 [2024-07-25 15:03:00.746030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.815 [2024-07-25 15:03:00.746059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.815 [2024-07-25 15:03:00.746088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.815 [2024-07-25 15:03:00.746118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.815 [2024-07-25 15:03:00.746145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.815 [2024-07-25 15:03:00.746174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.815 [2024-07-25 15:03:00.746205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.815 [2024-07-25 15:03:00.746235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.815 [2024-07-25 15:03:00.746264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.815 [2024-07-25 15:03:00.746294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.815 [2024-07-25 15:03:00.746322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.815 [2024-07-25 15:03:00.746351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.815 [2024-07-25 15:03:00.746377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.815 [2024-07-25 15:03:00.746404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.815 [2024-07-25 15:03:00.746437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.815 [2024-07-25 15:03:00.746465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.815 [2024-07-25 15:03:00.746494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.815 [2024-07-25 15:03:00.746523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.815 
[2024-07-25 15:03:00.746557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.815 [2024-07-25 15:03:00.746585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.815 [2024-07-25 15:03:00.746616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.815 [2024-07-25 15:03:00.746647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.815 [2024-07-25 15:03:00.746675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.815 [2024-07-25 15:03:00.746707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.815 [2024-07-25 15:03:00.746736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.815 [2024-07-25 15:03:00.746775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.815 [2024-07-25 15:03:00.746804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.815 [2024-07-25 15:03:00.746833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.815 [2024-07-25 15:03:00.746859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.815 [2024-07-25 15:03:00.746890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.815 [2024-07-25 15:03:00.746920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.815 [2024-07-25 15:03:00.746945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.815 [2024-07-25 15:03:00.746973] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.815 [2024-07-25 15:03:00.747006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.815 [2024-07-25 15:03:00.747035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.815 [2024-07-25 15:03:00.747064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.815 [2024-07-25 15:03:00.747091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.815 [2024-07-25 15:03:00.747119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.815 [2024-07-25 15:03:00.747533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.815 [2024-07-25 15:03:00.747565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.815 [2024-07-25 15:03:00.747596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.815 [2024-07-25 15:03:00.747629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.815 [2024-07-25 15:03:00.747657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.815 [2024-07-25 15:03:00.747686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.815 [2024-07-25 15:03:00.747714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.815 [2024-07-25 15:03:00.747743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.815 [2024-07-25 15:03:00.747769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:08.815 [2024-07-25 15:03:00.747799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.815 [2024-07-25 15:03:00.747827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.815 [2024-07-25 15:03:00.747856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.815 [2024-07-25 15:03:00.747885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.815 [2024-07-25 15:03:00.747914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.815 [2024-07-25 15:03:00.747952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.815 [2024-07-25 15:03:00.747979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.815 [2024-07-25 15:03:00.748006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.815 [2024-07-25 15:03:00.748037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.815 [2024-07-25 15:03:00.748070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.815 [2024-07-25 15:03:00.748102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.815 [2024-07-25 15:03:00.748139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.816 [2024-07-25 15:03:00.748177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.816 [2024-07-25 15:03:00.748210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.816 [2024-07-25 15:03:00.748238] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.816 [2024-07-25 15:03:00.748268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.816 [2024-07-25 15:03:00.748300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.816 [2024-07-25 15:03:00.748334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.816 [2024-07-25 15:03:00.748371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.816 [2024-07-25 15:03:00.748397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.816 [2024-07-25 15:03:00.748429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.816 [2024-07-25 15:03:00.748458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.816 [2024-07-25 15:03:00.748487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.816 [2024-07-25 15:03:00.748515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.816 [2024-07-25 15:03:00.748544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.816 [2024-07-25 15:03:00.748574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.816 [2024-07-25 15:03:00.748604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.816 [2024-07-25 15:03:00.748633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.816 [2024-07-25 15:03:00.748661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:08.816 [2024-07-25 15:03:00.748697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.816 [2024-07-25 15:03:00.748726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.816 [2024-07-25 15:03:00.748754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.816 [2024-07-25 15:03:00.748782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.816 [2024-07-25 15:03:00.748811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.816 [2024-07-25 15:03:00.748839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.816 [2024-07-25 15:03:00.748884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.816 [2024-07-25 15:03:00.748911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.816 [2024-07-25 15:03:00.748941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.816 [2024-07-25 15:03:00.748971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.816 [2024-07-25 15:03:00.749001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.816 [2024-07-25 15:03:00.749030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.816 [2024-07-25 15:03:00.749063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.816 [2024-07-25 15:03:00.749089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.816 [2024-07-25 
15:03:00.749122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.816 [2024-07-25 15:03:00.749150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.816 [2024-07-25 15:03:00.749179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.816 [2024-07-25 15:03:00.749212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.816 [2024-07-25 15:03:00.749240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.816 [2024-07-25 15:03:00.749266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.816 [2024-07-25 15:03:00.749295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.816 [2024-07-25 15:03:00.749326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.816 [2024-07-25 15:03:00.749354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.816 [2024-07-25 15:03:00.749393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.816 [2024-07-25 15:03:00.749428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.816 [2024-07-25 15:03:00.749462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.816 [2024-07-25 15:03:00.749666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.816 [2024-07-25 15:03:00.749696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.816 [2024-07-25 15:03:00.749723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.816 [2024-07-25 15:03:00.749754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.816 [2024-07-25 15:03:00.750124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.816 [2024-07-25 15:03:00.750152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.816 [2024-07-25 15:03:00.750179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.816 [2024-07-25 15:03:00.750209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.816 [2024-07-25 15:03:00.750249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.816 [2024-07-25 15:03:00.750278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.816 [2024-07-25 15:03:00.750310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.816 [2024-07-25 15:03:00.750337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.816 [2024-07-25 15:03:00.750366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.816 [2024-07-25 15:03:00.750396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.816 [2024-07-25 15:03:00.750424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.816 [2024-07-25 15:03:00.750451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.816 [2024-07-25 15:03:00.750480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.816 
[2024-07-25 15:03:00.750509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.816 [2024-07-25 15:03:00.750537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.816 [2024-07-25 15:03:00.750566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.816 [2024-07-25 15:03:00.750596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.816 [2024-07-25 15:03:00.750626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.816 [2024-07-25 15:03:00.750655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.816 [2024-07-25 15:03:00.750706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.816 [2024-07-25 15:03:00.750736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.816 [2024-07-25 15:03:00.750765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.816 [2024-07-25 15:03:00.750795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.816 [2024-07-25 15:03:00.750829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.816 [2024-07-25 15:03:00.750864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.816 [2024-07-25 15:03:00.750901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.816 [2024-07-25 15:03:00.750928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.816 [2024-07-25 15:03:00.750958] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.816 [2024-07-25 15:03:00.750983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 
[... identical ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd "Read NLB 1 * block size 512 > SGL length 1" errors repeated for timestamps 15:03:00.751009 through 15:03:00.760904 ...]
00:07:08.819 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 
[... identical ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd errors repeated for timestamps 15:03:00.760931 through 15:03:00.761864 ...]
00:07:08.820 [2024-07-25 15:03:00.761890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:08.820 [2024-07-25 15:03:00.761917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.820 [2024-07-25 15:03:00.761943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.820 [2024-07-25 15:03:00.761972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.820 [2024-07-25 15:03:00.762001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.820 [2024-07-25 15:03:00.762028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.820 [2024-07-25 15:03:00.762057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.820 [2024-07-25 15:03:00.762084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.820 [2024-07-25 15:03:00.762115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.820 [2024-07-25 15:03:00.762154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.820 [2024-07-25 15:03:00.762185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.820 [2024-07-25 15:03:00.762218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.820 [2024-07-25 15:03:00.762245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.820 [2024-07-25 15:03:00.762277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.820 [2024-07-25 15:03:00.762308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.820 [2024-07-25 15:03:00.762341] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.820 [2024-07-25 15:03:00.762372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.820 [2024-07-25 15:03:00.762398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.820 [2024-07-25 15:03:00.762426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.820 [2024-07-25 15:03:00.762454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.820 [2024-07-25 15:03:00.762483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.820 [2024-07-25 15:03:00.762512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.820 [2024-07-25 15:03:00.762541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.820 [2024-07-25 15:03:00.762569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.820 [2024-07-25 15:03:00.762596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.820 [2024-07-25 15:03:00.762624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.820 [2024-07-25 15:03:00.762651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.820 [2024-07-25 15:03:00.762681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.820 [2024-07-25 15:03:00.762710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.820 [2024-07-25 15:03:00.762738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:08.820 [2024-07-25 15:03:00.762768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.820 [2024-07-25 15:03:00.762797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.820 [2024-07-25 15:03:00.762827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.820 [2024-07-25 15:03:00.762855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.820 [2024-07-25 15:03:00.762884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.820 [2024-07-25 15:03:00.762911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.820 [2024-07-25 15:03:00.762943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.820 [2024-07-25 15:03:00.762971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.820 [2024-07-25 15:03:00.763002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.820 [2024-07-25 15:03:00.763048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.820 [2024-07-25 15:03:00.763076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.820 [2024-07-25 15:03:00.763106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.820 [2024-07-25 15:03:00.763134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.820 [2024-07-25 15:03:00.763163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.820 [2024-07-25 
15:03:00.763198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.820 [2024-07-25 15:03:00.763231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.820 [2024-07-25 15:03:00.763258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.820 [2024-07-25 15:03:00.763289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.820 [2024-07-25 15:03:00.763498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.820 [2024-07-25 15:03:00.763539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.820 [2024-07-25 15:03:00.763568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.820 [2024-07-25 15:03:00.763595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.820 [2024-07-25 15:03:00.763862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.820 [2024-07-25 15:03:00.763893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.820 [2024-07-25 15:03:00.763921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.820 [2024-07-25 15:03:00.763949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.820 [2024-07-25 15:03:00.763979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.820 [2024-07-25 15:03:00.764010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.820 [2024-07-25 15:03:00.764043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.820 [2024-07-25 15:03:00.764071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.820 [2024-07-25 15:03:00.764100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.820 [2024-07-25 15:03:00.764126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.820 [2024-07-25 15:03:00.764155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.820 [2024-07-25 15:03:00.764190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.820 [2024-07-25 15:03:00.764221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.820 [2024-07-25 15:03:00.764249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.820 [2024-07-25 15:03:00.764281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.820 [2024-07-25 15:03:00.764309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.820 [2024-07-25 15:03:00.764339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.820 [2024-07-25 15:03:00.764366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.820 [2024-07-25 15:03:00.764406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.820 [2024-07-25 15:03:00.764435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.820 [2024-07-25 15:03:00.764466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.820 
[2024-07-25 15:03:00.764496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.820 [2024-07-25 15:03:00.764525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.821 [2024-07-25 15:03:00.764558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.821 [2024-07-25 15:03:00.764588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.821 [2024-07-25 15:03:00.764626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.821 [2024-07-25 15:03:00.764653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.821 [2024-07-25 15:03:00.764681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.821 [2024-07-25 15:03:00.764709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.821 [2024-07-25 15:03:00.764738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.821 [2024-07-25 15:03:00.764766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.821 [2024-07-25 15:03:00.764794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.821 [2024-07-25 15:03:00.764822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.821 [2024-07-25 15:03:00.764851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.821 [2024-07-25 15:03:00.764879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.821 [2024-07-25 15:03:00.764907] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.821 [2024-07-25 15:03:00.764936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.821 [2024-07-25 15:03:00.764959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.821 [2024-07-25 15:03:00.764992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.821 [2024-07-25 15:03:00.765022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.821 [2024-07-25 15:03:00.765049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.821 [2024-07-25 15:03:00.765078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.821 [2024-07-25 15:03:00.765107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.821 [2024-07-25 15:03:00.765136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.821 [2024-07-25 15:03:00.765163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.821 [2024-07-25 15:03:00.765191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.821 [2024-07-25 15:03:00.765222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.821 [2024-07-25 15:03:00.765251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.821 [2024-07-25 15:03:00.765286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.821 [2024-07-25 15:03:00.765315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:08.821 [2024-07-25 15:03:00.765347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.821 [2024-07-25 15:03:00.765372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.821 [2024-07-25 15:03:00.765401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.821 [2024-07-25 15:03:00.765428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.821 [2024-07-25 15:03:00.765456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.821 [2024-07-25 15:03:00.765487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.821 [2024-07-25 15:03:00.765516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.821 [2024-07-25 15:03:00.765545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.821 [2024-07-25 15:03:00.765576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.821 [2024-07-25 15:03:00.765931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.821 [2024-07-25 15:03:00.765959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.821 [2024-07-25 15:03:00.765991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.821 [2024-07-25 15:03:00.766024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.821 [2024-07-25 15:03:00.766055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.821 [2024-07-25 15:03:00.766085] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.821 [2024-07-25 15:03:00.766113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.821 [2024-07-25 15:03:00.766144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.821 [2024-07-25 15:03:00.766173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.821 [2024-07-25 15:03:00.766204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.821 [2024-07-25 15:03:00.766234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.821 [2024-07-25 15:03:00.766262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.821 [2024-07-25 15:03:00.766295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.821 [2024-07-25 15:03:00.766321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.821 [2024-07-25 15:03:00.766348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.821 [2024-07-25 15:03:00.766376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.821 [2024-07-25 15:03:00.766403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.821 [2024-07-25 15:03:00.766432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.821 [2024-07-25 15:03:00.766461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.821 [2024-07-25 15:03:00.766489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:08.821 [2024-07-25 15:03:00.766516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.821 [2024-07-25 15:03:00.766556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.821 [2024-07-25 15:03:00.766586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.821 [2024-07-25 15:03:00.766619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.821 [2024-07-25 15:03:00.766646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.821 [2024-07-25 15:03:00.766679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.821 [2024-07-25 15:03:00.766706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.821 [2024-07-25 15:03:00.766733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.821 [2024-07-25 15:03:00.766766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.821 [2024-07-25 15:03:00.766797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.821 [2024-07-25 15:03:00.766825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.821 [2024-07-25 15:03:00.766852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.821 [2024-07-25 15:03:00.766890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.821 [2024-07-25 15:03:00.766918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.821 [2024-07-25 
15:03:00.766946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.821 [2024-07-25 15:03:00.766973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.821 [2024-07-25 15:03:00.767004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.821 [2024-07-25 15:03:00.767035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.821 [2024-07-25 15:03:00.767062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.821 [2024-07-25 15:03:00.767090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.821 [2024-07-25 15:03:00.767118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.821 [2024-07-25 15:03:00.767147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.821 [2024-07-25 15:03:00.767177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.821 [2024-07-25 15:03:00.767216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.821 [2024-07-25 15:03:00.767249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.822 [2024-07-25 15:03:00.767289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.822 [2024-07-25 15:03:00.767324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.822 [2024-07-25 15:03:00.767347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.822 [2024-07-25 15:03:00.767381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.822 [2024-07-25 15:03:00.767411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.822 [2024-07-25 15:03:00.767441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.822 [2024-07-25 15:03:00.767472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.822 [2024-07-25 15:03:00.767503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.822 [2024-07-25 15:03:00.767531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.822 [2024-07-25 15:03:00.767557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.822 [2024-07-25 15:03:00.767582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.822 [2024-07-25 15:03:00.767610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.822 [2024-07-25 15:03:00.767634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.822 [2024-07-25 15:03:00.767660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.822 [2024-07-25 15:03:00.767685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.822 [2024-07-25 15:03:00.767716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.822 [2024-07-25 15:03:00.767749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.822 [2024-07-25 15:03:00.767779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.822 
[2024-07-25 15:03:00.767810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.822 [2024-07-25 15:03:00.767946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.822 [2024-07-25 15:03:00.767979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.822 [2024-07-25 15:03:00.768010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.822 [2024-07-25 15:03:00.768042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.822 [2024-07-25 15:03:00.768277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.822 [2024-07-25 15:03:00.768305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.822 [2024-07-25 15:03:00.768334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.822 [2024-07-25 15:03:00.768360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.822 [2024-07-25 15:03:00.768386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.822 [2024-07-25 15:03:00.768415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.822 [2024-07-25 15:03:00.768442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.822 [2024-07-25 15:03:00.768469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.822 [2024-07-25 15:03:00.768499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.822 [2024-07-25 15:03:00.768526] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.822 [2024-07-25 15:03:00.768551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... identical *ERROR* line repeated for each read attempt; timestamps 15:03:00.768576 through 15:03:00.778951 elided ...]
00:07:08.825 [2024-07-25 15:03:00.778951] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.825 [2024-07-25 15:03:00.779312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.825 [2024-07-25 15:03:00.779349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.825 [2024-07-25 15:03:00.779379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.825 [2024-07-25 15:03:00.779409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.825 [2024-07-25 15:03:00.779436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.825 [2024-07-25 15:03:00.779468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.825 [2024-07-25 15:03:00.779520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.825 [2024-07-25 15:03:00.779550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.825 [2024-07-25 15:03:00.779580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.825 [2024-07-25 15:03:00.779609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.825 [2024-07-25 15:03:00.779647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.825 [2024-07-25 15:03:00.779683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.825 [2024-07-25 15:03:00.779712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.825 [2024-07-25 15:03:00.779743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:08.825 [2024-07-25 15:03:00.779772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.825 [2024-07-25 15:03:00.779801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.825 [2024-07-25 15:03:00.779825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.825 [2024-07-25 15:03:00.779853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.825 [2024-07-25 15:03:00.779883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.825 [2024-07-25 15:03:00.779911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.825 [2024-07-25 15:03:00.779945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.825 [2024-07-25 15:03:00.779978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.825 [2024-07-25 15:03:00.780006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.825 [2024-07-25 15:03:00.780037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.825 [2024-07-25 15:03:00.780072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.825 [2024-07-25 15:03:00.780102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.825 [2024-07-25 15:03:00.780132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.825 [2024-07-25 15:03:00.780158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.825 [2024-07-25 15:03:00.780188] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.825 [2024-07-25 15:03:00.780220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.825 [2024-07-25 15:03:00.780249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.825 [2024-07-25 15:03:00.780284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.825 [2024-07-25 15:03:00.780313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.826 [2024-07-25 15:03:00.780341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.826 [2024-07-25 15:03:00.780374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.826 [2024-07-25 15:03:00.780404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.826 [2024-07-25 15:03:00.780441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.826 [2024-07-25 15:03:00.780471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.826 [2024-07-25 15:03:00.780503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.826 [2024-07-25 15:03:00.780536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.826 [2024-07-25 15:03:00.780563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.826 [2024-07-25 15:03:00.780589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.826 [2024-07-25 15:03:00.780619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:08.826 [2024-07-25 15:03:00.780670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.826 [2024-07-25 15:03:00.780696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.826 [2024-07-25 15:03:00.780724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.826 [2024-07-25 15:03:00.780751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.826 [2024-07-25 15:03:00.780780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.826 [2024-07-25 15:03:00.780811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.826 [2024-07-25 15:03:00.780841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.826 [2024-07-25 15:03:00.780869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.826 [2024-07-25 15:03:00.780900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.826 [2024-07-25 15:03:00.780927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.826 [2024-07-25 15:03:00.780954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.826 [2024-07-25 15:03:00.780982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.826 [2024-07-25 15:03:00.781011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.826 [2024-07-25 15:03:00.781044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.826 [2024-07-25 
15:03:00.781072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.826 [2024-07-25 15:03:00.781102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.826 [2024-07-25 15:03:00.781129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.826 [2024-07-25 15:03:00.781159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.826 [2024-07-25 15:03:00.781186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.826 [2024-07-25 15:03:00.781218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.826 [2024-07-25 15:03:00.781249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.826 [2024-07-25 15:03:00.781378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.826 [2024-07-25 15:03:00.781410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.826 [2024-07-25 15:03:00.781440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.826 [2024-07-25 15:03:00.781469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.826 [2024-07-25 15:03:00.781699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.826 [2024-07-25 15:03:00.781730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.826 [2024-07-25 15:03:00.781759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.826 [2024-07-25 15:03:00.781788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.826 [2024-07-25 15:03:00.781819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.826 [2024-07-25 15:03:00.781846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.826 [2024-07-25 15:03:00.781873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.826 [2024-07-25 15:03:00.781904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.826 [2024-07-25 15:03:00.781930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.826 [2024-07-25 15:03:00.781959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.826 [2024-07-25 15:03:00.781987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.826 [2024-07-25 15:03:00.782016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.826 [2024-07-25 15:03:00.782045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.826 [2024-07-25 15:03:00.782076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.826 [2024-07-25 15:03:00.782102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.826 [2024-07-25 15:03:00.782134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.826 [2024-07-25 15:03:00.782162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.826 [2024-07-25 15:03:00.782192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.826 
[2024-07-25 15:03:00.782225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.826 [2024-07-25 15:03:00.782255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.826 [2024-07-25 15:03:00.782285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.826 [2024-07-25 15:03:00.782314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.826 [2024-07-25 15:03:00.782343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.826 [2024-07-25 15:03:00.782372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.826 [2024-07-25 15:03:00.782400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.826 [2024-07-25 15:03:00.782443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.826 [2024-07-25 15:03:00.782473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.826 [2024-07-25 15:03:00.782507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.826 [2024-07-25 15:03:00.782537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.826 [2024-07-25 15:03:00.782567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.826 [2024-07-25 15:03:00.782596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.826 [2024-07-25 15:03:00.782626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.826 [2024-07-25 15:03:00.782656] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.826 [2024-07-25 15:03:00.782686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.826 [2024-07-25 15:03:00.782714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.826 [2024-07-25 15:03:00.782744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.826 [2024-07-25 15:03:00.782772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.826 [2024-07-25 15:03:00.782830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.826 [2024-07-25 15:03:00.782857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.826 [2024-07-25 15:03:00.782887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.826 [2024-07-25 15:03:00.782917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.826 [2024-07-25 15:03:00.782945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.826 [2024-07-25 15:03:00.782973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.826 [2024-07-25 15:03:00.783001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.827 [2024-07-25 15:03:00.783030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.827 [2024-07-25 15:03:00.783058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.827 [2024-07-25 15:03:00.783084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:08.827 [2024-07-25 15:03:00.783113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.827 [2024-07-25 15:03:00.783147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.827 [2024-07-25 15:03:00.783176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.827 [2024-07-25 15:03:00.783206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.827 [2024-07-25 15:03:00.783237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.827 [2024-07-25 15:03:00.783264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.827 [2024-07-25 15:03:00.783292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.827 [2024-07-25 15:03:00.783324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.827 [2024-07-25 15:03:00.783354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.827 [2024-07-25 15:03:00.783382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.827 [2024-07-25 15:03:00.783412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.827 [2024-07-25 15:03:00.783441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.827 [2024-07-25 15:03:00.784048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.827 [2024-07-25 15:03:00.784079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.827 [2024-07-25 15:03:00.784109] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.827 [2024-07-25 15:03:00.784137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.827 [2024-07-25 15:03:00.784175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.827 [2024-07-25 15:03:00.784209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.827 [2024-07-25 15:03:00.784240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.827 [2024-07-25 15:03:00.784292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.827 [2024-07-25 15:03:00.784323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.827 [2024-07-25 15:03:00.784352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.827 [2024-07-25 15:03:00.784380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.827 [2024-07-25 15:03:00.784410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.827 [2024-07-25 15:03:00.784438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.827 [2024-07-25 15:03:00.784466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.827 [2024-07-25 15:03:00.784494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.827 [2024-07-25 15:03:00.784525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.827 [2024-07-25 15:03:00.784560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:08.827 [2024-07-25 15:03:00.784593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.827 [2024-07-25 15:03:00.784621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.827 [2024-07-25 15:03:00.784661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.827 [2024-07-25 15:03:00.784690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.827 [2024-07-25 15:03:00.784721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.827 [2024-07-25 15:03:00.784751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.827 [2024-07-25 15:03:00.784782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.827 [2024-07-25 15:03:00.784811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.827 [2024-07-25 15:03:00.784840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.827 [2024-07-25 15:03:00.784871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.827 [2024-07-25 15:03:00.784901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.827 [2024-07-25 15:03:00.784933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.827 [2024-07-25 15:03:00.784960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.827 [2024-07-25 15:03:00.784992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.827 [2024-07-25 
15:03:00.785020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.827 [2024-07-25 15:03:00.785057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.827 [2024-07-25 15:03:00.785086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.827 [2024-07-25 15:03:00.785116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.827 [2024-07-25 15:03:00.785144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.827 [2024-07-25 15:03:00.785170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.827 [2024-07-25 15:03:00.785203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.827 [2024-07-25 15:03:00.785233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.827 [2024-07-25 15:03:00.785261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.827 [2024-07-25 15:03:00.785289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.827 [2024-07-25 15:03:00.785318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.827 [2024-07-25 15:03:00.785348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.827 [2024-07-25 15:03:00.785375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.827 [2024-07-25 15:03:00.785410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.827 [2024-07-25 15:03:00.785442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.827 [2024-07-25 15:03:00.785472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.827 [2024-07-25 15:03:00.785502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.827 [2024-07-25 15:03:00.785535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.827 [2024-07-25 15:03:00.785572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.827 [2024-07-25 15:03:00.785600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.827 [2024-07-25 15:03:00.785630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.827 [2024-07-25 15:03:00.785657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.827 [2024-07-25 15:03:00.785684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.827 [2024-07-25 15:03:00.785715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.827 [2024-07-25 15:03:00.785743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.827 [2024-07-25 15:03:00.785769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.827 [2024-07-25 15:03:00.785802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.827 [2024-07-25 15:03:00.785836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.827 [2024-07-25 15:03:00.785860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.827 
[2024-07-25 15:03:00.785888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.827 [2024-07-25 15:03:00.785923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.827 [2024-07-25 15:03:00.785953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.827 [2024-07-25 15:03:00.785982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.827 [2024-07-25 15:03:00.786111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.827 [2024-07-25 15:03:00.786146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.827 [2024-07-25 15:03:00.786178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.827 [2024-07-25 15:03:00.786205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.827 [2024-07-25 15:03:00.786659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.827 [2024-07-25 15:03:00.786689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.827 [2024-07-25 15:03:00.786716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.827 [2024-07-25 15:03:00.786743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.827 [2024-07-25 15:03:00.786771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.827 [2024-07-25 15:03:00.786799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.827 [2024-07-25 15:03:00.786827] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.827 [2024-07-25 15:03:00.786856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.830 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:07:08.831 [2024-07-25 15:03:00.796848] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.831 [2024-07-25 15:03:00.796876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.831 [2024-07-25 15:03:00.796905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.831 [2024-07-25 15:03:00.796934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.831 [2024-07-25 15:03:00.796969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.831 [2024-07-25 15:03:00.796998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.831 [2024-07-25 15:03:00.797029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.831 [2024-07-25 15:03:00.797089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.831 [2024-07-25 15:03:00.797121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.831 [2024-07-25 15:03:00.797151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.831 [2024-07-25 15:03:00.797181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.831 [2024-07-25 15:03:00.797214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.831 [2024-07-25 15:03:00.797243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.831 [2024-07-25 15:03:00.797270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.831 [2024-07-25 15:03:00.797299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:08.831 [2024-07-25 15:03:00.797328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.831 [2024-07-25 15:03:00.797357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.831 [2024-07-25 15:03:00.797390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.831 [2024-07-25 15:03:00.797429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.831 [2024-07-25 15:03:00.797807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.831 [2024-07-25 15:03:00.797840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.831 [2024-07-25 15:03:00.797874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.831 [2024-07-25 15:03:00.797904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.831 [2024-07-25 15:03:00.797934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.831 [2024-07-25 15:03:00.797962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.831 [2024-07-25 15:03:00.797994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.831 [2024-07-25 15:03:00.798027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.831 [2024-07-25 15:03:00.798078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.831 [2024-07-25 15:03:00.798108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.831 [2024-07-25 15:03:00.798142] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.831 [2024-07-25 15:03:00.798172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.831 [2024-07-25 15:03:00.798198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.831 [2024-07-25 15:03:00.798240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.831 [2024-07-25 15:03:00.798267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.831 [2024-07-25 15:03:00.798317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.831 [2024-07-25 15:03:00.798345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.831 [2024-07-25 15:03:00.798377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.831 [2024-07-25 15:03:00.798407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.831 [2024-07-25 15:03:00.798437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.831 [2024-07-25 15:03:00.798463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.831 [2024-07-25 15:03:00.798493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.831 [2024-07-25 15:03:00.798522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.831 [2024-07-25 15:03:00.798551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.831 [2024-07-25 15:03:00.798578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:08.831 [2024-07-25 15:03:00.798606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.831 [2024-07-25 15:03:00.798635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.831 [2024-07-25 15:03:00.798660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.831 [2024-07-25 15:03:00.798687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.831 [2024-07-25 15:03:00.798712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.831 [2024-07-25 15:03:00.798739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.831 [2024-07-25 15:03:00.798769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.831 [2024-07-25 15:03:00.798796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.831 [2024-07-25 15:03:00.798824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.831 [2024-07-25 15:03:00.798852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.831 [2024-07-25 15:03:00.798882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.831 [2024-07-25 15:03:00.798913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.831 [2024-07-25 15:03:00.798942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.831 [2024-07-25 15:03:00.798974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.831 [2024-07-25 
15:03:00.799003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.831 [2024-07-25 15:03:00.799036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.831 [2024-07-25 15:03:00.799065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.831 [2024-07-25 15:03:00.799092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.831 [2024-07-25 15:03:00.799120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.831 [2024-07-25 15:03:00.799151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.831 [2024-07-25 15:03:00.799177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.831 [2024-07-25 15:03:00.799218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.831 [2024-07-25 15:03:00.799246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.831 [2024-07-25 15:03:00.799283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.831 [2024-07-25 15:03:00.799312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.831 [2024-07-25 15:03:00.799340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.831 [2024-07-25 15:03:00.799371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.831 [2024-07-25 15:03:00.799400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.831 [2024-07-25 15:03:00.799450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.832 [2024-07-25 15:03:00.799479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.832 [2024-07-25 15:03:00.799509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.832 [2024-07-25 15:03:00.799535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.832 [2024-07-25 15:03:00.799563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.832 [2024-07-25 15:03:00.799594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.832 [2024-07-25 15:03:00.799618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.832 [2024-07-25 15:03:00.799648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.832 [2024-07-25 15:03:00.799679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.832 [2024-07-25 15:03:00.799708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.832 [2024-07-25 15:03:00.799737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.832 [2024-07-25 15:03:00.799880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.832 [2024-07-25 15:03:00.799910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.832 [2024-07-25 15:03:00.799938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.832 [2024-07-25 15:03:00.799970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.832 
[2024-07-25 15:03:00.800214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.832 [2024-07-25 15:03:00.800244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.832 [2024-07-25 15:03:00.800275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.832 [2024-07-25 15:03:00.800304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.832 [2024-07-25 15:03:00.800341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.832 [2024-07-25 15:03:00.800371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.832 [2024-07-25 15:03:00.800401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.832 [2024-07-25 15:03:00.800454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.832 [2024-07-25 15:03:00.800481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.832 [2024-07-25 15:03:00.800512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.832 [2024-07-25 15:03:00.800543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.832 [2024-07-25 15:03:00.800574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.832 [2024-07-25 15:03:00.800626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.832 [2024-07-25 15:03:00.800653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.832 [2024-07-25 15:03:00.800685] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.832 [2024-07-25 15:03:00.800714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.832 [2024-07-25 15:03:00.800741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.832 [2024-07-25 15:03:00.800773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.832 [2024-07-25 15:03:00.800803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.832 [2024-07-25 15:03:00.800835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.832 [2024-07-25 15:03:00.800863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.832 [2024-07-25 15:03:00.800892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.832 [2024-07-25 15:03:00.800921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.832 [2024-07-25 15:03:00.800948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.832 [2024-07-25 15:03:00.800975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.832 [2024-07-25 15:03:00.801005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.832 [2024-07-25 15:03:00.801034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.832 [2024-07-25 15:03:00.801058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.832 [2024-07-25 15:03:00.801088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:08.832 [2024-07-25 15:03:00.801118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.832 [2024-07-25 15:03:00.801149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.832 [2024-07-25 15:03:00.801186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.832 [2024-07-25 15:03:00.801218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.832 [2024-07-25 15:03:00.801244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.832 [2024-07-25 15:03:00.801273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.832 [2024-07-25 15:03:00.801305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.832 [2024-07-25 15:03:00.801335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.832 [2024-07-25 15:03:00.801371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.832 [2024-07-25 15:03:00.801397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.832 [2024-07-25 15:03:00.801429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.832 [2024-07-25 15:03:00.801458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.832 [2024-07-25 15:03:00.801486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.832 [2024-07-25 15:03:00.801514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.832 [2024-07-25 15:03:00.801543] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.832 [2024-07-25 15:03:00.801574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.832 [2024-07-25 15:03:00.801605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.832 [2024-07-25 15:03:00.801634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.832 [2024-07-25 15:03:00.801665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.832 [2024-07-25 15:03:00.801693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.832 [2024-07-25 15:03:00.801724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.832 [2024-07-25 15:03:00.801753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.832 [2024-07-25 15:03:00.801780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.832 [2024-07-25 15:03:00.801807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.832 [2024-07-25 15:03:00.801839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.832 [2024-07-25 15:03:00.801878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.832 [2024-07-25 15:03:00.801907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.832 [2024-07-25 15:03:00.801938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.832 [2024-07-25 15:03:00.801971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:08.832 [2024-07-25 15:03:00.802000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.832 [2024-07-25 15:03:00.802353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.832 [2024-07-25 15:03:00.802385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.832 [2024-07-25 15:03:00.802415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.832 [2024-07-25 15:03:00.802446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.832 [2024-07-25 15:03:00.802475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.832 [2024-07-25 15:03:00.802504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.832 [2024-07-25 15:03:00.802531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.832 [2024-07-25 15:03:00.802561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.832 [2024-07-25 15:03:00.802590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.832 [2024-07-25 15:03:00.802618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.832 [2024-07-25 15:03:00.802646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.832 [2024-07-25 15:03:00.802677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.832 [2024-07-25 15:03:00.802706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.832 [2024-07-25 
15:03:00.802733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.832 [2024-07-25 15:03:00.802762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.832 [2024-07-25 15:03:00.802792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.832 [2024-07-25 15:03:00.802820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.832 [2024-07-25 15:03:00.802848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.832 [2024-07-25 15:03:00.802880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.833 [2024-07-25 15:03:00.802910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.833 [2024-07-25 15:03:00.802938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.833 [2024-07-25 15:03:00.802967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.833 [2024-07-25 15:03:00.802993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.833 [2024-07-25 15:03:00.803024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.833 [2024-07-25 15:03:00.803070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.833 [2024-07-25 15:03:00.803100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.833 [2024-07-25 15:03:00.803128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.833 [2024-07-25 15:03:00.803159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.833 [2024-07-25 15:03:00.803187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.833 [2024-07-25 15:03:00.803224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.833 [2024-07-25 15:03:00.803252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.833 [2024-07-25 15:03:00.803284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.833 [2024-07-25 15:03:00.803313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.833 [2024-07-25 15:03:00.803341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.833 [2024-07-25 15:03:00.803372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.833 [2024-07-25 15:03:00.803401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.833 [2024-07-25 15:03:00.803445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.833 [2024-07-25 15:03:00.803477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.833 [2024-07-25 15:03:00.803504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.833 [2024-07-25 15:03:00.803537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.833 [2024-07-25 15:03:00.803567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.833 [2024-07-25 15:03:00.803595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.833 
[2024-07-25 15:03:00.803619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.833 [2024-07-25 15:03:00.803651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.833 [2024-07-25 15:03:00.803684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.833 [2024-07-25 15:03:00.803713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.833 [2024-07-25 15:03:00.803742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.833 [2024-07-25 15:03:00.803771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.833 [2024-07-25 15:03:00.803800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.833 [2024-07-25 15:03:00.803834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.833 [2024-07-25 15:03:00.803863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.833 [2024-07-25 15:03:00.803893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.833 [2024-07-25 15:03:00.803921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.833 [2024-07-25 15:03:00.803949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.833 [2024-07-25 15:03:00.803977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.833 [2024-07-25 15:03:00.804004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.833 [2024-07-25 15:03:00.804035] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.833 [ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd "Read NLB 1 * block size 512 > SGL length 1" error repeated verbatim, timestamps 2024-07-25 15:03:00.804063 through 15:03:00.814568] 00:07:08.836
[2024-07-25 15:03:00.814596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.836 [2024-07-25 15:03:00.814624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.836 [2024-07-25 15:03:00.814655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.836 [2024-07-25 15:03:00.814684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.836 [2024-07-25 15:03:00.814716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.836 [2024-07-25 15:03:00.814747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.836 [2024-07-25 15:03:00.814775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.836 [2024-07-25 15:03:00.814803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.836 [2024-07-25 15:03:00.814854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.836 [2024-07-25 15:03:00.814882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.836 [2024-07-25 15:03:00.814914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.836 [2024-07-25 15:03:00.814942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.836 [2024-07-25 15:03:00.814972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.836 [2024-07-25 15:03:00.815002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.836 [2024-07-25 15:03:00.815032] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.836 [2024-07-25 15:03:00.815061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.836 [2024-07-25 15:03:00.815090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.836 [2024-07-25 15:03:00.815131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.836 [2024-07-25 15:03:00.815161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.836 [2024-07-25 15:03:00.815204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.836 [2024-07-25 15:03:00.815232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.836 [2024-07-25 15:03:00.815257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.836 [2024-07-25 15:03:00.815287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.836 [2024-07-25 15:03:00.815312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.836 [2024-07-25 15:03:00.815340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.836 [2024-07-25 15:03:00.815368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.836 [2024-07-25 15:03:00.815396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.836 [2024-07-25 15:03:00.815425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.836 [2024-07-25 15:03:00.815455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:08.836 [2024-07-25 15:03:00.815485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.836 [2024-07-25 15:03:00.815513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.837 [2024-07-25 15:03:00.815542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.837 [2024-07-25 15:03:00.815574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.837 [2024-07-25 15:03:00.815603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.837 [2024-07-25 15:03:00.815633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.837 [2024-07-25 15:03:00.815661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.837 [2024-07-25 15:03:00.816019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.837 [2024-07-25 15:03:00.816049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.837 [2024-07-25 15:03:00.816079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.837 [2024-07-25 15:03:00.816104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.837 [2024-07-25 15:03:00.816132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.837 [2024-07-25 15:03:00.816163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.837 [2024-07-25 15:03:00.816188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.837 [2024-07-25 15:03:00.816229] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.837 [2024-07-25 15:03:00.816258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.837 [2024-07-25 15:03:00.816287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.837 [2024-07-25 15:03:00.816318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.837 [2024-07-25 15:03:00.816346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.837 [2024-07-25 15:03:00.816375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.837 [2024-07-25 15:03:00.816403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.837 [2024-07-25 15:03:00.816431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.837 [2024-07-25 15:03:00.816460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.837 [2024-07-25 15:03:00.816491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.837 [2024-07-25 15:03:00.816527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.837 [2024-07-25 15:03:00.816579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.837 [2024-07-25 15:03:00.816610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.837 [2024-07-25 15:03:00.816641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.837 [2024-07-25 15:03:00.816668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:08.837 [2024-07-25 15:03:00.816703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.837 [2024-07-25 15:03:00.816735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.837 [2024-07-25 15:03:00.816761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.837 [2024-07-25 15:03:00.816790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.837 [2024-07-25 15:03:00.816819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.837 [2024-07-25 15:03:00.816846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.837 [2024-07-25 15:03:00.816873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.837 [2024-07-25 15:03:00.816906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.837 [2024-07-25 15:03:00.816934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.837 [2024-07-25 15:03:00.816964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.837 [2024-07-25 15:03:00.816993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.837 [2024-07-25 15:03:00.817026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.837 [2024-07-25 15:03:00.817062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.837 [2024-07-25 15:03:00.817091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.837 [2024-07-25 
15:03:00.817119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.837 [2024-07-25 15:03:00.817150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.837 [2024-07-25 15:03:00.817175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.837 [2024-07-25 15:03:00.817204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.837 [2024-07-25 15:03:00.817236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.837 [2024-07-25 15:03:00.817263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.837 [2024-07-25 15:03:00.817293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.837 [2024-07-25 15:03:00.817323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.837 [2024-07-25 15:03:00.817353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.837 [2024-07-25 15:03:00.817382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.837 [2024-07-25 15:03:00.817413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.837 [2024-07-25 15:03:00.817442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.837 [2024-07-25 15:03:00.817476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.837 [2024-07-25 15:03:00.817505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.837 [2024-07-25 15:03:00.817534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.837 [2024-07-25 15:03:00.817583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.837 [2024-07-25 15:03:00.817615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.837 [2024-07-25 15:03:00.817642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.837 [2024-07-25 15:03:00.817671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.837 [2024-07-25 15:03:00.817699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.837 [2024-07-25 15:03:00.817732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.837 [2024-07-25 15:03:00.817759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.837 [2024-07-25 15:03:00.817788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.837 [2024-07-25 15:03:00.817830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.837 [2024-07-25 15:03:00.817861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.837 [2024-07-25 15:03:00.817893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.837 [2024-07-25 15:03:00.817924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.837 [2024-07-25 15:03:00.817953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.837 [2024-07-25 15:03:00.818073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.837 
[2024-07-25 15:03:00.818102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.837 [2024-07-25 15:03:00.818134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.837 [2024-07-25 15:03:00.818164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.837 [2024-07-25 15:03:00.818505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.837 [2024-07-25 15:03:00.818537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.837 [2024-07-25 15:03:00.818567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.837 [2024-07-25 15:03:00.818596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.837 [2024-07-25 15:03:00.818625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.837 [2024-07-25 15:03:00.818653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.837 [2024-07-25 15:03:00.818684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.837 [2024-07-25 15:03:00.818718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.837 [2024-07-25 15:03:00.818746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.837 [2024-07-25 15:03:00.818775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.837 [2024-07-25 15:03:00.818805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.837 [2024-07-25 15:03:00.818834] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.837 [2024-07-25 15:03:00.818875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.837 [2024-07-25 15:03:00.818903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.837 [2024-07-25 15:03:00.818932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.837 [2024-07-25 15:03:00.818961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.837 [2024-07-25 15:03:00.818989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.837 [2024-07-25 15:03:00.819021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.838 [2024-07-25 15:03:00.819052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.838 [2024-07-25 15:03:00.819081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.838 [2024-07-25 15:03:00.819113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.838 [2024-07-25 15:03:00.819142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.838 [2024-07-25 15:03:00.819174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.838 [2024-07-25 15:03:00.819207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.838 [2024-07-25 15:03:00.819235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.838 [2024-07-25 15:03:00.819268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:08.838 [2024-07-25 15:03:00.819297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.838 [2024-07-25 15:03:00.819331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.838 [2024-07-25 15:03:00.819360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.838 [2024-07-25 15:03:00.819394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.838 [2024-07-25 15:03:00.819420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.838 [2024-07-25 15:03:00.819453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.838 [2024-07-25 15:03:00.819485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.838 [2024-07-25 15:03:00.819520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.838 [2024-07-25 15:03:00.819561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.838 [2024-07-25 15:03:00.819592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.838 [2024-07-25 15:03:00.819627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.838 [2024-07-25 15:03:00.819663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.838 [2024-07-25 15:03:00.819693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.838 [2024-07-25 15:03:00.819722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.838 [2024-07-25 15:03:00.819750] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.838 [2024-07-25 15:03:00.819780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.838 [2024-07-25 15:03:00.819812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.838 [2024-07-25 15:03:00.819844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.838 [2024-07-25 15:03:00.819870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.838 [2024-07-25 15:03:00.819899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.838 [2024-07-25 15:03:00.819931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.838 [2024-07-25 15:03:00.819962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.838 [2024-07-25 15:03:00.819991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.838 [2024-07-25 15:03:00.820021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.838 [2024-07-25 15:03:00.820051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.838 [2024-07-25 15:03:00.820084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.838 [2024-07-25 15:03:00.820113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.838 [2024-07-25 15:03:00.820144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.838 [2024-07-25 15:03:00.820190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:08.838 [2024-07-25 15:03:00.820220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.838 [2024-07-25 15:03:00.820249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.838 [2024-07-25 15:03:00.820278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.838 [2024-07-25 15:03:00.820312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.838 [2024-07-25 15:03:00.820870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.838 [2024-07-25 15:03:00.820912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.838 [2024-07-25 15:03:00.820941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.838 [2024-07-25 15:03:00.820971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.838 [2024-07-25 15:03:00.820998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.838 [2024-07-25 15:03:00.821027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.838 [2024-07-25 15:03:00.821071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.838 [2024-07-25 15:03:00.821103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.838 [2024-07-25 15:03:00.821134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.838 [2024-07-25 15:03:00.821165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.838 [2024-07-25 
15:03:00.821195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.838 [2024-07-25 15:03:00.821228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.838 [2024-07-25 15:03:00.821259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.838 [2024-07-25 15:03:00.821288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.838 [2024-07-25 15:03:00.821320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.838 [2024-07-25 15:03:00.821348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.838 [2024-07-25 15:03:00.821379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.838 [2024-07-25 15:03:00.821409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.838 [2024-07-25 15:03:00.821438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.838 [2024-07-25 15:03:00.821467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.838 [2024-07-25 15:03:00.821496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.838 [2024-07-25 15:03:00.821526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.838 [2024-07-25 15:03:00.821557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.838 [2024-07-25 15:03:00.821589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.838 [2024-07-25 15:03:00.821623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.838 [2024-07-25 15:03:00.821653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:08.839 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:08.839 15:03:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:08.839 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:08.839 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:09.100 15:03:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:07:09.100 15:03:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:07:09.100 true 00:07:09.100 15:03:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 59262 00:07:09.100 15:03:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:09.361 15:03:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:09.361 15:03:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:07:09.361 15:03:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:07:09.622 true 00:07:09.622 15:03:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 59262 00:07:09.622 15:03:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:09.883 15:03:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:09.883 15:03:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:07:09.883 15:03:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:07:10.144 true 00:07:10.144 15:03:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 59262 00:07:10.144 15:03:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:10.404 15:03:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:10.404 15:03:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:07:10.404 15:03:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:07:10.665 true 00:07:10.665 15:03:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 59262 00:07:10.665 15:03:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:07:10.665 15:03:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:10.925 15:03:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:07:10.925 15:03:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:07:11.185 true 00:07:11.185 15:03:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 59262 00:07:11.185 15:03:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:11.185 15:03:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:11.474 15:03:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:07:11.474 15:03:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:07:11.735 true 00:07:11.735 15:03:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 59262 00:07:11.735 15:03:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:11.735 15:03:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:11.997 15:03:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:07:11.997 15:03:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:07:11.997 true 00:07:12.259 15:03:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 59262 00:07:12.259 15:03:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:12.259 15:03:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:12.520 15:03:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:07:12.520 15:03:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:07:12.520 true 00:07:12.781 15:03:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 59262 00:07:12.781 15:03:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:12.781 15:03:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:13.046 
15:03:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:07:13.046 15:03:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:07:13.046 true 00:07:13.046 15:03:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 59262 00:07:13.046 15:03:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:13.988 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:13.988 15:03:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:13.988 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:13.988 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:13.988 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:14.249 15:03:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:07:14.249 15:03:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:07:14.249 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:14.249 true 00:07:14.249 15:03:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 59262 00:07:14.249 15:03:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:14.510 15:03:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:14.771 15:03:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:07:14.771 15:03:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:07:14.771 true 00:07:14.771 15:03:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 59262 00:07:14.771 15:03:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:15.031 15:03:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:15.292 15:03:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:07:15.292 15:03:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:07:15.292 true 00:07:15.292 15:03:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 59262 00:07:15.292 15:03:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:15.553 15:03:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:15.814 15:03:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:07:15.814 15:03:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:07:15.814 true 00:07:15.814 15:03:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 59262 00:07:15.814 15:03:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:16.074 15:03:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:16.074 15:03:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:07:16.335 15:03:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:07:16.335 true 00:07:16.335 15:03:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 59262 00:07:16.335 15:03:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:16.595 15:03:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:07:16.595 15:03:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:07:16.595 15:03:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:07:16.856 true 00:07:16.856 15:03:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 59262 00:07:16.856 15:03:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:17.116 15:03:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:17.116 15:03:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:07:17.116 15:03:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:07:17.375 true 00:07:17.375 15:03:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 59262 00:07:17.375 15:03:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:18.315 15:03:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:18.315 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:18.315 Message 
suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:18.574 15:03:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:07:18.574 15:03:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:07:18.574 true 00:07:18.574 15:03:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 59262 00:07:18.574 15:03:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:19.512 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:19.771 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:07:19.771 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:07:19.771 true 00:07:19.771 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 59262 00:07:19.771 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:20.031 15:03:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:20.291 15:03:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:07:20.291 15:03:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:07:20.291 true 00:07:20.291 15:03:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 59262 00:07:20.291 15:03:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:20.551 15:03:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:20.551 15:03:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:07:20.551 15:03:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:07:20.811 true 00:07:20.811 15:03:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 59262 00:07:20.811 15:03:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:21.071 15:03:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:21.071 15:03:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:07:21.071 15:03:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:07:21.341 true 00:07:21.341 15:03:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 59262 00:07:21.341 15:03:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:21.607 15:03:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:21.607 15:03:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:07:21.607 15:03:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:07:21.867 true 00:07:21.867 15:03:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 59262 00:07:21.867 15:03:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:22.127 15:03:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:22.127 15:03:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:07:22.127 15:03:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 
00:07:22.388 true 00:07:22.388 15:03:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 59262 00:07:22.388 15:03:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:22.388 15:03:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:22.649 15:03:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:07:22.649 15:03:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:07:22.910 true 00:07:22.910 15:03:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 59262 00:07:22.910 15:03:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:22.910 15:03:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:23.170 15:03:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:07:23.170 15:03:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:07:23.430 true 00:07:23.430 15:03:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 
59262 00:07:23.430 15:03:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:23.430 15:03:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:23.689 15:03:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:07:23.689 15:03:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:07:23.950 true 00:07:23.950 15:03:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 59262 00:07:23.950 15:03:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:23.950 15:03:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:24.210 15:03:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:07:24.210 15:03:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:07:24.210 true 00:07:24.470 15:03:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 59262 00:07:24.470 15:03:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:24.470 15:03:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:24.730 15:03:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:07:24.730 15:03:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:07:24.730 true 00:07:24.730 15:03:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 59262 00:07:24.730 15:03:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:24.989 15:03:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:25.249 15:03:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:07:25.249 15:03:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:07:25.249 true 00:07:25.249 15:03:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 59262 00:07:25.249 15:03:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:25.509 
15:03:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:25.769 15:03:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:07:25.769 15:03:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:07:25.769 true 00:07:25.769 15:03:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 59262 00:07:25.769 15:03:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:26.028 15:03:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:26.287 15:03:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:07:26.288 15:03:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:07:26.288 true 00:07:26.288 15:03:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 59262 00:07:26.288 15:03:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:26.547 15:03:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:26.807 15:03:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:07:26.807 15:03:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:07:26.807 true 00:07:26.807 15:03:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 59262 00:07:26.807 15:03:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:27.067 15:03:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:27.327 15:03:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:07:27.327 15:03:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:07:27.327 true 00:07:27.327 15:03:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 59262 00:07:27.327 15:03:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:27.587 15:03:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:27.587 
15:03:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1052 00:07:27.587 15:03:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052 00:07:27.846 true 00:07:27.846 15:03:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 59262 00:07:27.846 15:03:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:28.785 Initializing NVMe Controllers 00:07:28.785 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:28.785 Controller IO queue size 128, less than required. 00:07:28.785 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:28.785 Controller IO queue size 128, less than required. 00:07:28.786 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:28.786 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:28.786 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:07:28.786 Initialization complete. Launching workers. 
00:07:28.786 ======================================================== 00:07:28.786 Latency(us) 00:07:28.786 Device Information : IOPS MiB/s Average min max 00:07:28.786 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1384.50 0.68 20496.04 2114.55 1138671.30 00:07:28.786 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 6470.07 3.16 19718.41 2187.13 400797.54 00:07:28.786 ======================================================== 00:07:28.786 Total : 7854.57 3.84 19855.48 2114.55 1138671.30 00:07:28.786 00:07:28.786 15:03:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:28.786 15:03:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1053 00:07:28.786 15:03:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053 00:07:29.045 true 00:07:29.045 15:03:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 59262 00:07:29.045 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (59262) - No such process 00:07:29.045 15:03:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 59262 00:07:29.045 15:03:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:29.306 15:03:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:29.306 
15:03:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:07:29.306 15:03:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:07:29.306 15:03:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:07:29.306 15:03:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:29.306 15:03:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:07:29.567 null0 00:07:29.567 15:03:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:29.567 15:03:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:29.567 15:03:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:07:29.567 null1 00:07:29.827 15:03:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:29.827 15:03:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:29.827 15:03:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:07:29.827 null2 00:07:29.827 15:03:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:29.827 15:03:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:29.827 15:03:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:07:30.086 null3 00:07:30.086 15:03:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:30.086 15:03:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:30.086 15:03:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:07:30.086 null4 00:07:30.086 15:03:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:30.086 15:03:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:30.086 15:03:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:07:30.347 null5 00:07:30.347 15:03:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:30.347 15:03:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:30.347 15:03:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:07:30.608 null6 00:07:30.608 15:03:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:30.608 15:03:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:30.608 15:03:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:07:30.608 null7 00:07:30.608 15:03:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:30.608 15:03:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:30.608 15:03:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:07:30.608 15:03:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:30.608 15:03:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:30.608 15:03:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:30.608 15:03:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:30.608 15:03:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:07:30.608 15:03:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:07:30.608 15:03:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:30.608 15:03:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:30.608 15:03:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:30.608 15:03:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:30.608 15:03:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:07:30.609 15:03:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:30.609 15:03:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:07:30.609 15:03:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:30.609 15:03:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:30.609 15:03:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:30.609 15:03:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:30.609 15:03:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:30.609 15:03:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:30.609 15:03:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:30.609 15:03:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:07:30.609 15:03:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:07:30.609 15:03:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:30.609 15:03:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:30.609 15:03:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:30.609 15:03:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:30.609 15:03:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:30.609 15:03:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:30.609 15:03:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:07:30.609 15:03:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:07:30.609 15:03:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:30.609 15:03:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:30.609 15:03:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:30.609 15:03:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:30.609 15:03:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:30.609 15:03:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:30.609 15:03:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:07:30.609 15:03:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:07:30.609 15:03:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:30.609 15:03:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:30.609 15:03:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:30.609 15:03:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:30.609 15:03:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:30.609 15:03:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:30.609 15:03:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:07:30.609 15:03:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:07:30.609 15:03:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:30.609 15:03:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:30.609 15:03:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:30.609 15:03:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:30.609 15:03:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:30.609 15:03:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:30.609 15:03:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:07:30.609 15:03:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:07:30.609 15:03:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:30.609 15:03:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:30.609 15:03:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:30.609 15:03:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:30.609 15:03:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:30.609 15:03:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:30.609 15:03:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 66410 66412 66415 66418 66421 66424 66427 66430 00:07:30.609 15:03:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:07:30.609 15:03:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:07:30.609 15:03:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:30.609 15:03:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:30.609 15:03:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:30.870 15:03:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:30.870 15:03:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:30.870 15:03:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:30.870 15:03:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:30.870 15:03:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:30.870 15:03:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:30.870 15:03:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:30.870 15:03:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:31.131 15:03:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:31.131 15:03:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:31.131 15:03:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:31.131 15:03:23 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:31.131 15:03:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:31.131 15:03:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:31.131 15:03:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:31.131 15:03:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:31.131 15:03:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:31.131 15:03:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:31.131 15:03:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:31.131 15:03:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:31.131 15:03:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:31.131 15:03:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:31.131 15:03:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:31.131 15:03:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:07:31.131 15:03:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:31.131 15:03:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:31.131 15:03:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:31.131 15:03:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:31.131 15:03:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:31.131 15:03:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:31.131 15:03:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:31.131 15:03:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:31.131 15:03:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:31.131 15:03:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:31.131 15:03:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 4 00:07:31.131 15:03:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:31.131 15:03:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:31.131 15:03:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:31.406 15:03:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:31.406 15:03:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:31.406 15:03:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:31.406 15:03:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:31.406 15:03:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:31.406 15:03:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:31.406 15:03:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:31.406 15:03:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:31.406 15:03:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:31.406 15:03:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:31.406 15:03:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:31.406 15:03:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:31.406 15:03:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:31.406 15:03:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:31.406 15:03:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:31.406 15:03:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:31.406 15:03:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:31.406 15:03:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:31.406 15:03:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:31.406 15:03:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:31.406 15:03:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:31.406 15:03:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:31.406 15:03:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:31.406 15:03:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:31.406 15:03:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:31.406 15:03:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:31.406 15:03:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:31.667 15:03:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:31.667 15:03:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:31.667 15:03:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:31.667 15:03:23 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:31.667 15:03:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:31.667 15:03:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:31.667 15:03:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:31.667 15:03:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:31.667 15:03:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:31.667 15:03:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:31.667 15:03:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:31.667 15:03:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:31.667 15:03:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:31.667 15:03:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:31.667 15:03:23 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:31.667 15:03:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:31.667 15:03:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:31.667 15:03:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:31.667 15:03:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:31.667 15:03:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:31.667 15:03:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:31.667 15:03:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:31.667 15:03:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:31.667 15:03:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:31.667 15:03:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:31.667 15:03:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:31.667 15:03:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( i < 10 )) 00:07:31.667 15:03:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:31.926 15:03:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:31.926 15:03:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:31.926 15:03:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:31.926 15:03:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:31.926 15:03:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:31.926 15:03:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:31.926 15:03:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:31.926 15:03:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:31.926 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:31.926 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:31.926 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:31.926 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:31.926 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:31.926 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:32.186 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:32.186 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:32.186 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:32.186 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:32.186 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:32.186 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 
nqn.2016-06.io.spdk:cnode1 null7 00:07:32.186 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:32.186 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:32.186 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:32.186 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:32.186 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:32.186 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:32.186 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:32.186 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:32.186 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:32.186 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:32.186 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:32.186 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:32.186 15:03:24 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:32.186 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:32.186 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:32.186 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:32.186 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:32.186 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:32.186 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:32.186 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:32.186 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:32.186 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:32.186 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:32.448 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:32.448 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:32.448 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:32.448 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:32.448 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:32.448 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:32.448 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:32.448 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:32.448 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:32.448 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:32.448 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:32.448 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:32.448 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:32.448 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:32.448 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:32.448 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:32.448 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:32.448 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:32.448 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:32.448 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:32.448 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:32.448 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:32.448 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:32.448 15:03:24 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:32.448 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:32.709 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:32.709 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:32.709 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:32.709 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:32.709 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:32.709 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:32.709 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:32.709 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:32.709 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:32.709 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:32.709 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:32.709 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:32.709 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:32.710 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:32.710 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:32.710 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:32.710 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:32.710 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:32.710 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:32.710 15:03:24 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:32.710 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:32.710 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:32.969 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:32.969 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:32.969 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:32.969 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:32.969 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:32.969 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:32.969 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:32.969 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:32.970 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:32.970 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:32.970 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:32.970 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:32.970 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:32.970 15:03:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:32.970 15:03:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:32.970 15:03:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:32.970 15:03:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:32.970 15:03:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:32.970 15:03:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:32.970 15:03:25 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:32.970 15:03:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:32.970 15:03:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:32.970 15:03:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:32.970 15:03:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:32.970 15:03:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:32.970 15:03:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:33.229 15:03:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:33.229 15:03:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:33.229 15:03:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:33.229 15:03:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:33.229 15:03:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:33.229 15:03:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:33.229 15:03:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:33.229 15:03:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:33.229 15:03:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:33.229 15:03:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:33.229 15:03:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:33.229 15:03:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:33.229 15:03:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:33.229 15:03:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:33.229 15:03:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:33.229 15:03:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:33.229 15:03:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 2 00:07:33.229 15:03:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:33.229 15:03:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:33.229 15:03:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:33.488 15:03:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:33.488 15:03:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:33.488 15:03:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:33.488 15:03:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:33.488 15:03:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:33.488 15:03:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:33.488 15:03:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i 
)) 00:07:33.488 15:03:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:33.488 15:03:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:33.488 15:03:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:33.488 15:03:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:33.488 15:03:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:33.488 15:03:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:33.488 15:03:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:33.488 15:03:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:33.488 15:03:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:33.488 15:03:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:33.488 15:03:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:33.488 15:03:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:33.488 15:03:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:33.488 15:03:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:33.488 15:03:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:33.488 15:03:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:33.488 15:03:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:33.488 15:03:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:33.488 15:03:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:33.488 15:03:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:33.747 15:03:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:33.747 15:03:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:33.747 15:03:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:33.747 15:03:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:33.747 15:03:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:33.747 15:03:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:33.747 15:03:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:33.747 15:03:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:33.747 15:03:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:33.747 15:03:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:33.747 15:03:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:33.747 15:03:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:33.747 15:03:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:33.747 15:03:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:33.747 15:03:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:33.747 15:03:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:33.747 15:03:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:33.747 15:03:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:33.748 15:03:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:33.748 15:03:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:33.748 15:03:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:33.748 15:03:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:33.748 15:03:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:34.006 15:03:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:34.006 15:03:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:34.006 15:03:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:34.006 15:03:25 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:34.006 15:03:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:34.006 15:03:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:34.006 15:03:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:34.006 15:03:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:34.006 15:03:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:34.006 15:03:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:34.006 15:03:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:34.006 15:03:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:34.006 15:03:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:34.006 15:03:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:34.007 15:03:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:34.007 15:03:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:34.007 15:03:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:34.007 15:03:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:34.007 15:03:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:34.007 15:03:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:34.007 15:03:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:34.007 15:03:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:34.007 15:03:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:34.007 15:03:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:34.007 15:03:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:34.007 15:03:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:34.267 15:03:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 4 00:07:34.267 15:03:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:34.267 15:03:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:34.267 15:03:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:34.267 15:03:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:34.267 15:03:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:34.267 15:03:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:34.267 15:03:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:07:34.267 15:03:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:07:34.267 15:03:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:34.267 15:03:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:07:34.267 15:03:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:34.267 15:03:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:07:34.267 15:03:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:34.267 15:03:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:34.267 rmmod nvme_tcp 00:07:34.267 rmmod nvme_fabrics 00:07:34.267 rmmod nvme_keyring 00:07:34.267 15:03:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:34.267 15:03:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@124 -- # set -e 00:07:34.267 15:03:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:07:34.267 15:03:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 58792 ']' 00:07:34.267 15:03:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 58792 00:07:34.267 15:03:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # '[' -z 58792 ']' 00:07:34.267 15:03:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 58792 00:07:34.267 15:03:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # uname 00:07:34.267 15:03:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:34.528 15:03:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58792 00:07:34.528 15:03:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:07:34.528 15:03:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:07:34.528 15:03:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58792' 00:07:34.528 killing process with pid 58792 00:07:34.528 15:03:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 58792 00:07:34.528 15:03:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 58792 00:07:34.528 15:03:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:34.528 15:03:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:34.528 15:03:26 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:34.528 15:03:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:34.528 15:03:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:34.528 15:03:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:34.528 15:03:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:34.528 15:03:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:37.073 15:03:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:37.073 00:07:37.073 real 0m48.301s 00:07:37.073 user 3m14.694s 00:07:37.073 sys 0m15.769s 00:07:37.073 15:03:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:37.073 15:03:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:37.073 ************************************ 00:07:37.073 END TEST nvmf_ns_hotplug_stress 00:07:37.073 ************************************ 00:07:37.073 15:03:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:07:37.073 15:03:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:37.073 15:03:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:37.073 15:03:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:37.073 ************************************ 00:07:37.073 START TEST nvmf_delete_subsystem 00:07:37.073 
************************************ 00:07:37.073 15:03:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:07:37.073 * Looking for test storage... 00:07:37.073 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:37.073 15:03:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:37.073 15:03:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:07:37.073 15:03:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:37.073 15:03:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:37.073 15:03:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:37.073 15:03:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:37.073 15:03:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:37.073 15:03:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:37.073 15:03:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:37.073 15:03:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:37.073 15:03:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:37.073 15:03:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:37.073 15:03:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:37.073 15:03:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:37.073 15:03:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:37.073 15:03:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:37.073 15:03:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:37.073 15:03:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:37.073 15:03:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:37.073 15:03:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:37.073 15:03:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:37.073 15:03:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:37.073 15:03:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.073 15:03:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.073 15:03:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.073 15:03:28 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:07:37.073 15:03:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.073 15:03:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:07:37.073 15:03:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:37.073 15:03:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:37.073 15:03:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:37.073 15:03:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:37.073 15:03:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:37.073 15:03:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:37.073 15:03:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:37.073 15:03:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:37.073 15:03:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:07:37.073 15:03:28 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:37.073 15:03:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:37.073 15:03:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:37.073 15:03:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:37.073 15:03:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:37.073 15:03:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:37.073 15:03:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:37.073 15:03:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:37.073 15:03:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:37.073 15:03:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:37.073 15:03:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:07:37.073 15:03:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:43.657 15:03:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:43.657 15:03:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:07:43.657 15:03:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:43.657 15:03:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:43.657 15:03:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:43.657 15:03:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:43.657 15:03:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:43.657 15:03:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:07:43.657 15:03:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:43.657 15:03:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:07:43.657 15:03:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:07:43.657 15:03:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:07:43.657 15:03:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:07:43.657 15:03:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:07:43.657 15:03:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:07:43.657 15:03:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:43.657 15:03:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:43.657 15:03:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:43.657 15:03:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:43.657 15:03:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:43.657 15:03:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
00:07:43.657 15:03:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:43.657 15:03:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:43.657 15:03:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:43.657 15:03:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:43.657 15:03:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:43.657 15:03:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:43.657 15:03:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:43.657 15:03:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:43.657 15:03:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:43.657 15:03:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:43.657 15:03:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:43.657 15:03:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:43.657 15:03:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:07:43.657 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:07:43.657 15:03:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:43.657 15:03:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:43.657 15:03:35 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:43.657 15:03:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:43.657 15:03:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:43.657 15:03:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:43.657 15:03:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:07:43.657 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:07:43.657 15:03:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:43.657 15:03:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:43.657 15:03:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:43.657 15:03:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:43.657 15:03:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:43.657 15:03:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:43.657 15:03:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:43.657 15:03:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:43.657 15:03:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:43.657 15:03:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:43.657 15:03:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 
00:07:43.657 15:03:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:43.657 15:03:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:43.657 15:03:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:43.657 15:03:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:43.657 15:03:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:07:43.657 Found net devices under 0000:4b:00.0: cvl_0_0 00:07:43.657 15:03:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:43.657 15:03:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:43.657 15:03:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:43.657 15:03:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:43.657 15:03:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:43.657 15:03:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:43.657 15:03:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:43.657 15:03:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:43.657 15:03:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:07:43.657 Found net devices under 0000:4b:00.1: cvl_0_1 00:07:43.657 15:03:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:43.657 15:03:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:43.657 15:03:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:07:43.658 15:03:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:43.658 15:03:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:43.658 15:03:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:43.658 15:03:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:43.658 15:03:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:43.658 15:03:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:43.658 15:03:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:43.658 15:03:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:43.658 15:03:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:43.658 15:03:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:43.658 15:03:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:43.658 15:03:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:43.658 15:03:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:43.658 15:03:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip 
-4 addr flush cvl_0_1 00:07:43.658 15:03:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:43.658 15:03:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:43.658 15:03:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:43.658 15:03:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:43.658 15:03:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:43.658 15:03:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:43.658 15:03:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:43.658 15:03:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:43.658 15:03:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:43.919 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:43.919 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.631 ms 00:07:43.919 00:07:43.919 --- 10.0.0.2 ping statistics --- 00:07:43.919 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:43.919 rtt min/avg/max/mdev = 0.631/0.631/0.631/0.000 ms 00:07:43.919 15:03:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:43.919 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:43.919 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.380 ms 00:07:43.919 00:07:43.919 --- 10.0.0.1 ping statistics --- 00:07:43.919 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:43.919 rtt min/avg/max/mdev = 0.380/0.380/0.380/0.000 ms 00:07:43.919 15:03:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:43.919 15:03:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:07:43.919 15:03:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:43.919 15:03:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:43.919 15:03:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:43.919 15:03:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:43.919 15:03:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:43.919 15:03:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:43.919 15:03:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:43.919 15:03:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:07:43.919 15:03:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:43.919 15:03:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:43.919 15:03:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:43.919 15:03:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=71549 00:07:43.919 15:03:35 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 71549 00:07:43.919 15:03:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:07:43.919 15:03:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # '[' -z 71549 ']' 00:07:43.919 15:03:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:43.919 15:03:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:43.919 15:03:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:43.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:43.919 15:03:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:43.919 15:03:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:43.919 [2024-07-25 15:03:35.966432] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:07:43.919 [2024-07-25 15:03:35.966480] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:43.919 EAL: No free 2048 kB hugepages reported on node 1 00:07:43.919 [2024-07-25 15:03:36.031858] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:43.919 [2024-07-25 15:03:36.096218] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:07:43.919 [2024-07-25 15:03:36.096255] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:43.919 [2024-07-25 15:03:36.096263] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:43.919 [2024-07-25 15:03:36.096269] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:43.919 [2024-07-25 15:03:36.096275] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:43.919 [2024-07-25 15:03:36.096413] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:43.919 [2024-07-25 15:03:36.096506] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.862 15:03:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:44.862 15:03:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0 00:07:44.862 15:03:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:44.862 15:03:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:44.862 15:03:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:44.862 15:03:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:44.862 15:03:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:44.862 15:03:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.862 15:03:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:44.862 [2024-07-25 15:03:36.767696] tcp.c: 
677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:44.862 15:03:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.862 15:03:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:44.862 15:03:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.862 15:03:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:44.862 15:03:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.862 15:03:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:44.862 15:03:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.862 15:03:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:44.862 [2024-07-25 15:03:36.783864] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:44.862 15:03:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.862 15:03:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:07:44.862 15:03:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.862 15:03:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:44.862 NULL1 00:07:44.862 15:03:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.862 15:03:36 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:44.862 15:03:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.862 15:03:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:44.862 Delay0 00:07:44.862 15:03:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.862 15:03:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:44.862 15:03:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.862 15:03:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:44.862 15:03:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.862 15:03:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=71758 00:07:44.863 15:03:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:07:44.863 15:03:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:44.863 EAL: No free 2048 kB hugepages reported on node 1 00:07:44.863 [2024-07-25 15:03:36.868546] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:07:46.772 15:03:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:46.772 15:03:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.772 15:03:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:47.034 Read completed with error (sct=0, sc=8) 00:07:47.034 Write completed with error (sct=0, sc=8) 00:07:47.034 Read completed with error (sct=0, sc=8) 00:07:47.034 Read completed with error (sct=0, sc=8) 00:07:47.034 starting I/O failed: -6 00:07:47.034 Write completed with error (sct=0, sc=8) 00:07:47.034 Read completed with error (sct=0, sc=8) 00:07:47.034 Read completed with error (sct=0, sc=8) 00:07:47.034 Read completed with error (sct=0, sc=8) 00:07:47.034 starting I/O failed: -6 00:07:47.034 Read completed with error (sct=0, sc=8) 00:07:47.034 Write completed with error (sct=0, sc=8) 00:07:47.034 Write completed with error (sct=0, sc=8) 00:07:47.034 Read completed with error (sct=0, sc=8) 00:07:47.034 starting I/O failed: -6 00:07:47.034 Write completed with error (sct=0, sc=8) 00:07:47.034 Read completed with error (sct=0, sc=8) 00:07:47.034 Read completed with error (sct=0, sc=8) 00:07:47.034 Write completed with error (sct=0, sc=8) 00:07:47.034 starting I/O failed: -6 00:07:47.034 Read completed with error (sct=0, sc=8) 00:07:47.034 Read completed with error (sct=0, sc=8) 00:07:47.034 Read completed with error (sct=0, sc=8) 00:07:47.034 Read completed with error (sct=0, sc=8) 00:07:47.034 starting I/O failed: -6 00:07:47.034 Read completed with error (sct=0, sc=8) 00:07:47.034 Read completed with error (sct=0, sc=8) 00:07:47.034 Write completed with error (sct=0, sc=8) 00:07:47.034 Read completed with error (sct=0, sc=8) 00:07:47.034 starting I/O failed: -6 00:07:47.034 Write completed with error (sct=0, sc=8) 00:07:47.034 Read completed with error 
(sct=0, sc=8) 00:07:47.034 Read completed with error (sct=0, sc=8) 00:07:47.034 Read completed with error (sct=0, sc=8) 00:07:47.034 starting I/O failed: -6 00:07:47.034 Read completed with error (sct=0, sc=8) 00:07:47.034 Read completed with error (sct=0, sc=8) 00:07:47.034 Read completed with error (sct=0, sc=8) 00:07:47.034 Write completed with error (sct=0, sc=8) 00:07:47.034 starting I/O failed: -6 00:07:47.034 Read completed with error (sct=0, sc=8) 00:07:47.034 Write completed with error (sct=0, sc=8) 00:07:47.034 Read completed with error (sct=0, sc=8) 00:07:47.034 Write completed with error (sct=0, sc=8) 00:07:47.034 starting I/O failed: -6 00:07:47.034 Read completed with error (sct=0, sc=8) 00:07:47.034 Read completed with error (sct=0, sc=8) 00:07:47.034 Read completed with error (sct=0, sc=8) 00:07:47.034 Read completed with error (sct=0, sc=8) 00:07:47.034 starting I/O failed: -6 00:07:47.034 Write completed with error (sct=0, sc=8) 00:07:47.034 Read completed with error (sct=0, sc=8) 00:07:47.034 [2024-07-25 15:03:39.092959] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d8710 is same with the state(5) to be set 00:07:47.034 Read completed with error (sct=0, sc=8) 00:07:47.034 Write completed with error (sct=0, sc=8) 00:07:47.034 Write completed with error (sct=0, sc=8) 00:07:47.034 Read completed with error (sct=0, sc=8) 00:07:47.034 Read completed with error (sct=0, sc=8) 00:07:47.034 Read completed with error (sct=0, sc=8) 00:07:47.034 Read completed with error (sct=0, sc=8) 00:07:47.034 Write completed with error (sct=0, sc=8) 00:07:47.034 Write completed with error (sct=0, sc=8) 00:07:47.034 Write completed with error (sct=0, sc=8) 00:07:47.034 Write completed with error (sct=0, sc=8) 00:07:47.034 Read completed with error (sct=0, sc=8) 00:07:47.034 Read completed with error (sct=0, sc=8) 00:07:47.034 Read completed with error (sct=0, sc=8) 00:07:47.034 Write completed with error (sct=0, sc=8) 00:07:47.034 
Read completed with error (sct=0, sc=8) 00:07:47.034 Read completed with error (sct=0, sc=8) 00:07:47.034 Read completed with error (sct=0, sc=8) 00:07:47.034 Write completed with error (sct=0, sc=8) 00:07:47.034 Read completed with error (sct=0, sc=8) 00:07:47.034 Write completed with error (sct=0, sc=8) 00:07:47.034 Read completed with error (sct=0, sc=8) 00:07:47.034 Read completed with error (sct=0, sc=8) 00:07:47.034 Write completed with error (sct=0, sc=8) 00:07:47.034 Read completed with error (sct=0, sc=8) 00:07:47.034 Write completed with error (sct=0, sc=8) 00:07:47.034 Read completed with error (sct=0, sc=8) 00:07:47.034 Read completed with error (sct=0, sc=8) 00:07:47.034 Read completed with error (sct=0, sc=8) 00:07:47.034 Read completed with error (sct=0, sc=8) 00:07:47.034 Read completed with error (sct=0, sc=8) 00:07:47.034 Write completed with error (sct=0, sc=8) 00:07:47.034 Read completed with error (sct=0, sc=8) 00:07:47.034 Read completed with error (sct=0, sc=8) 00:07:47.034 Read completed with error (sct=0, sc=8) 00:07:47.034 Read completed with error (sct=0, sc=8) 00:07:47.034 Read completed with error (sct=0, sc=8) 00:07:47.034 Write completed with error (sct=0, sc=8) 00:07:47.034 Read completed with error (sct=0, sc=8) 00:07:47.034 Read completed with error (sct=0, sc=8) 00:07:47.034 Write completed with error (sct=0, sc=8) 00:07:47.034 Read completed with error (sct=0, sc=8) 00:07:47.034 Write completed with error (sct=0, sc=8) 00:07:47.034 Read completed with error (sct=0, sc=8) 00:07:47.034 Read completed with error (sct=0, sc=8) 00:07:47.034 Write completed with error (sct=0, sc=8) 00:07:47.034 Read completed with error (sct=0, sc=8) 00:07:47.034 Write completed with error (sct=0, sc=8) 00:07:47.034 Read completed with error (sct=0, sc=8) 00:07:47.034 Write completed with error (sct=0, sc=8) 00:07:47.034 Read completed with error (sct=0, sc=8) 00:07:47.034 Read completed with error (sct=0, sc=8) 00:07:47.034 Read completed with error 
(sct=0, sc=8) 00:07:47.034 Read completed with error (sct=0, sc=8) 00:07:47.034 starting I/O failed: -6 00:07:47.034 Read completed with error (sct=0, sc=8) 00:07:47.034 Read completed with error (sct=0, sc=8) 00:07:47.034 Write completed with error (sct=0, sc=8) 00:07:47.034 Write completed with error (sct=0, sc=8) 00:07:47.034 starting I/O failed: -6 00:07:47.034 Write completed with error (sct=0, sc=8) 00:07:47.034 Write completed with error (sct=0, sc=8) 00:07:47.034 Write completed with error (sct=0, sc=8) 00:07:47.034 Read completed with error (sct=0, sc=8) 00:07:47.034 starting I/O failed: -6 00:07:47.034 Read completed with error (sct=0, sc=8) 00:07:47.034 Read completed with error (sct=0, sc=8) 00:07:47.034 Read completed with error (sct=0, sc=8) 00:07:47.034 Read completed with error (sct=0, sc=8) 00:07:47.034 starting I/O failed: -6 00:07:47.034 Read completed with error (sct=0, sc=8) 00:07:47.034 Write completed with error (sct=0, sc=8) 00:07:47.034 Read completed with error (sct=0, sc=8) 00:07:47.034 Read completed with error (sct=0, sc=8) 00:07:47.034 starting I/O failed: -6 00:07:47.034 Read completed with error (sct=0, sc=8) 00:07:47.034 Read completed with error (sct=0, sc=8) 00:07:47.034 Read completed with error (sct=0, sc=8) 00:07:47.034 Read completed with error (sct=0, sc=8) 00:07:47.034 starting I/O failed: -6 00:07:47.034 Read completed with error (sct=0, sc=8) 00:07:47.034 Read completed with error (sct=0, sc=8) 00:07:47.034 Write completed with error (sct=0, sc=8) 00:07:47.034 Read completed with error (sct=0, sc=8) 00:07:47.034 starting I/O failed: -6 00:07:47.034 Read completed with error (sct=0, sc=8) 00:07:47.034 Read completed with error (sct=0, sc=8) 00:07:47.034 Read completed with error (sct=0, sc=8) 00:07:47.034 Read completed with error (sct=0, sc=8) 00:07:47.034 starting I/O failed: -6 00:07:47.034 Read completed with error (sct=0, sc=8) 00:07:47.034 Write completed with error (sct=0, sc=8) 00:07:47.034 Read completed with error 
(sct=0, sc=8) 00:07:47.034 Read completed with error (sct=0, sc=8) 00:07:47.034 starting I/O failed: -6 00:07:47.034 Read completed with error (sct=0, sc=8) 00:07:47.034 Write completed with error (sct=0, sc=8) 00:07:47.034 Read completed with error (sct=0, sc=8) 00:07:47.034 Write completed with error (sct=0, sc=8) 00:07:47.034 starting I/O failed: -6 00:07:47.034 Write completed with error (sct=0, sc=8) 00:07:47.034 Read completed with error (sct=0, sc=8) 00:07:47.034 Read completed with error (sct=0, sc=8) 00:07:47.034 [2024-07-25 15:03:39.096775] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fb580000c00 is same with the state(5) to be set 00:07:47.034 Write completed with error (sct=0, sc=8) 00:07:47.034 Read completed with error (sct=0, sc=8) 00:07:47.034 Read completed with error (sct=0, sc=8) 00:07:47.034 Read completed with error (sct=0, sc=8) 00:07:47.034 Write completed with error (sct=0, sc=8) 00:07:47.034 Read completed with error (sct=0, sc=8) 00:07:47.034 Read completed with error (sct=0, sc=8) 00:07:47.034 Read completed with error (sct=0, sc=8) 00:07:47.034 Read completed with error (sct=0, sc=8) 00:07:47.034 Read completed with error (sct=0, sc=8) 00:07:47.034 Write completed with error (sct=0, sc=8) 00:07:47.034 Read completed with error (sct=0, sc=8) 00:07:47.034 Write completed with error (sct=0, sc=8) 00:07:47.034 Read completed with error (sct=0, sc=8) 00:07:47.034 Read completed with error (sct=0, sc=8) 00:07:47.034 Read completed with error (sct=0, sc=8) 00:07:47.034 Read completed with error (sct=0, sc=8) 00:07:47.034 Read completed with error (sct=0, sc=8) 00:07:47.034 Write completed with error (sct=0, sc=8) 00:07:47.035 Read completed with error (sct=0, sc=8) 00:07:47.035 Write completed with error (sct=0, sc=8) 00:07:47.035 Read completed with error (sct=0, sc=8) 00:07:47.035 Read completed with error (sct=0, sc=8) 00:07:47.035 Read completed with error (sct=0, sc=8) 00:07:47.035 Read completed with 
error (sct=0, sc=8) 00:07:47.035 Write completed with error (sct=0, sc=8) 00:07:47.035 Read completed with error (sct=0, sc=8) 00:07:47.035 Write completed with error (sct=0, sc=8) 00:07:47.035 Read completed with error (sct=0, sc=8) 00:07:47.035 Read completed with error (sct=0, sc=8) 00:07:47.035 Write completed with error (sct=0, sc=8) 00:07:47.035 Write completed with error (sct=0, sc=8) 00:07:47.035 Read completed with error (sct=0, sc=8) 00:07:47.035 Read completed with error (sct=0, sc=8) 00:07:47.035 Read completed with error (sct=0, sc=8) 00:07:47.035 Read completed with error (sct=0, sc=8) 00:07:47.035 Read completed with error (sct=0, sc=8) 00:07:47.035 Write completed with error (sct=0, sc=8) 00:07:47.035 Read completed with error (sct=0, sc=8) 00:07:47.035 Write completed with error (sct=0, sc=8) 00:07:47.035 Write completed with error (sct=0, sc=8) 00:07:47.035 Read completed with error (sct=0, sc=8) 00:07:47.035 Read completed with error (sct=0, sc=8) 00:07:47.035 Read completed with error (sct=0, sc=8) 00:07:47.035 Read completed with error (sct=0, sc=8) 00:07:47.035 Read completed with error (sct=0, sc=8) 00:07:47.035 Write completed with error (sct=0, sc=8) 00:07:47.035 Read completed with error (sct=0, sc=8) 00:07:47.035 Write completed with error (sct=0, sc=8) 00:07:47.035 Read completed with error (sct=0, sc=8) 00:07:47.978 [2024-07-25 15:03:40.050229] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d9ac0 is same with the state(5) to be set 00:07:47.978 Write completed with error (sct=0, sc=8) 00:07:47.978 Write completed with error (sct=0, sc=8) 00:07:47.978 Write completed with error (sct=0, sc=8) 00:07:47.978 Write completed with error (sct=0, sc=8) 00:07:47.978 Write completed with error (sct=0, sc=8) 00:07:47.978 Read completed with error (sct=0, sc=8) 00:07:47.978 Read completed with error (sct=0, sc=8) 00:07:47.978 Write completed with error (sct=0, sc=8) 00:07:47.978 Read completed with error (sct=0, 
sc=8) 00:07:47.978 Read completed with error (sct=0, sc=8) 00:07:47.978 Read completed with error (sct=0, sc=8) 00:07:47.978 Read completed with error (sct=0, sc=8) 00:07:47.978 Read completed with error (sct=0, sc=8) 00:07:47.978 Read completed with error (sct=0, sc=8) 00:07:47.978 Read completed with error (sct=0, sc=8) 00:07:47.978 Write completed with error (sct=0, sc=8) 00:07:47.978 Write completed with error (sct=0, sc=8) 00:07:47.978 Read completed with error (sct=0, sc=8) 00:07:47.978 Read completed with error (sct=0, sc=8) 00:07:47.978 Write completed with error (sct=0, sc=8) 00:07:47.978 [2024-07-25 15:03:40.096977] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d8a40 is same with the state(5) to be set 00:07:47.978 Read completed with error (sct=0, sc=8) 00:07:47.978 Read completed with error (sct=0, sc=8) 00:07:47.978 Read completed with error (sct=0, sc=8) 00:07:47.978 Read completed with error (sct=0, sc=8) 00:07:47.978 Write completed with error (sct=0, sc=8) 00:07:47.978 Write completed with error (sct=0, sc=8) 00:07:47.978 Read completed with error (sct=0, sc=8) 00:07:47.978 Read completed with error (sct=0, sc=8) 00:07:47.978 Read completed with error (sct=0, sc=8) 00:07:47.978 Read completed with error (sct=0, sc=8) 00:07:47.978 Read completed with error (sct=0, sc=8) 00:07:47.978 Read completed with error (sct=0, sc=8) 00:07:47.978 Read completed with error (sct=0, sc=8) 00:07:47.978 Read completed with error (sct=0, sc=8) 00:07:47.978 Read completed with error (sct=0, sc=8) 00:07:47.978 Read completed with error (sct=0, sc=8) 00:07:47.978 Write completed with error (sct=0, sc=8) 00:07:47.978 Read completed with error (sct=0, sc=8) 00:07:47.978 Write completed with error (sct=0, sc=8) 00:07:47.978 Read completed with error (sct=0, sc=8) 00:07:47.978 [2024-07-25 15:03:40.097073] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d83e0 is same with the state(5) to be set 
00:07:47.978 Write completed with error (sct=0, sc=8) 00:07:47.978 Read completed with error (sct=0, sc=8) 00:07:47.978 Read completed with error (sct=0, sc=8) 00:07:47.978 Read completed with error (sct=0, sc=8) 00:07:47.978 Read completed with error (sct=0, sc=8) 00:07:47.978 Read completed with error (sct=0, sc=8) 00:07:47.978 Read completed with error (sct=0, sc=8) 00:07:47.978 Read completed with error (sct=0, sc=8) 00:07:47.978 Write completed with error (sct=0, sc=8) 00:07:47.978 Read completed with error (sct=0, sc=8) 00:07:47.978 Read completed with error (sct=0, sc=8) 00:07:47.978 Read completed with error (sct=0, sc=8) 00:07:47.978 Read completed with error (sct=0, sc=8) 00:07:47.978 Read completed with error (sct=0, sc=8) 00:07:47.978 Read completed with error (sct=0, sc=8) 00:07:47.978 Read completed with error (sct=0, sc=8) 00:07:47.978 Write completed with error (sct=0, sc=8) 00:07:47.978 Write completed with error (sct=0, sc=8) 00:07:47.978 [2024-07-25 15:03:40.099208] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fb58000d000 is same with the state(5) to be set 00:07:47.978 Read completed with error (sct=0, sc=8) 00:07:47.978 Read completed with error (sct=0, sc=8) 00:07:47.978 Read completed with error (sct=0, sc=8) 00:07:47.978 Read completed with error (sct=0, sc=8) 00:07:47.978 Read completed with error (sct=0, sc=8) 00:07:47.978 Read completed with error (sct=0, sc=8) 00:07:47.978 Write completed with error (sct=0, sc=8) 00:07:47.978 Read completed with error (sct=0, sc=8) 00:07:47.978 Read completed with error (sct=0, sc=8) 00:07:47.978 Read completed with error (sct=0, sc=8) 00:07:47.978 Read completed with error (sct=0, sc=8) 00:07:47.978 Read completed with error (sct=0, sc=8) 00:07:47.978 Write completed with error (sct=0, sc=8) 00:07:47.978 Read completed with error (sct=0, sc=8) 00:07:47.978 Read completed with error (sct=0, sc=8) 00:07:47.978 Read completed with error (sct=0, sc=8) 00:07:47.978 Read 
completed with error (sct=0, sc=8) 00:07:47.978 Read completed with error (sct=0, sc=8) 00:07:47.978 [2024-07-25 15:03:40.099445] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fb58000d660 is same with the state(5) to be set 00:07:47.978 Initializing NVMe Controllers 00:07:47.978 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:47.978 Controller IO queue size 128, less than required. 00:07:47.978 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:47.978 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:07:47.978 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:07:47.978 Initialization complete. Launching workers. 00:07:47.978 ======================================================== 00:07:47.978 Latency(us) 00:07:47.978 Device Information : IOPS MiB/s Average min max 00:07:47.978 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 162.76 0.08 908977.70 205.57 1007636.15 00:07:47.978 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 158.28 0.08 924415.01 282.69 1043121.06 00:07:47.978 ======================================================== 00:07:47.978 Total : 321.04 0.16 916588.65 205.57 1043121.06 00:07:47.978 00:07:47.978 [2024-07-25 15:03:40.100028] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d9ac0 (9): Bad file descriptor 00:07:47.978 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:07:47.978 15:03:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.978 15:03:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:07:47.978 15:03:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@35 -- # kill -0 71758 00:07:47.978 15:03:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:07:48.551 15:03:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:07:48.551 15:03:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 71758 00:07:48.551 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (71758) - No such process 00:07:48.551 15:03:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 71758 00:07:48.551 15:03:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:07:48.551 15:03:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 71758 00:07:48.551 15:03:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:07:48.551 15:03:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:48.551 15:03:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:07:48.551 15:03:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:48.551 15:03:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 71758 00:07:48.551 15:03:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:07:48.551 15:03:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:48.551 15:03:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:48.551 15:03:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:48.551 15:03:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:48.551 15:03:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.551 15:03:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:48.551 15:03:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.551 15:03:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:48.551 15:03:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.551 15:03:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:48.551 [2024-07-25 15:03:40.629338] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:48.551 15:03:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.551 15:03:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:48.551 15:03:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.551 15:03:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:48.551 15:03:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.551 15:03:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=72582 00:07:48.551 15:03:40 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:07:48.551 15:03:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:48.551 15:03:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 72582 00:07:48.551 15:03:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:48.551 EAL: No free 2048 kB hugepages reported on node 1 00:07:48.551 [2024-07-25 15:03:40.699395] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:07:49.123 15:03:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:49.123 15:03:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 72582 00:07:49.123 15:03:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:49.722 15:03:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:49.722 15:03:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 72582 00:07:49.722 15:03:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:49.983 15:03:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:49.983 15:03:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 72582 00:07:49.983 15:03:42 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:50.553 15:03:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:50.553 15:03:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 72582 00:07:50.553 15:03:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:51.123 15:03:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:51.123 15:03:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 72582 00:07:51.123 15:03:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:51.696 15:03:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:51.696 15:03:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 72582 00:07:51.696 15:03:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:51.956 Initializing NVMe Controllers 00:07:51.956 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:51.956 Controller IO queue size 128, less than required. 00:07:51.956 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:51.956 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:07:51.956 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:07:51.956 Initialization complete. Launching workers. 
00:07:51.956 ======================================================== 00:07:51.957 Latency(us) 00:07:51.957 Device Information : IOPS MiB/s Average min max 00:07:51.957 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003006.94 1000328.31 1010680.95 00:07:51.957 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003508.57 1000365.60 1009732.27 00:07:51.957 ======================================================== 00:07:51.957 Total : 256.00 0.12 1003257.76 1000328.31 1010680.95 00:07:51.957 00:07:52.217 15:03:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:52.217 15:03:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 72582 00:07:52.217 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (72582) - No such process 00:07:52.217 15:03:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 72582 00:07:52.217 15:03:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:07:52.217 15:03:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:07:52.217 15:03:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:52.217 15:03:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:07:52.217 15:03:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:52.217 15:03:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:07:52.217 15:03:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:52.217 15:03:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r 
nvme-tcp 00:07:52.217 rmmod nvme_tcp 00:07:52.217 rmmod nvme_fabrics 00:07:52.217 rmmod nvme_keyring 00:07:52.217 15:03:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:52.217 15:03:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:07:52.217 15:03:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:07:52.217 15:03:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 71549 ']' 00:07:52.217 15:03:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 71549 00:07:52.217 15:03:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 71549 ']' 00:07:52.217 15:03:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 71549 00:07:52.217 15:03:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # uname 00:07:52.217 15:03:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:52.217 15:03:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71549 00:07:52.217 15:03:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:52.217 15:03:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:52.217 15:03:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71549' 00:07:52.217 killing process with pid 71549 00:07:52.217 15:03:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 71549 00:07:52.217 15:03:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 71549 00:07:52.478 
15:03:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:52.479 15:03:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:52.479 15:03:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:52.479 15:03:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:52.479 15:03:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:52.479 15:03:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:52.479 15:03:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:52.479 15:03:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:54.394 15:03:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:54.394 00:07:54.394 real 0m17.730s 00:07:54.394 user 0m30.878s 00:07:54.394 sys 0m6.061s 00:07:54.394 15:03:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:54.394 15:03:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:54.394 ************************************ 00:07:54.394 END TEST nvmf_delete_subsystem 00:07:54.394 ************************************ 00:07:54.394 15:03:46 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:54.394 15:03:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:54.394 15:03:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # 
xtrace_disable 00:07:54.394 15:03:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:54.657 ************************************ 00:07:54.657 START TEST nvmf_host_management 00:07:54.657 ************************************ 00:07:54.657 15:03:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:54.657 * Looking for test storage... 00:07:54.657 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:54.657 15:03:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:54.657 15:03:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:07:54.657 15:03:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:54.657 15:03:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:54.657 15:03:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:54.657 15:03:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:54.657 15:03:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:54.657 15:03:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:54.657 15:03:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:54.657 15:03:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:54.657 15:03:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:54.657 15:03:46 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:54.657 15:03:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:54.657 15:03:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:54.657 15:03:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:54.657 15:03:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:54.657 15:03:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:54.657 15:03:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:54.657 15:03:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:54.657 15:03:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:54.657 15:03:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:54.657 15:03:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:54.657 15:03:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.657 15:03:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.657 15:03:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.657 15:03:46 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:07:54.657 15:03:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.657 15:03:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:07:54.657 15:03:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:54.657 15:03:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:54.657 15:03:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:54.657 15:03:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:54.657 15:03:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:54.657 15:03:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:54.657 15:03:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:54.657 15:03:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:54.657 15:03:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:54.657 15:03:46 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:54.657 15:03:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:07:54.657 15:03:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:54.657 15:03:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:54.657 15:03:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:54.657 15:03:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:54.657 15:03:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:54.657 15:03:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:54.657 15:03:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:54.657 15:03:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:54.657 15:03:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:54.657 15:03:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:54.657 15:03:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:07:54.657 15:03:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:02.801 15:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:02.801 15:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:08:02.801 15:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:08:02.801 15:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:02.801 15:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:02.801 15:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:02.801 15:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:02.801 15:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:08:02.801 15:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:02.801 15:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:08:02.801 15:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:08:02.801 15:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:08:02.801 15:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:08:02.801 15:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:08:02.801 15:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:08:02.801 15:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:02.801 15:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:02.801 15:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:02.801 15:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:02.801 15:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:02.801 15:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:02.801 15:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:02.802 15:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:02.802 15:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:02.802 15:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:02.802 15:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:02.802 15:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:02.802 15:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:02.802 15:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:02.802 15:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:02.802 15:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:02.802 15:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:02.802 15:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:02.802 15:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:02.802 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:02.802 15:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:02.802 15:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:02.802 15:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:02.802 15:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:02.802 15:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:02.802 15:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:02.802 15:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:02.802 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:02.802 15:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:02.802 15:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:02.802 15:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:02.802 15:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:02.802 15:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:02.802 15:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:02.802 15:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:02.802 15:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:02.802 15:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:02.802 15:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:02.802 15:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:02.802 15:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:02.802 15:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:02.802 15:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:02.802 15:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:02.802 15:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:02.802 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:02.802 15:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:02.802 15:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:02.802 15:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:02.802 15:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:02.802 15:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:02.802 15:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:02.802 15:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:02.802 15:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:02.802 15:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: 
cvl_0_1' 00:08:02.802 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:02.802 15:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:02.802 15:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:02.802 15:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:08:02.802 15:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:02.802 15:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:02.802 15:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:02.802 15:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:02.802 15:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:02.802 15:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:02.802 15:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:02.802 15:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:02.802 15:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:02.802 15:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:02.802 15:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:02.802 15:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:02.802 15:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # 
ip -4 addr flush cvl_0_0 00:08:02.802 15:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:02.802 15:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:02.802 15:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:02.802 15:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:02.802 15:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:02.802 15:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:02.802 15:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:02.802 15:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:02.802 15:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:02.802 15:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:02.802 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:02.802 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.533 ms 00:08:02.802 00:08:02.802 --- 10.0.0.2 ping statistics --- 00:08:02.802 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:02.802 rtt min/avg/max/mdev = 0.533/0.533/0.533/0.000 ms 00:08:02.802 15:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:02.802 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:02.802 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.387 ms 00:08:02.802 00:08:02.802 --- 10.0.0.1 ping statistics --- 00:08:02.802 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:02.802 rtt min/avg/max/mdev = 0.387/0.387/0.387/0.000 ms 00:08:02.802 15:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:02.802 15:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:08:02.802 15:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:02.802 15:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:02.802 15:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:02.802 15:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:02.802 15:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:02.802 15:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:02.802 15:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:02.802 15:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:08:02.802 15:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:08:02.802 15:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:08:02.802 15:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:02.802 15:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:02.802 15:03:53 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:02.802 15:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=77361 00:08:02.802 15:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 77361 00:08:02.802 15:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:08:02.802 15:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 77361 ']' 00:08:02.802 15:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:02.803 15:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:02.803 15:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:02.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:02.803 15:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:02.803 15:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:02.803 [2024-07-25 15:03:53.948431] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:08:02.803 [2024-07-25 15:03:53.948497] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:02.803 EAL: No free 2048 kB hugepages reported on node 1 00:08:02.803 [2024-07-25 15:03:54.040444] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:02.803 [2024-07-25 15:03:54.137359] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:02.803 [2024-07-25 15:03:54.137424] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:02.803 [2024-07-25 15:03:54.137433] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:02.803 [2024-07-25 15:03:54.137440] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:02.803 [2024-07-25 15:03:54.137447] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:02.803 [2024-07-25 15:03:54.137587] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:02.803 [2024-07-25 15:03:54.137754] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:02.803 [2024-07-25 15:03:54.137919] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:02.803 [2024-07-25 15:03:54.137921] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:08:02.803 15:03:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:02.803 15:03:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:08:02.803 15:03:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:02.803 15:03:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:02.803 15:03:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:02.803 15:03:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:02.803 15:03:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:02.803 15:03:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.803 15:03:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:02.803 [2024-07-25 15:03:54.777159] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:02.803 15:03:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.803 15:03:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:08:02.803 15:03:54 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:02.803 15:03:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:02.803 15:03:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:08:02.803 15:03:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:08:02.803 15:03:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:08:02.803 15:03:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.803 15:03:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:02.803 Malloc0 00:08:02.803 [2024-07-25 15:03:54.836414] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:02.803 15:03:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.803 15:03:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:08:02.803 15:03:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:02.803 15:03:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:02.803 15:03:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=77661 00:08:02.803 15:03:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 77661 /var/tmp/bdevperf.sock 00:08:02.803 15:03:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 77661 ']' 00:08:02.803 15:03:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:02.803 15:03:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:02.803 15:03:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:08:02.803 15:03:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:02.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:02.803 15:03:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:08:02.803 15:03:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:02.803 15:03:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:02.803 15:03:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:08:02.803 15:03:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:08:02.803 15:03:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:02.803 15:03:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:02.803 { 00:08:02.803 "params": { 00:08:02.803 "name": "Nvme$subsystem", 00:08:02.803 "trtype": "$TEST_TRANSPORT", 00:08:02.803 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:02.803 "adrfam": "ipv4", 00:08:02.803 "trsvcid": "$NVMF_PORT", 00:08:02.803 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:02.803 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:02.803 "hdgst": ${hdgst:-false}, 
00:08:02.803 "ddgst": ${ddgst:-false} 00:08:02.803 }, 00:08:02.803 "method": "bdev_nvme_attach_controller" 00:08:02.803 } 00:08:02.803 EOF 00:08:02.803 )") 00:08:02.803 15:03:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:08:02.803 15:03:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:08:02.803 15:03:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:08:02.803 15:03:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:02.803 "params": { 00:08:02.803 "name": "Nvme0", 00:08:02.803 "trtype": "tcp", 00:08:02.803 "traddr": "10.0.0.2", 00:08:02.803 "adrfam": "ipv4", 00:08:02.803 "trsvcid": "4420", 00:08:02.803 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:02.803 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:02.803 "hdgst": false, 00:08:02.803 "ddgst": false 00:08:02.803 }, 00:08:02.803 "method": "bdev_nvme_attach_controller" 00:08:02.803 }' 00:08:02.803 [2024-07-25 15:03:54.937016] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:08:02.803 [2024-07-25 15:03:54.937067] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77661 ] 00:08:02.803 EAL: No free 2048 kB hugepages reported on node 1 00:08:03.064 [2024-07-25 15:03:54.995895] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.064 [2024-07-25 15:03:55.060798] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.324 Running I/O for 10 seconds... 
00:08:03.585 15:03:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:03.586 15:03:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:08:03.586 15:03:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:08:03.586 15:03:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.586 15:03:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:03.586 15:03:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.586 15:03:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:03.586 15:03:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:08:03.586 15:03:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:08:03.586 15:03:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:08:03.586 15:03:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:08:03.586 15:03:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:08:03.586 15:03:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:08:03.586 15:03:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:03.586 15:03:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_get_iostat -b Nvme0n1 00:08:03.586 15:03:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:03.586 15:03:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.586 15:03:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:03.586 15:03:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.848 15:03:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=387 00:08:03.848 15:03:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 387 -ge 100 ']' 00:08:03.848 15:03:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:08:03.848 15:03:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:08:03.848 15:03:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:08:03.848 15:03:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:03.848 15:03:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.848 15:03:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:03.848 [2024-07-25 15:03:55.783404] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a832a0 is same with the state(5) to be set 00:08:03.848 [2024-07-25 15:03:55.783495] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a832a0 is same with the state(5) to be set 00:08:03.848 [2024-07-25 15:03:55.783504] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a832a0 is 
same with the state(5) to be set 00:08:03.848 [2024-07-25 15:03:55.783510] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a832a0 is same with the state(5) to be set 00:08:03.848 [2024-07-25 15:03:55.783517] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a832a0 is same with the state(5) to be set 00:08:03.848 [2024-07-25 15:03:55.783524] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a832a0 is same with the state(5) to be set 00:08:03.848 [2024-07-25 15:03:55.783531] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a832a0 is same with the state(5) to be set 00:08:03.848 [2024-07-25 15:03:55.783537] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a832a0 is same with the state(5) to be set 00:08:03.848 [2024-07-25 15:03:55.783543] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a832a0 is same with the state(5) to be set 00:08:03.848 [2024-07-25 15:03:55.783550] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a832a0 is same with the state(5) to be set 00:08:03.848 [2024-07-25 15:03:55.783556] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a832a0 is same with the state(5) to be set 00:08:03.848 [2024-07-25 15:03:55.783563] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a832a0 is same with the state(5) to be set 00:08:03.848 [2024-07-25 15:03:55.783569] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a832a0 is same with the state(5) to be set 00:08:03.848 [2024-07-25 15:03:55.783575] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a832a0 is same with the state(5) to be set 00:08:03.848 [2024-07-25 15:03:55.783582] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a832a0 is same with the state(5) to be 
set 00:08:03.848 [2024-07-25 15:03:55.783589] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a832a0 is same with the state(5) to be set 00:08:03.848 [2024-07-25 15:03:55.783596] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a832a0 is same with the state(5) to be set 00:08:03.848 [2024-07-25 15:03:55.783608] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a832a0 is same with the state(5) to be set 00:08:03.848 [2024-07-25 15:03:55.783615] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a832a0 is same with the state(5) to be set 00:08:03.848 [2024-07-25 15:03:55.783622] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a832a0 is same with the state(5) to be set 00:08:03.848 [2024-07-25 15:03:55.783628] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a832a0 is same with the state(5) to be set 00:08:03.848 [2024-07-25 15:03:55.783635] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a832a0 is same with the state(5) to be set 00:08:03.848 [2024-07-25 15:03:55.783642] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a832a0 is same with the state(5) to be set 00:08:03.848 [2024-07-25 15:03:55.783648] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a832a0 is same with the state(5) to be set 00:08:03.849 [2024-07-25 15:03:55.783654] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a832a0 is same with the state(5) to be set 00:08:03.849 [2024-07-25 15:03:55.783660] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a832a0 is same with the state(5) to be set 00:08:03.849 [2024-07-25 15:03:55.783667] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a832a0 is same with the state(5) to be set 00:08:03.849 [2024-07-25 
15:03:55.783673] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a832a0 is same with the state(5) to be set 00:08:03.849 [2024-07-25 15:03:55.783680] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a832a0 is same with the state(5) to be set 00:08:03.849 [2024-07-25 15:03:55.783686] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a832a0 is same with the state(5) to be set 00:08:03.849 [2024-07-25 15:03:55.783692] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a832a0 is same with the state(5) to be set 00:08:03.849 [2024-07-25 15:03:55.783699] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a832a0 is same with the state(5) to be set 00:08:03.849 [2024-07-25 15:03:55.783705] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a832a0 is same with the state(5) to be set 00:08:03.849 [2024-07-25 15:03:55.783711] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a832a0 is same with the state(5) to be set 00:08:03.849 [2024-07-25 15:03:55.783718] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a832a0 is same with the state(5) to be set 00:08:03.849 [2024-07-25 15:03:55.783724] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a832a0 is same with the state(5) to be set 00:08:03.849 [2024-07-25 15:03:55.783731] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a832a0 is same with the state(5) to be set 00:08:03.849 [2024-07-25 15:03:55.783738] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a832a0 is same with the state(5) to be set 00:08:03.849 [2024-07-25 15:03:55.783744] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a832a0 is same with the state(5) to be set 00:08:03.849 [2024-07-25 15:03:55.783751] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a832a0 is same with the state(5) to be set 00:08:03.849 [2024-07-25 15:03:55.783757] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a832a0 is same with the state(5) to be set 00:08:03.849 [2024-07-25 15:03:55.783764] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a832a0 is same with the state(5) to be set 00:08:03.849 [2024-07-25 15:03:55.783770] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a832a0 is same with the state(5) to be set 00:08:03.849 [2024-07-25 15:03:55.783777] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a832a0 is same with the state(5) to be set 00:08:03.849 [2024-07-25 15:03:55.783784] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a832a0 is same with the state(5) to be set 00:08:03.849 [2024-07-25 15:03:55.783791] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a832a0 is same with the state(5) to be set 00:08:03.849 [2024-07-25 15:03:55.783798] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a832a0 is same with the state(5) to be set 00:08:03.849 [2024-07-25 15:03:55.783804] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a832a0 is same with the state(5) to be set 00:08:03.849 [2024-07-25 15:03:55.783811] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a832a0 is same with the state(5) to be set 00:08:03.849 [2024-07-25 15:03:55.783817] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a832a0 is same with the state(5) to be set 00:08:03.849 [2024-07-25 15:03:55.783823] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a832a0 is same with the state(5) to be set 00:08:03.849 [2024-07-25 15:03:55.783829] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a832a0 is same with the state(5) to be set 00:08:03.849 [2024-07-25 15:03:55.783836] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a832a0 is same with the state(5) to be set 00:08:03.849 [2024-07-25 15:03:55.783842] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a832a0 is same with the state(5) to be set 00:08:03.849 [2024-07-25 15:03:55.783850] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a832a0 is same with the state(5) to be set 00:08:03.849 [2024-07-25 15:03:55.783856] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a832a0 is same with the state(5) to be set 00:08:03.849 [2024-07-25 15:03:55.783863] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a832a0 is same with the state(5) to be set 00:08:03.849 [2024-07-25 15:03:55.783870] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a832a0 is same with the state(5) to be set 00:08:03.849 [2024-07-25 15:03:55.783877] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a832a0 is same with the state(5) to be set 00:08:03.849 [2024-07-25 15:03:55.783883] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a832a0 is same with the state(5) to be set 00:08:03.849 [2024-07-25 15:03:55.783890] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a832a0 is same with the state(5) to be set 00:08:03.849 [2024-07-25 15:03:55.783896] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a832a0 is same with the state(5) to be set 00:08:03.849 [2024-07-25 15:03:55.783903] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a832a0 is same with the state(5) to be set 00:08:03.849 [2024-07-25 15:03:55.784430] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:49792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:03.849 [2024-07-25 15:03:55.784467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:03.849 [2024-07-25 15:03:55.784485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:49920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:03.849 [2024-07-25 15:03:55.784493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:03.849 [2024-07-25 15:03:55.784504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:50048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:03.849 [2024-07-25 15:03:55.784511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:03.849 [2024-07-25 15:03:55.784521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:50176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:03.849 [2024-07-25 15:03:55.784532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:03.849 [2024-07-25 15:03:55.784542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:50304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:03.849 [2024-07-25 15:03:55.784549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:03.849 [2024-07-25 15:03:55.784559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:50432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:03.849 [2024-07-25 15:03:55.784566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:03.849 [2024-07-25 15:03:55.784575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:57472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:03.849 [2024-07-25 15:03:55.784582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:03.849 [2024-07-25 15:03:55.784593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:57600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:03.849 [2024-07-25 15:03:55.784600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:03.849 [2024-07-25 15:03:55.784610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:57728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:03.849 [2024-07-25 15:03:55.784617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:03.849 [2024-07-25 15:03:55.784626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:50560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:03.849 [2024-07-25 15:03:55.784633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:03.849 [2024-07-25 15:03:55.784642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:57856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:03.849 [2024-07-25 15:03:55.784650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:03.849 [2024-07-25 15:03:55.784659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:50688 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:08:03.849 [2024-07-25 15:03:55.784667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:03.849 [2024-07-25 15:03:55.784676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:50816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:03.849 [2024-07-25 15:03:55.784683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:03.849 [2024-07-25 15:03:55.784692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:50944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:03.849 [2024-07-25 15:03:55.784699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:03.849 [2024-07-25 15:03:55.784709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:51072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:03.849 [2024-07-25 15:03:55.784716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:03.849 [2024-07-25 15:03:55.784725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:51200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:03.849 [2024-07-25 15:03:55.784732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:03.849 [2024-07-25 15:03:55.784743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:51328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:03.849 [2024-07-25 15:03:55.784750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:03.849 [2024-07-25 15:03:55.784759] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:51456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:03.850 [2024-07-25 15:03:55.784767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:03.850 [2024-07-25 15:03:55.784776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:51584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:03.850 [2024-07-25 15:03:55.784783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:03.850 [2024-07-25 15:03:55.784792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:51712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:03.850 [2024-07-25 15:03:55.784799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:03.850 [2024-07-25 15:03:55.784809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:51840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:03.850 [2024-07-25 15:03:55.784816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:03.850 [2024-07-25 15:03:55.784825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:51968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:03.850 [2024-07-25 15:03:55.784832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:03.850 [2024-07-25 15:03:55.784841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:52096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:03.850 [2024-07-25 15:03:55.784848] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:03.850 [2024-07-25 15:03:55.784857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:52224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:03.850 [2024-07-25 15:03:55.784864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:03.850 [2024-07-25 15:03:55.784873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:52352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:03.850 [2024-07-25 15:03:55.784880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:03.850 [2024-07-25 15:03:55.784889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:52480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:03.850 [2024-07-25 15:03:55.784896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:03.850 [2024-07-25 15:03:55.784906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:52608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:03.850 [2024-07-25 15:03:55.784913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:03.850 [2024-07-25 15:03:55.784923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:52736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:03.850 [2024-07-25 15:03:55.784930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:03.850 [2024-07-25 15:03:55.784939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:52864 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:03.850 [2024-07-25 15:03:55.784947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:03.850 [2024-07-25 15:03:55.784957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:52992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:03.850 [2024-07-25 15:03:55.784964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:03.850 [2024-07-25 15:03:55.784973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:53120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:03.850 [2024-07-25 15:03:55.784980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:03.850 [2024-07-25 15:03:55.784989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:53248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:03.850 [2024-07-25 15:03:55.784997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:03.850 [2024-07-25 15:03:55.785006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:53376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:03.850 [2024-07-25 15:03:55.785014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:03.850 [2024-07-25 15:03:55.785023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:53504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:03.850 [2024-07-25 15:03:55.785030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:03.850 [2024-07-25 
15:03:55.785039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:53632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:03.850 [2024-07-25 15:03:55.785046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:03.850 [2024-07-25 15:03:55.785056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:53760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:03.850 [2024-07-25 15:03:55.785063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:03.850 [2024-07-25 15:03:55.785072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:53888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:03.850 [2024-07-25 15:03:55.785079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:03.850 [2024-07-25 15:03:55.785088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:54016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:03.850 [2024-07-25 15:03:55.785096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:03.850 [2024-07-25 15:03:55.785105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:54144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:03.850 [2024-07-25 15:03:55.785112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:03.850 [2024-07-25 15:03:55.785121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:54272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:03.850 [2024-07-25 15:03:55.785128] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:03.850 [2024-07-25 15:03:55.785137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:54400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:03.850 [2024-07-25 15:03:55.785145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:03.850 [2024-07-25 15:03:55.785154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:54528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:03.850 [2024-07-25 15:03:55.785163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:03.850 [2024-07-25 15:03:55.785172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:54656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:03.850 [2024-07-25 15:03:55.785179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:03.850 [2024-07-25 15:03:55.785189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:54784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:03.850 [2024-07-25 15:03:55.785197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:03.850 [2024-07-25 15:03:55.785213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:54912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:03.850 [2024-07-25 15:03:55.785220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:03.850 [2024-07-25 15:03:55.785229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 
nsid:1 lba:55040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:03.850 [2024-07-25 15:03:55.785236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:03.850 [2024-07-25 15:03:55.785246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:55168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:03.850 [2024-07-25 15:03:55.785254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:03.850 [2024-07-25 15:03:55.785263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:55296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:03.850 [2024-07-25 15:03:55.785270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:03.850 [2024-07-25 15:03:55.785279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:55424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:03.850 [2024-07-25 15:03:55.785287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:03.850 [2024-07-25 15:03:55.785297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:55552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:03.850 [2024-07-25 15:03:55.785306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:03.850 [2024-07-25 15:03:55.785315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:55680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:03.850 [2024-07-25 15:03:55.785322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:08:03.850 [2024-07-25 15:03:55.785331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:55808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:03.850 [2024-07-25 15:03:55.785338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:03.851 [2024-07-25 15:03:55.785348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:55936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:03.851 [2024-07-25 15:03:55.785355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:03.851 [2024-07-25 15:03:55.785365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:56064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:03.851 [2024-07-25 15:03:55.785372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:03.851 [2024-07-25 15:03:55.785383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:56192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:03.851 [2024-07-25 15:03:55.785390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:03.851 [2024-07-25 15:03:55.785399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:56320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:03.851 [2024-07-25 15:03:55.785406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:03.851 [2024-07-25 15:03:55.785416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:56448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:03.851 [2024-07-25 15:03:55.785423] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:03.851 [2024-07-25 15:03:55.785432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:56576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:03.851 [2024-07-25 15:03:55.785439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:03.851 [2024-07-25 15:03:55.785449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:56704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:03.851 [2024-07-25 15:03:55.785455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:03.851 [2024-07-25 15:03:55.785465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:56832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:03.851 [2024-07-25 15:03:55.785472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:03.851 [2024-07-25 15:03:55.785481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:56960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:03.851 [2024-07-25 15:03:55.785487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:03.851 [2024-07-25 15:03:55.785497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:57088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:03.851 [2024-07-25 15:03:55.785504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:03.851 [2024-07-25 15:03:55.785514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:63 nsid:1 lba:57216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:03.851 [2024-07-25 15:03:55.785521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:03.851 [2024-07-25 15:03:55.785530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:57344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:03.851 [2024-07-25 15:03:55.785537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:03.851 [2024-07-25 15:03:55.785546] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x28e04f0 is same with the state(5) to be set 00:08:03.851 [2024-07-25 15:03:55.785587] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x28e04f0 was disconnected and freed. reset controller. 00:08:03.851 [2024-07-25 15:03:55.786806] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:08:03.851 15:03:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.851 task offset: 49792 on job bdev=Nvme0n1 fails 00:08:03.851 00:08:03.851 Latency(us) 00:08:03.851 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:03.851 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:03.851 Job: Nvme0n1 ended in about 0.45 seconds with error 00:08:03.851 Verification LBA range: start 0x0 length 0x400 00:08:03.851 Nvme0n1 : 0.45 857.53 53.60 141.08 0.00 62477.53 8574.29 54394.88 00:08:03.851 =================================================================================================================== 00:08:03.851 Total : 857.53 53.60 141.08 0.00 62477.53 8574.29 54394.88 00:08:03.851 [2024-07-25 15:03:55.788808] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:03.851 
[2024-07-25 15:03:55.788832] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24cf3b0 (9): Bad file descriptor 00:08:03.851 15:03:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:03.851 15:03:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.851 15:03:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:03.851 15:03:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.851 15:03:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:08:03.851 [2024-07-25 15:03:55.804658] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:08:04.793 15:03:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 77661 00:08:04.793 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (77661) - No such process 00:08:04.793 15:03:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:08:04.793 15:03:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:08:04.793 15:03:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:08:04.793 15:03:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:08:04.793 15:03:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 
00:08:04.793 15:03:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:08:04.793 15:03:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:04.793 15:03:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:04.793 { 00:08:04.793 "params": { 00:08:04.793 "name": "Nvme$subsystem", 00:08:04.793 "trtype": "$TEST_TRANSPORT", 00:08:04.793 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:04.793 "adrfam": "ipv4", 00:08:04.793 "trsvcid": "$NVMF_PORT", 00:08:04.793 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:04.793 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:04.793 "hdgst": ${hdgst:-false}, 00:08:04.793 "ddgst": ${ddgst:-false} 00:08:04.793 }, 00:08:04.793 "method": "bdev_nvme_attach_controller" 00:08:04.793 } 00:08:04.793 EOF 00:08:04.793 )") 00:08:04.793 15:03:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:08:04.793 15:03:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:08:04.793 15:03:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:08:04.793 15:03:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:04.793 "params": { 00:08:04.793 "name": "Nvme0", 00:08:04.793 "trtype": "tcp", 00:08:04.793 "traddr": "10.0.0.2", 00:08:04.793 "adrfam": "ipv4", 00:08:04.793 "trsvcid": "4420", 00:08:04.793 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:04.793 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:04.793 "hdgst": false, 00:08:04.793 "ddgst": false 00:08:04.793 }, 00:08:04.793 "method": "bdev_nvme_attach_controller" 00:08:04.793 }' 00:08:04.793 [2024-07-25 15:03:56.857301] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:08:04.793 [2024-07-25 15:03:56.857374] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78015 ] 00:08:04.793 EAL: No free 2048 kB hugepages reported on node 1 00:08:04.793 [2024-07-25 15:03:56.924452] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.054 [2024-07-25 15:03:56.988667] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.054 Running I/O for 1 seconds... 00:08:05.997 00:08:05.997 Latency(us) 00:08:05.997 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:05.997 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:05.997 Verification LBA range: start 0x0 length 0x400 00:08:05.997 Nvme0n1 : 1.01 1450.83 90.68 0.00 0.00 43387.66 11031.89 41943.04 00:08:05.997 =================================================================================================================== 00:08:05.997 Total : 1450.83 90.68 0.00 0.00 43387.66 11031.89 41943.04 00:08:06.257 15:03:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:08:06.257 15:03:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:08:06.257 15:03:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:08:06.257 15:03:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:08:06.257 15:03:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:08:06.257 15:03:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:08:06.257 15:03:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:08:06.257 15:03:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:06.257 15:03:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:08:06.257 15:03:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:06.257 15:03:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:06.257 rmmod nvme_tcp 00:08:06.257 rmmod nvme_fabrics 00:08:06.257 rmmod nvme_keyring 00:08:06.257 15:03:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:06.257 15:03:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:08:06.257 15:03:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:08:06.258 15:03:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 77361 ']' 00:08:06.258 15:03:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 77361 00:08:06.258 15:03:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 77361 ']' 00:08:06.258 15:03:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 77361 00:08:06.258 15:03:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:08:06.258 15:03:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:06.258 15:03:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77361 00:08:06.258 15:03:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 
00:08:06.258 15:03:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:06.258 15:03:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77361' 00:08:06.258 killing process with pid 77361 00:08:06.258 15:03:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 77361 00:08:06.258 15:03:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 77361 00:08:06.518 [2024-07-25 15:03:58.541944] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:08:06.518 15:03:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:06.518 15:03:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:06.518 15:03:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:06.518 15:03:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:06.518 15:03:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:06.518 15:03:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:06.518 15:03:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:06.518 15:03:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:09.064 15:04:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:09.064 15:04:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:08:09.064 00:08:09.064 real 0m14.044s 00:08:09.064 user 
0m22.206s 00:08:09.064 sys 0m6.229s 00:08:09.064 15:04:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:09.064 15:04:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:09.064 ************************************ 00:08:09.064 END TEST nvmf_host_management 00:08:09.064 ************************************ 00:08:09.064 15:04:00 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:09.064 15:04:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:09.064 15:04:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:09.064 15:04:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:09.064 ************************************ 00:08:09.064 START TEST nvmf_lvol 00:08:09.064 ************************************ 00:08:09.064 15:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:09.064 * Looking for test storage... 
00:08:09.064 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:09.064 15:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:09.064 15:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:08:09.064 15:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:09.064 15:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:09.064 15:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:09.065 15:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:09.065 15:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:09.065 15:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:09.065 15:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:09.065 15:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:09.065 15:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:09.065 15:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:09.065 15:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:09.065 15:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:09.065 15:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:09.065 15:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:09.065 
15:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:09.065 15:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:09.065 15:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:09.065 15:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:09.065 15:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:09.065 15:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:09.065 15:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.065 15:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.065 15:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.065 15:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:08:09.065 15:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.065 15:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:08:09.065 15:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:09.065 15:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:09.065 15:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:09.065 15:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:09.065 15:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:09.065 15:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:09.065 15:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:09.065 15:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:09.065 15:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:09.065 15:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:09.065 15:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:08:09.065 15:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:08:09.065 
15:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:09.065 15:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:08:09.065 15:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:09.065 15:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:09.065 15:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:09.065 15:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:09.065 15:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:09.065 15:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:09.065 15:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:09.065 15:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:09.065 15:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:09.065 15:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:09.065 15:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:08:09.065 15:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:15.658 15:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:15.658 15:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:08:15.658 15:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:15.658 15:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 
00:08:15.658 15:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:15.658 15:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:15.658 15:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:15.658 15:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:08:15.658 15:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:15.659 15:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:08:15.659 15:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:08:15.659 15:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:08:15.659 15:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:08:15.659 15:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:08:15.659 15:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:08:15.659 15:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:15.659 15:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:15.659 15:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:15.659 15:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:15.659 15:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:15.659 15:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:15.659 15:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 
00:08:15.659 15:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:15.659 15:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:15.659 15:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:15.659 15:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:15.659 15:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:15.659 15:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:15.659 15:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:15.659 15:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:15.659 15:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:15.659 15:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:15.659 15:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:15.659 15:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:15.659 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:15.659 15:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:15.659 15:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:15.659 15:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:15.659 15:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:15.659 15:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:15.659 
15:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:15.659 15:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:15.659 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:15.659 15:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:15.659 15:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:15.659 15:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:15.659 15:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:15.659 15:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:15.659 15:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:15.659 15:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:15.659 15:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:15.659 15:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:15.659 15:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:15.659 15:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:15.659 15:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:15.659 15:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:15.659 15:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:15.659 15:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:15.659 15:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@400 
-- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:15.659 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:15.659 15:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:15.659 15:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:15.659 15:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:15.659 15:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:15.659 15:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:15.659 15:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:15.659 15:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:15.659 15:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:15.659 15:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:15.659 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:15.659 15:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:15.659 15:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:15.659 15:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:08:15.659 15:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:15.659 15:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:15.659 15:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:15.659 15:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:15.659 15:04:07 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:15.659 15:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:15.659 15:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:15.659 15:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:15.659 15:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:15.659 15:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:15.659 15:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:15.659 15:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:15.659 15:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:15.659 15:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:15.659 15:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:15.659 15:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:15.659 15:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:15.659 15:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:15.659 15:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:15.659 15:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:15.659 15:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:08:15.659 15:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:15.659 15:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:15.659 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:15.659 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.650 ms 00:08:15.659 00:08:15.659 --- 10.0.0.2 ping statistics --- 00:08:15.659 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:15.659 rtt min/avg/max/mdev = 0.650/0.650/0.650/0.000 ms 00:08:15.659 15:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:15.659 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:15.659 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.461 ms 00:08:15.659 00:08:15.659 --- 10.0.0.1 ping statistics --- 00:08:15.659 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:15.659 rtt min/avg/max/mdev = 0.461/0.461/0.461/0.000 ms 00:08:15.659 15:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:15.659 15:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:08:15.659 15:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:15.659 15:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:15.659 15:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:15.659 15:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:15.659 15:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:15.659 15:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:15.659 15:04:07 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:15.659 15:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:08:15.659 15:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:15.659 15:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:15.659 15:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:15.659 15:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=82358 00:08:15.659 15:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 82358 00:08:15.660 15:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:08:15.660 15:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 82358 ']' 00:08:15.660 15:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:15.660 15:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:15.660 15:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:15.660 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:15.660 15:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:15.660 15:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:15.660 [2024-07-25 15:04:07.727015] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:08:15.660 [2024-07-25 15:04:07.727068] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:15.660 EAL: No free 2048 kB hugepages reported on node 1 00:08:15.660 [2024-07-25 15:04:07.796385] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:15.921 [2024-07-25 15:04:07.868443] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:15.921 [2024-07-25 15:04:07.868482] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:15.921 [2024-07-25 15:04:07.868489] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:15.921 [2024-07-25 15:04:07.868495] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:15.921 [2024-07-25 15:04:07.868501] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:15.921 [2024-07-25 15:04:07.868654] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:15.921 [2024-07-25 15:04:07.868765] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:15.921 [2024-07-25 15:04:07.868768] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.493 15:04:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:16.493 15:04:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:08:16.493 15:04:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:16.493 15:04:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:16.493 15:04:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:16.493 15:04:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:16.493 15:04:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:16.753 [2024-07-25 15:04:08.685075] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:16.753 15:04:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:16.753 15:04:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:08:16.753 15:04:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:17.013 15:04:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:08:17.013 15:04:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:08:17.273 15:04:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:08:17.273 15:04:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=f6aaed59-454c-41c1-be4a-50c7b3666f54 00:08:17.273 15:04:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u f6aaed59-454c-41c1-be4a-50c7b3666f54 lvol 20 00:08:17.534 15:04:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=71a7e537-9bfb-4658-b7eb-7355f5d41618 00:08:17.534 15:04:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:17.794 15:04:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 71a7e537-9bfb-4658-b7eb-7355f5d41618 00:08:17.794 15:04:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:18.054 [2024-07-25 15:04:10.081460] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:18.054 15:04:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:18.314 15:04:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=83059 00:08:18.314 15:04:10 nvmf_tcp.nvmf_target_core.nvmf_lvol 
-- target/nvmf_lvol.sh@44 -- # sleep 1 00:08:18.314 15:04:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:08:18.314 EAL: No free 2048 kB hugepages reported on node 1 00:08:19.256 15:04:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 71a7e537-9bfb-4658-b7eb-7355f5d41618 MY_SNAPSHOT 00:08:19.516 15:04:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=0ef12d5d-3717-4a52-be33-3b5d6b33f8f0 00:08:19.516 15:04:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 71a7e537-9bfb-4658-b7eb-7355f5d41618 30 00:08:19.516 15:04:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 0ef12d5d-3717-4a52-be33-3b5d6b33f8f0 MY_CLONE 00:08:19.777 15:04:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=72745a7e-41d8-4218-9f1d-0622b3eb2590 00:08:19.777 15:04:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 72745a7e-41d8-4218-9f1d-0622b3eb2590 00:08:20.037 15:04:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 83059 00:08:30.040 Initializing NVMe Controllers 00:08:30.040 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:08:30.040 Controller IO queue size 128, less than required. 00:08:30.040 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:08:30.040 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:08:30.040 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:08:30.040 Initialization complete. Launching workers. 00:08:30.040 ======================================================== 00:08:30.040 Latency(us) 00:08:30.040 Device Information : IOPS MiB/s Average min max 00:08:30.040 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 11171.30 43.64 11463.08 1500.16 42061.96 00:08:30.040 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 18267.10 71.36 7008.19 1225.48 44462.69 00:08:30.040 ======================================================== 00:08:30.040 Total : 29438.39 114.99 8698.73 1225.48 44462.69 00:08:30.040 00:08:30.040 15:04:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:30.040 15:04:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 71a7e537-9bfb-4658-b7eb-7355f5d41618 00:08:30.040 15:04:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f6aaed59-454c-41c1-be4a-50c7b3666f54 00:08:30.040 15:04:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:08:30.040 15:04:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:08:30.040 15:04:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:08:30.040 15:04:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:30.040 15:04:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:08:30.040 15:04:21 nvmf_tcp.nvmf_target_core.nvmf_lvol 
-- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:30.040 15:04:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:08:30.040 15:04:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:30.040 15:04:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:30.040 rmmod nvme_tcp 00:08:30.040 rmmod nvme_fabrics 00:08:30.040 rmmod nvme_keyring 00:08:30.040 15:04:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:30.040 15:04:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:08:30.040 15:04:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:08:30.040 15:04:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 82358 ']' 00:08:30.040 15:04:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 82358 00:08:30.040 15:04:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 82358 ']' 00:08:30.040 15:04:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 82358 00:08:30.040 15:04:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:08:30.040 15:04:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:30.040 15:04:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82358 00:08:30.040 15:04:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:30.040 15:04:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:30.040 15:04:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82358' 00:08:30.040 killing process with pid 82358 00:08:30.040 15:04:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@969 -- # kill 82358 00:08:30.040 15:04:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 82358 00:08:30.040 15:04:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:30.040 15:04:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:30.040 15:04:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:30.040 15:04:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:30.040 15:04:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:30.040 15:04:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:30.040 15:04:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:30.040 15:04:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:31.954 15:04:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:31.954 00:08:31.954 real 0m22.912s 00:08:31.954 user 1m3.909s 00:08:31.954 sys 0m7.616s 00:08:31.954 15:04:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:31.954 15:04:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:31.954 ************************************ 00:08:31.954 END TEST nvmf_lvol 00:08:31.954 ************************************ 00:08:31.954 15:04:23 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:31.954 15:04:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:31.954 15:04:23 nvmf_tcp.nvmf_target_core -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:08:31.954 15:04:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:31.954 ************************************ 00:08:31.954 START TEST nvmf_lvs_grow 00:08:31.954 ************************************ 00:08:31.954 15:04:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:31.954 * Looking for test storage... 00:08:31.954 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:31.954 15:04:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:31.954 15:04:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:08:31.954 15:04:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:31.954 15:04:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:31.954 15:04:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:31.954 15:04:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:31.954 15:04:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:31.954 15:04:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:31.954 15:04:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:31.954 15:04:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:31.954 15:04:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:31.954 15:04:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme 
gen-hostnqn 00:08:31.954 15:04:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:31.954 15:04:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:31.954 15:04:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:31.954 15:04:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:31.954 15:04:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:31.954 15:04:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:31.954 15:04:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:31.954 15:04:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:31.954 15:04:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:31.954 15:04:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:31.954 15:04:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.954 15:04:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.954 15:04:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.954 15:04:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- paths/export.sh@5 -- # export PATH 00:08:31.954 15:04:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.954 15:04:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:08:31.954 15:04:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:31.954 15:04:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:31.954 15:04:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:31.954 15:04:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:31.954 15:04:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:31.954 15:04:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:31.954 15:04:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:31.954 15:04:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:31.954 15:04:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:31.954 15:04:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:31.954 15:04:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:08:31.954 15:04:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:31.954 15:04:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:31.954 15:04:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:31.954 15:04:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:31.954 15:04:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:31.954 15:04:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:31.954 15:04:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:31.954 15:04:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:31.954 15:04:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:31.954 15:04:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:31.954 15:04:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:08:31.954 15:04:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:38.539 15:04:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:38.539 15:04:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:08:38.539 15:04:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:38.539 15:04:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:38.539 15:04:30 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:38.539 15:04:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:38.539 15:04:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:38.539 15:04:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:08:38.539 15:04:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:38.539 15:04:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:08:38.539 15:04:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:08:38.539 15:04:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:08:38.539 15:04:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:08:38.539 15:04:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:08:38.539 15:04:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:08:38.539 15:04:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:38.539 15:04:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:38.539 15:04:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:38.539 15:04:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:38.539 15:04:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:38.539 15:04:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:38.539 15:04:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@312 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:38.539 15:04:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:38.539 15:04:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:38.540 15:04:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:38.540 15:04:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:38.540 15:04:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:38.540 15:04:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:38.540 15:04:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:38.540 15:04:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:38.540 15:04:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:38.540 15:04:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:38.540 15:04:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:38.540 15:04:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:38.540 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:38.540 15:04:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:38.540 15:04:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:38.540 15:04:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:38.540 15:04:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:38.540 
15:04:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:38.540 15:04:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:38.540 15:04:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:38.540 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:38.540 15:04:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:38.540 15:04:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:38.540 15:04:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:38.540 15:04:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:38.540 15:04:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:38.540 15:04:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:38.540 15:04:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:38.540 15:04:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:38.540 15:04:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:38.540 15:04:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:38.540 15:04:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:38.540 15:04:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:38.540 15:04:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:38.540 15:04:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:38.540 15:04:30 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:38.540 15:04:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:38.540 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:38.540 15:04:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:38.540 15:04:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:38.540 15:04:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:38.540 15:04:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:38.540 15:04:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:38.540 15:04:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:38.540 15:04:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:38.540 15:04:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:38.540 15:04:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:38.540 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:38.540 15:04:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:38.540 15:04:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:38.540 15:04:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:08:38.540 15:04:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:38.540 15:04:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
00:08:38.540 15:04:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:38.540 15:04:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:38.540 15:04:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:38.540 15:04:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:38.540 15:04:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:38.540 15:04:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:38.540 15:04:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:38.540 15:04:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:38.540 15:04:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:38.540 15:04:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:38.540 15:04:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:38.540 15:04:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:38.540 15:04:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:38.540 15:04:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:38.801 15:04:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:38.801 15:04:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:38.801 15:04:30 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:38.801 15:04:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:38.801 15:04:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:38.801 15:04:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:38.801 15:04:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:38.801 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:38.801 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.636 ms 00:08:38.801 00:08:38.801 --- 10.0.0.2 ping statistics --- 00:08:38.801 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:38.801 rtt min/avg/max/mdev = 0.636/0.636/0.636/0.000 ms 00:08:38.801 15:04:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:38.801 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:38.801 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.349 ms 00:08:38.801 00:08:38.801 --- 10.0.0.1 ping statistics --- 00:08:38.801 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:38.801 rtt min/avg/max/mdev = 0.349/0.349/0.349/0.000 ms 00:08:38.801 15:04:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:38.801 15:04:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:08:38.801 15:04:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:38.801 15:04:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:38.801 15:04:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:38.801 15:04:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:38.801 15:04:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:38.801 15:04:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:38.801 15:04:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:39.062 15:04:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:08:39.062 15:04:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:39.062 15:04:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:39.062 15:04:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:39.062 15:04:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=89411 00:08:39.062 15:04:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 89411 00:08:39.062 15:04:31 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:39.062 15:04:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 89411 ']' 00:08:39.062 15:04:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:39.062 15:04:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:39.062 15:04:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:39.062 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:39.062 15:04:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:39.062 15:04:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:39.062 [2024-07-25 15:04:31.068624] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:08:39.062 [2024-07-25 15:04:31.068673] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:39.062 EAL: No free 2048 kB hugepages reported on node 1 00:08:39.062 [2024-07-25 15:04:31.134439] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:39.062 [2024-07-25 15:04:31.198279] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:39.062 [2024-07-25 15:04:31.198316] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:39.062 [2024-07-25 15:04:31.198323] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:39.062 [2024-07-25 15:04:31.198330] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:39.062 [2024-07-25 15:04:31.198335] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:39.062 [2024-07-25 15:04:31.198359] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:40.005 15:04:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:40.005 15:04:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:08:40.005 15:04:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:40.005 15:04:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:40.005 15:04:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:40.005 15:04:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:40.005 15:04:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:40.005 [2024-07-25 15:04:32.029264] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:40.005 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:08:40.005 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:40.005 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:40.005 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
common/autotest_common.sh@10 -- # set +x 00:08:40.005 ************************************ 00:08:40.005 START TEST lvs_grow_clean 00:08:40.005 ************************************ 00:08:40.005 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # lvs_grow 00:08:40.005 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:40.005 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:40.005 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:40.005 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:40.005 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:40.005 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:40.005 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:40.005 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:40.005 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:40.266 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:40.266 15:04:32 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:40.266 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=e684009a-a8b8-4be6-b5a6-1a7ebd71147f 00:08:40.266 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e684009a-a8b8-4be6-b5a6-1a7ebd71147f 00:08:40.266 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:40.527 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:40.527 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:40.527 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u e684009a-a8b8-4be6-b5a6-1a7ebd71147f lvol 150 00:08:40.788 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=dbf2973e-47fb-4526-b35d-a49c1194dd61 00:08:40.788 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:40.788 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:40.788 [2024-07-25 15:04:32.893666] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:40.788 [2024-07-25 15:04:32.893719] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:40.788 true 00:08:40.788 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e684009a-a8b8-4be6-b5a6-1a7ebd71147f 00:08:40.788 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:41.109 15:04:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:41.109 15:04:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:41.109 15:04:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 dbf2973e-47fb-4526-b35d-a49c1194dd61 00:08:41.369 15:04:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:41.369 [2024-07-25 15:04:33.491517] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:41.370 15:04:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:41.630 15:04:33 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=89823 00:08:41.630 15:04:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:41.630 15:04:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 89823 /var/tmp/bdevperf.sock 00:08:41.630 15:04:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 89823 ']' 00:08:41.630 15:04:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:41.630 15:04:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:41.630 15:04:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:41.630 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:41.630 15:04:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:41.630 15:04:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:41.630 15:04:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:41.630 [2024-07-25 15:04:33.705873] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:08:41.630 [2024-07-25 15:04:33.705915] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89823 ] 00:08:41.630 EAL: No free 2048 kB hugepages reported on node 1 00:08:41.630 [2024-07-25 15:04:33.773357] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:41.891 [2024-07-25 15:04:33.837507] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:42.462 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:42.462 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:08:42.462 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:42.723 Nvme0n1 00:08:42.723 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:42.723 [ 00:08:42.723 { 00:08:42.723 "name": "Nvme0n1", 00:08:42.723 "aliases": [ 00:08:42.723 "dbf2973e-47fb-4526-b35d-a49c1194dd61" 00:08:42.723 ], 00:08:42.723 "product_name": "NVMe disk", 00:08:42.723 "block_size": 4096, 00:08:42.723 "num_blocks": 38912, 00:08:42.723 "uuid": "dbf2973e-47fb-4526-b35d-a49c1194dd61", 00:08:42.723 "assigned_rate_limits": { 00:08:42.723 "rw_ios_per_sec": 0, 00:08:42.723 "rw_mbytes_per_sec": 0, 00:08:42.723 "r_mbytes_per_sec": 0, 00:08:42.723 "w_mbytes_per_sec": 0 00:08:42.723 }, 00:08:42.723 "claimed": false, 00:08:42.723 "zoned": false, 00:08:42.723 
"supported_io_types": { 00:08:42.723 "read": true, 00:08:42.723 "write": true, 00:08:42.723 "unmap": true, 00:08:42.723 "flush": true, 00:08:42.723 "reset": true, 00:08:42.723 "nvme_admin": true, 00:08:42.723 "nvme_io": true, 00:08:42.723 "nvme_io_md": false, 00:08:42.723 "write_zeroes": true, 00:08:42.723 "zcopy": false, 00:08:42.723 "get_zone_info": false, 00:08:42.723 "zone_management": false, 00:08:42.723 "zone_append": false, 00:08:42.723 "compare": true, 00:08:42.723 "compare_and_write": true, 00:08:42.723 "abort": true, 00:08:42.723 "seek_hole": false, 00:08:42.723 "seek_data": false, 00:08:42.723 "copy": true, 00:08:42.723 "nvme_iov_md": false 00:08:42.723 }, 00:08:42.723 "memory_domains": [ 00:08:42.723 { 00:08:42.723 "dma_device_id": "system", 00:08:42.723 "dma_device_type": 1 00:08:42.723 } 00:08:42.723 ], 00:08:42.723 "driver_specific": { 00:08:42.723 "nvme": [ 00:08:42.723 { 00:08:42.723 "trid": { 00:08:42.723 "trtype": "TCP", 00:08:42.723 "adrfam": "IPv4", 00:08:42.723 "traddr": "10.0.0.2", 00:08:42.723 "trsvcid": "4420", 00:08:42.723 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:42.723 }, 00:08:42.723 "ctrlr_data": { 00:08:42.723 "cntlid": 1, 00:08:42.723 "vendor_id": "0x8086", 00:08:42.723 "model_number": "SPDK bdev Controller", 00:08:42.723 "serial_number": "SPDK0", 00:08:42.723 "firmware_revision": "24.09", 00:08:42.723 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:42.723 "oacs": { 00:08:42.723 "security": 0, 00:08:42.723 "format": 0, 00:08:42.723 "firmware": 0, 00:08:42.723 "ns_manage": 0 00:08:42.723 }, 00:08:42.723 "multi_ctrlr": true, 00:08:42.723 "ana_reporting": false 00:08:42.723 }, 00:08:42.723 "vs": { 00:08:42.723 "nvme_version": "1.3" 00:08:42.723 }, 00:08:42.723 "ns_data": { 00:08:42.723 "id": 1, 00:08:42.723 "can_share": true 00:08:42.723 } 00:08:42.723 } 00:08:42.723 ], 00:08:42.723 "mp_policy": "active_passive" 00:08:42.723 } 00:08:42.723 } 00:08:42.723 ] 00:08:42.723 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:42.723 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=90143 00:08:42.723 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:42.723 Running I/O for 10 seconds... 00:08:44.107 Latency(us) 00:08:44.108 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:44.108 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:44.108 Nvme0n1 : 1.00 17448.00 68.16 0.00 0.00 0.00 0.00 0.00 00:08:44.108 =================================================================================================================== 00:08:44.108 Total : 17448.00 68.16 0.00 0.00 0.00 0.00 0.00 00:08:44.108 00:08:44.679 15:04:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u e684009a-a8b8-4be6-b5a6-1a7ebd71147f 00:08:44.940 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:44.940 Nvme0n1 : 2.00 17564.00 68.61 0.00 0.00 0.00 0.00 0.00 00:08:44.940 =================================================================================================================== 00:08:44.940 Total : 17564.00 68.61 0.00 0.00 0.00 0.00 0.00 00:08:44.940 00:08:44.940 true 00:08:44.940 15:04:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e684009a-a8b8-4be6-b5a6-1a7ebd71147f 00:08:44.940 15:04:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:45.201 15:04:37 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:45.201 15:04:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:45.201 15:04:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 90143 00:08:45.771 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:45.771 Nvme0n1 : 3.00 17610.67 68.79 0.00 0.00 0.00 0.00 0.00 00:08:45.771 =================================================================================================================== 00:08:45.771 Total : 17610.67 68.79 0.00 0.00 0.00 0.00 0.00 00:08:45.771 00:08:47.158 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:47.158 Nvme0n1 : 4.00 17646.00 68.93 0.00 0.00 0.00 0.00 0.00 00:08:47.158 =================================================================================================================== 00:08:47.158 Total : 17646.00 68.93 0.00 0.00 0.00 0.00 0.00 00:08:47.158 00:08:47.728 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:47.728 Nvme0n1 : 5.00 17672.00 69.03 0.00 0.00 0.00 0.00 0.00 00:08:47.728 =================================================================================================================== 00:08:47.728 Total : 17672.00 69.03 0.00 0.00 0.00 0.00 0.00 00:08:47.728 00:08:49.112 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:49.112 Nvme0n1 : 6.00 17685.33 69.08 0.00 0.00 0.00 0.00 0.00 00:08:49.112 =================================================================================================================== 00:08:49.112 Total : 17685.33 69.08 0.00 0.00 0.00 0.00 0.00 00:08:49.112 00:08:50.054 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:50.054 Nvme0n1 : 7.00 17702.86 69.15 0.00 0.00 0.00 0.00 0.00 00:08:50.054 
=================================================================================================================== 00:08:50.054 Total : 17702.86 69.15 0.00 0.00 0.00 0.00 0.00 00:08:50.054 00:08:50.996 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:50.996 Nvme0n1 : 8.00 17718.00 69.21 0.00 0.00 0.00 0.00 0.00 00:08:50.996 =================================================================================================================== 00:08:50.996 Total : 17718.00 69.21 0.00 0.00 0.00 0.00 0.00 00:08:50.996 00:08:51.937 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:51.937 Nvme0n1 : 9.00 17732.44 69.27 0.00 0.00 0.00 0.00 0.00 00:08:51.937 =================================================================================================================== 00:08:51.937 Total : 17732.44 69.27 0.00 0.00 0.00 0.00 0.00 00:08:51.937 00:08:52.877 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:52.877 Nvme0n1 : 10.00 17743.20 69.31 0.00 0.00 0.00 0.00 0.00 00:08:52.877 =================================================================================================================== 00:08:52.877 Total : 17743.20 69.31 0.00 0.00 0.00 0.00 0.00 00:08:52.877 00:08:52.877 00:08:52.877 Latency(us) 00:08:52.877 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:52.877 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:52.877 Nvme0n1 : 10.01 17743.46 69.31 0.00 0.00 7208.87 2594.13 13544.11 00:08:52.877 =================================================================================================================== 00:08:52.877 Total : 17743.46 69.31 0.00 0.00 7208.87 2594.13 13544.11 00:08:52.877 0 00:08:52.877 15:04:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 89823 00:08:52.877 15:04:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@950 -- # '[' -z 89823 ']' 00:08:52.877 15:04:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 89823 00:08:52.877 15:04:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:08:52.877 15:04:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:52.877 15:04:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 89823 00:08:52.877 15:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:52.877 15:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:52.877 15:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 89823' 00:08:52.877 killing process with pid 89823 00:08:52.877 15:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 89823 00:08:52.877 Received shutdown signal, test time was about 10.000000 seconds 00:08:52.877 00:08:52.877 Latency(us) 00:08:52.877 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:52.877 =================================================================================================================== 00:08:52.877 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:52.877 15:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 89823 00:08:53.137 15:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:53.137 15:04:45 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:53.397 15:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e684009a-a8b8-4be6-b5a6-1a7ebd71147f 00:08:53.397 15:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:53.657 15:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:53.657 15:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:08:53.657 15:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:53.657 [2024-07-25 15:04:45.786782] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:53.657 15:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e684009a-a8b8-4be6-b5a6-1a7ebd71147f 00:08:53.657 15:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:08:53.657 15:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e684009a-a8b8-4be6-b5a6-1a7ebd71147f 00:08:53.657 15:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:53.657 15:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:53.657 15:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:53.657 15:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:53.657 15:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:53.918 15:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:53.918 15:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:53.918 15:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:53.918 15:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e684009a-a8b8-4be6-b5a6-1a7ebd71147f 00:08:53.918 request: 00:08:53.918 { 00:08:53.918 "uuid": "e684009a-a8b8-4be6-b5a6-1a7ebd71147f", 00:08:53.918 "method": "bdev_lvol_get_lvstores", 00:08:53.918 "req_id": 1 00:08:53.918 } 00:08:53.918 Got JSON-RPC error response 00:08:53.918 response: 00:08:53.918 { 00:08:53.918 "code": -19, 00:08:53.918 "message": "No such device" 00:08:53.918 } 00:08:53.918 15:04:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:08:53.918 15:04:46 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:53.918 15:04:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:53.918 15:04:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:53.918 15:04:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:54.178 aio_bdev 00:08:54.178 15:04:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev dbf2973e-47fb-4526-b35d-a49c1194dd61 00:08:54.179 15:04:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=dbf2973e-47fb-4526-b35d-a49c1194dd61 00:08:54.179 15:04:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:54.179 15:04:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:08:54.179 15:04:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:54.179 15:04:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:54.179 15:04:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:54.179 15:04:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b dbf2973e-47fb-4526-b35d-a49c1194dd61 -t 2000 00:08:54.439 [ 00:08:54.439 { 
00:08:54.439 "name": "dbf2973e-47fb-4526-b35d-a49c1194dd61", 00:08:54.439 "aliases": [ 00:08:54.439 "lvs/lvol" 00:08:54.439 ], 00:08:54.439 "product_name": "Logical Volume", 00:08:54.439 "block_size": 4096, 00:08:54.439 "num_blocks": 38912, 00:08:54.439 "uuid": "dbf2973e-47fb-4526-b35d-a49c1194dd61", 00:08:54.439 "assigned_rate_limits": { 00:08:54.439 "rw_ios_per_sec": 0, 00:08:54.439 "rw_mbytes_per_sec": 0, 00:08:54.439 "r_mbytes_per_sec": 0, 00:08:54.439 "w_mbytes_per_sec": 0 00:08:54.439 }, 00:08:54.439 "claimed": false, 00:08:54.439 "zoned": false, 00:08:54.439 "supported_io_types": { 00:08:54.439 "read": true, 00:08:54.439 "write": true, 00:08:54.439 "unmap": true, 00:08:54.439 "flush": false, 00:08:54.439 "reset": true, 00:08:54.439 "nvme_admin": false, 00:08:54.439 "nvme_io": false, 00:08:54.439 "nvme_io_md": false, 00:08:54.439 "write_zeroes": true, 00:08:54.439 "zcopy": false, 00:08:54.439 "get_zone_info": false, 00:08:54.439 "zone_management": false, 00:08:54.439 "zone_append": false, 00:08:54.439 "compare": false, 00:08:54.439 "compare_and_write": false, 00:08:54.439 "abort": false, 00:08:54.439 "seek_hole": true, 00:08:54.439 "seek_data": true, 00:08:54.439 "copy": false, 00:08:54.439 "nvme_iov_md": false 00:08:54.439 }, 00:08:54.439 "driver_specific": { 00:08:54.439 "lvol": { 00:08:54.439 "lvol_store_uuid": "e684009a-a8b8-4be6-b5a6-1a7ebd71147f", 00:08:54.439 "base_bdev": "aio_bdev", 00:08:54.439 "thin_provision": false, 00:08:54.439 "num_allocated_clusters": 38, 00:08:54.439 "snapshot": false, 00:08:54.439 "clone": false, 00:08:54.439 "esnap_clone": false 00:08:54.439 } 00:08:54.439 } 00:08:54.439 } 00:08:54.439 ] 00:08:54.439 15:04:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:08:54.439 15:04:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
e684009a-a8b8-4be6-b5a6-1a7ebd71147f 00:08:54.439 15:04:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:54.439 15:04:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:54.439 15:04:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e684009a-a8b8-4be6-b5a6-1a7ebd71147f 00:08:54.439 15:04:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:54.699 15:04:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:54.699 15:04:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete dbf2973e-47fb-4526-b35d-a49c1194dd61 00:08:54.959 15:04:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e684009a-a8b8-4be6-b5a6-1a7ebd71147f 00:08:54.959 15:04:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:55.219 15:04:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:55.219 00:08:55.219 real 0m15.225s 00:08:55.219 user 0m14.854s 00:08:55.219 sys 0m1.324s 00:08:55.219 15:04:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:55.219 15:04:47 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:55.219 ************************************ 00:08:55.219 END TEST lvs_grow_clean 00:08:55.220 ************************************ 00:08:55.220 15:04:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:08:55.220 15:04:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:55.220 15:04:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:55.220 15:04:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:55.220 ************************************ 00:08:55.220 START TEST lvs_grow_dirty 00:08:55.220 ************************************ 00:08:55.220 15:04:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:08:55.220 15:04:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:55.220 15:04:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:55.220 15:04:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:55.220 15:04:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:55.220 15:04:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:55.220 15:04:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:55.220 15:04:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:55.220 15:04:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:55.220 15:04:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:55.480 15:04:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:55.481 15:04:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:55.741 15:04:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=e6e75e41-c881-4a71-8ef5-f87d27b925ba 00:08:55.741 15:04:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e6e75e41-c881-4a71-8ef5-f87d27b925ba 00:08:55.741 15:04:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:55.741 15:04:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:55.741 15:04:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:55.741 15:04:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 
e6e75e41-c881-4a71-8ef5-f87d27b925ba lvol 150 00:08:56.008 15:04:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=a7999b18-8ce0-43b6-8f5d-12a8726dda0b 00:08:56.008 15:04:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:56.008 15:04:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:56.272 [2024-07-25 15:04:48.209796] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:56.272 [2024-07-25 15:04:48.209850] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:56.272 true 00:08:56.272 15:04:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e6e75e41-c881-4a71-8ef5-f87d27b925ba 00:08:56.272 15:04:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:56.272 15:04:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:56.272 15:04:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:56.533 15:04:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 
a7999b18-8ce0-43b6-8f5d-12a8726dda0b 00:08:56.533 15:04:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:56.794 [2024-07-25 15:04:48.827704] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:56.794 15:04:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:56.794 15:04:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=92902 00:08:56.794 15:04:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:56.794 15:04:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:56.794 15:04:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 92902 /var/tmp/bdevperf.sock 00:08:56.794 15:04:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 92902 ']' 00:08:56.794 15:04:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:56.794 15:04:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:56.794 15:04:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:56.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:56.794 15:04:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:57.112 15:04:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:57.112 [2024-07-25 15:04:49.030660] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:08:57.112 [2024-07-25 15:04:49.030712] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92902 ] 00:08:57.112 EAL: No free 2048 kB hugepages reported on node 1 00:08:57.112 [2024-07-25 15:04:49.106790] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:57.112 [2024-07-25 15:04:49.160605] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:57.687 15:04:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:57.687 15:04:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:08:57.687 15:04:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:58.259 Nvme0n1 00:08:58.259 15:04:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:58.259 [ 00:08:58.259 { 00:08:58.259 "name": "Nvme0n1", 00:08:58.259 "aliases": [ 
00:08:58.259 "a7999b18-8ce0-43b6-8f5d-12a8726dda0b" 00:08:58.259 ], 00:08:58.259 "product_name": "NVMe disk", 00:08:58.259 "block_size": 4096, 00:08:58.259 "num_blocks": 38912, 00:08:58.259 "uuid": "a7999b18-8ce0-43b6-8f5d-12a8726dda0b", 00:08:58.259 "assigned_rate_limits": { 00:08:58.259 "rw_ios_per_sec": 0, 00:08:58.259 "rw_mbytes_per_sec": 0, 00:08:58.259 "r_mbytes_per_sec": 0, 00:08:58.259 "w_mbytes_per_sec": 0 00:08:58.259 }, 00:08:58.259 "claimed": false, 00:08:58.259 "zoned": false, 00:08:58.259 "supported_io_types": { 00:08:58.259 "read": true, 00:08:58.259 "write": true, 00:08:58.259 "unmap": true, 00:08:58.259 "flush": true, 00:08:58.259 "reset": true, 00:08:58.259 "nvme_admin": true, 00:08:58.259 "nvme_io": true, 00:08:58.259 "nvme_io_md": false, 00:08:58.259 "write_zeroes": true, 00:08:58.259 "zcopy": false, 00:08:58.259 "get_zone_info": false, 00:08:58.259 "zone_management": false, 00:08:58.259 "zone_append": false, 00:08:58.259 "compare": true, 00:08:58.259 "compare_and_write": true, 00:08:58.259 "abort": true, 00:08:58.259 "seek_hole": false, 00:08:58.259 "seek_data": false, 00:08:58.259 "copy": true, 00:08:58.259 "nvme_iov_md": false 00:08:58.259 }, 00:08:58.259 "memory_domains": [ 00:08:58.259 { 00:08:58.259 "dma_device_id": "system", 00:08:58.259 "dma_device_type": 1 00:08:58.259 } 00:08:58.259 ], 00:08:58.259 "driver_specific": { 00:08:58.259 "nvme": [ 00:08:58.259 { 00:08:58.259 "trid": { 00:08:58.259 "trtype": "TCP", 00:08:58.259 "adrfam": "IPv4", 00:08:58.259 "traddr": "10.0.0.2", 00:08:58.259 "trsvcid": "4420", 00:08:58.259 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:58.259 }, 00:08:58.259 "ctrlr_data": { 00:08:58.259 "cntlid": 1, 00:08:58.259 "vendor_id": "0x8086", 00:08:58.259 "model_number": "SPDK bdev Controller", 00:08:58.259 "serial_number": "SPDK0", 00:08:58.259 "firmware_revision": "24.09", 00:08:58.259 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:58.259 "oacs": { 00:08:58.259 "security": 0, 00:08:58.259 "format": 0, 00:08:58.259 
"firmware": 0, 00:08:58.259 "ns_manage": 0 00:08:58.259 }, 00:08:58.259 "multi_ctrlr": true, 00:08:58.259 "ana_reporting": false 00:08:58.259 }, 00:08:58.259 "vs": { 00:08:58.259 "nvme_version": "1.3" 00:08:58.259 }, 00:08:58.259 "ns_data": { 00:08:58.259 "id": 1, 00:08:58.259 "can_share": true 00:08:58.259 } 00:08:58.259 } 00:08:58.259 ], 00:08:58.259 "mp_policy": "active_passive" 00:08:58.259 } 00:08:58.259 } 00:08:58.259 ] 00:08:58.259 15:04:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=93233 00:08:58.259 15:04:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:58.259 15:04:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:58.259 Running I/O for 10 seconds... 00:08:59.645 Latency(us) 00:08:59.645 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:59.645 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:59.645 Nvme0n1 : 1.00 17414.00 68.02 0.00 0.00 0.00 0.00 0.00 00:08:59.645 =================================================================================================================== 00:08:59.645 Total : 17414.00 68.02 0.00 0.00 0.00 0.00 0.00 00:08:59.645 00:09:00.216 15:04:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u e6e75e41-c881-4a71-8ef5-f87d27b925ba 00:09:00.477 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:00.477 Nvme0n1 : 2.00 17551.00 68.56 0.00 0.00 0.00 0.00 0.00 00:09:00.477 =================================================================================================================== 00:09:00.477 Total : 17551.00 68.56 0.00 
0.00 0.00 0.00 0.00 00:09:00.477 00:09:00.477 true 00:09:00.477 15:04:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e6e75e41-c881-4a71-8ef5-f87d27b925ba 00:09:00.477 15:04:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:00.737 15:04:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:00.737 15:04:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:00.737 15:04:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 93233 00:09:01.308 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:01.308 Nvme0n1 : 3.00 17591.33 68.72 0.00 0.00 0.00 0.00 0.00 00:09:01.308 =================================================================================================================== 00:09:01.308 Total : 17591.33 68.72 0.00 0.00 0.00 0.00 0.00 00:09:01.308 00:09:02.251 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:02.251 Nvme0n1 : 4.00 17633.50 68.88 0.00 0.00 0.00 0.00 0.00 00:09:02.251 =================================================================================================================== 00:09:02.251 Total : 17633.50 68.88 0.00 0.00 0.00 0.00 0.00 00:09:02.251 00:09:03.636 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:03.636 Nvme0n1 : 5.00 17663.60 69.00 0.00 0.00 0.00 0.00 0.00 00:09:03.636 =================================================================================================================== 00:09:03.636 Total : 17663.60 69.00 0.00 0.00 0.00 0.00 0.00 00:09:03.636 00:09:04.580 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:09:04.580 Nvme0n1 : 6.00 17689.00 69.10 0.00 0.00 0.00 0.00 0.00 00:09:04.580 =================================================================================================================== 00:09:04.580 Total : 17689.00 69.10 0.00 0.00 0.00 0.00 0.00 00:09:04.580 00:09:05.521 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:05.521 Nvme0n1 : 7.00 17709.43 69.18 0.00 0.00 0.00 0.00 0.00 00:09:05.521 =================================================================================================================== 00:09:05.521 Total : 17709.43 69.18 0.00 0.00 0.00 0.00 0.00 00:09:05.521 00:09:06.464 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:06.464 Nvme0n1 : 8.00 17724.75 69.24 0.00 0.00 0.00 0.00 0.00 00:09:06.464 =================================================================================================================== 00:09:06.464 Total : 17724.75 69.24 0.00 0.00 0.00 0.00 0.00 00:09:06.464 00:09:07.408 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:07.408 Nvme0n1 : 9.00 17737.56 69.29 0.00 0.00 0.00 0.00 0.00 00:09:07.408 =================================================================================================================== 00:09:07.408 Total : 17737.56 69.29 0.00 0.00 0.00 0.00 0.00 00:09:07.408 00:09:08.350 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:08.350 Nvme0n1 : 10.00 17747.00 69.32 0.00 0.00 0.00 0.00 0.00 00:09:08.350 =================================================================================================================== 00:09:08.350 Total : 17747.00 69.32 0.00 0.00 0.00 0.00 0.00 00:09:08.350 00:09:08.350 00:09:08.350 Latency(us) 00:09:08.350 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:08.350 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:08.350 Nvme0n1 : 10.01 17746.95 69.32 0.00 0.00 7207.91 3986.77 
15400.96 00:09:08.350 =================================================================================================================== 00:09:08.350 Total : 17746.95 69.32 0.00 0.00 7207.91 3986.77 15400.96 00:09:08.350 0 00:09:08.350 15:05:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 92902 00:09:08.350 15:05:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 92902 ']' 00:09:08.350 15:05:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 92902 00:09:08.350 15:05:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:09:08.350 15:05:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:08.350 15:05:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 92902 00:09:08.350 15:05:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:08.350 15:05:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:08.350 15:05:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 92902' 00:09:08.350 killing process with pid 92902 00:09:08.350 15:05:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 92902 00:09:08.350 Received shutdown signal, test time was about 10.000000 seconds 00:09:08.350 00:09:08.350 Latency(us) 00:09:08.350 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:08.350 =================================================================================================================== 00:09:08.350 Total : 0.00 0.00 0.00 0.00 0.00 0.00 
0.00 00:09:08.350 15:05:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 92902 00:09:08.610 15:05:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:08.610 15:05:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:08.871 15:05:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e6e75e41-c881-4a71-8ef5-f87d27b925ba 00:09:08.871 15:05:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:09.133 15:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:09.133 15:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:09:09.133 15:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 89411 00:09:09.133 15:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 89411 00:09:09.133 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 89411 Killed "${NVMF_APP[@]}" "$@" 00:09:09.133 15:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:09:09.133 15:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:09:09.133 15:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # 
timing_enter start_nvmf_tgt 00:09:09.133 15:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:09.133 15:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:09.133 15:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=95378 00:09:09.133 15:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 95378 00:09:09.133 15:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:09.133 15:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 95378 ']' 00:09:09.133 15:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:09.133 15:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:09.133 15:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:09.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:09.133 15:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:09.133 15:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:09.133 [2024-07-25 15:05:01.245999] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:09:09.133 [2024-07-25 15:05:01.246057] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:09.133 EAL: No free 2048 kB hugepages reported on node 1 00:09:09.133 [2024-07-25 15:05:01.312522] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:09.394 [2024-07-25 15:05:01.379890] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:09.395 [2024-07-25 15:05:01.379929] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:09.395 [2024-07-25 15:05:01.379937] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:09.395 [2024-07-25 15:05:01.379943] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:09.395 [2024-07-25 15:05:01.379953] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:09.395 [2024-07-25 15:05:01.379972] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:09.965 15:05:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:09.965 15:05:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:09:09.965 15:05:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:09.966 15:05:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:09.966 15:05:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:09.966 15:05:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:09.966 15:05:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:10.225 [2024-07-25 15:05:02.184871] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:09:10.225 [2024-07-25 15:05:02.184958] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:09:10.225 [2024-07-25 15:05:02.184987] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:09:10.225 15:05:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:09:10.225 15:05:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev a7999b18-8ce0-43b6-8f5d-12a8726dda0b 00:09:10.226 15:05:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=a7999b18-8ce0-43b6-8f5d-12a8726dda0b 
00:09:10.226 15:05:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:10.226 15:05:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:09:10.226 15:05:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:10.226 15:05:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:10.226 15:05:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:10.226 15:05:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b a7999b18-8ce0-43b6-8f5d-12a8726dda0b -t 2000 00:09:10.487 [ 00:09:10.487 { 00:09:10.487 "name": "a7999b18-8ce0-43b6-8f5d-12a8726dda0b", 00:09:10.487 "aliases": [ 00:09:10.487 "lvs/lvol" 00:09:10.487 ], 00:09:10.487 "product_name": "Logical Volume", 00:09:10.487 "block_size": 4096, 00:09:10.487 "num_blocks": 38912, 00:09:10.487 "uuid": "a7999b18-8ce0-43b6-8f5d-12a8726dda0b", 00:09:10.487 "assigned_rate_limits": { 00:09:10.487 "rw_ios_per_sec": 0, 00:09:10.487 "rw_mbytes_per_sec": 0, 00:09:10.487 "r_mbytes_per_sec": 0, 00:09:10.487 "w_mbytes_per_sec": 0 00:09:10.487 }, 00:09:10.487 "claimed": false, 00:09:10.487 "zoned": false, 00:09:10.487 "supported_io_types": { 00:09:10.487 "read": true, 00:09:10.487 "write": true, 00:09:10.487 "unmap": true, 00:09:10.487 "flush": false, 00:09:10.487 "reset": true, 00:09:10.487 "nvme_admin": false, 00:09:10.487 "nvme_io": false, 00:09:10.487 "nvme_io_md": false, 00:09:10.487 "write_zeroes": true, 00:09:10.487 "zcopy": false, 00:09:10.487 "get_zone_info": false, 00:09:10.487 "zone_management": false, 00:09:10.487 "zone_append": 
false, 00:09:10.487 "compare": false, 00:09:10.487 "compare_and_write": false, 00:09:10.487 "abort": false, 00:09:10.487 "seek_hole": true, 00:09:10.487 "seek_data": true, 00:09:10.487 "copy": false, 00:09:10.487 "nvme_iov_md": false 00:09:10.487 }, 00:09:10.487 "driver_specific": { 00:09:10.487 "lvol": { 00:09:10.487 "lvol_store_uuid": "e6e75e41-c881-4a71-8ef5-f87d27b925ba", 00:09:10.487 "base_bdev": "aio_bdev", 00:09:10.487 "thin_provision": false, 00:09:10.487 "num_allocated_clusters": 38, 00:09:10.487 "snapshot": false, 00:09:10.487 "clone": false, 00:09:10.487 "esnap_clone": false 00:09:10.487 } 00:09:10.487 } 00:09:10.487 } 00:09:10.487 ] 00:09:10.487 15:05:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:09:10.487 15:05:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e6e75e41-c881-4a71-8ef5-f87d27b925ba 00:09:10.487 15:05:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:09:10.487 15:05:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:09:10.487 15:05:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e6e75e41-c881-4a71-8ef5-f87d27b925ba 00:09:10.487 15:05:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:09:10.748 15:05:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:09:10.748 15:05:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_delete aio_bdev 00:09:11.010 [2024-07-25 15:05:02.960838] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:11.010 15:05:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e6e75e41-c881-4a71-8ef5-f87d27b925ba 00:09:11.010 15:05:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:09:11.010 15:05:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e6e75e41-c881-4a71-8ef5-f87d27b925ba 00:09:11.010 15:05:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:11.010 15:05:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:11.010 15:05:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:11.010 15:05:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:11.010 15:05:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:11.010 15:05:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:11.010 15:05:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:11.010 15:05:03 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:11.010 15:05:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e6e75e41-c881-4a71-8ef5-f87d27b925ba 00:09:11.010 request: 00:09:11.010 { 00:09:11.010 "uuid": "e6e75e41-c881-4a71-8ef5-f87d27b925ba", 00:09:11.010 "method": "bdev_lvol_get_lvstores", 00:09:11.010 "req_id": 1 00:09:11.010 } 00:09:11.010 Got JSON-RPC error response 00:09:11.010 response: 00:09:11.010 { 00:09:11.010 "code": -19, 00:09:11.010 "message": "No such device" 00:09:11.010 } 00:09:11.010 15:05:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:09:11.010 15:05:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:11.010 15:05:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:11.010 15:05:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:11.010 15:05:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:11.274 aio_bdev 00:09:11.275 15:05:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev a7999b18-8ce0-43b6-8f5d-12a8726dda0b 00:09:11.275 15:05:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=a7999b18-8ce0-43b6-8f5d-12a8726dda0b 00:09:11.275 15:05:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:11.275 15:05:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:09:11.275 15:05:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:11.275 15:05:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:11.275 15:05:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:11.543 15:05:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b a7999b18-8ce0-43b6-8f5d-12a8726dda0b -t 2000 00:09:11.543 [ 00:09:11.543 { 00:09:11.543 "name": "a7999b18-8ce0-43b6-8f5d-12a8726dda0b", 00:09:11.544 "aliases": [ 00:09:11.544 "lvs/lvol" 00:09:11.544 ], 00:09:11.544 "product_name": "Logical Volume", 00:09:11.544 "block_size": 4096, 00:09:11.544 "num_blocks": 38912, 00:09:11.544 "uuid": "a7999b18-8ce0-43b6-8f5d-12a8726dda0b", 00:09:11.544 "assigned_rate_limits": { 00:09:11.544 "rw_ios_per_sec": 0, 00:09:11.544 "rw_mbytes_per_sec": 0, 00:09:11.544 "r_mbytes_per_sec": 0, 00:09:11.544 "w_mbytes_per_sec": 0 00:09:11.544 }, 00:09:11.544 "claimed": false, 00:09:11.544 "zoned": false, 00:09:11.544 "supported_io_types": { 00:09:11.544 "read": true, 00:09:11.544 "write": true, 00:09:11.544 "unmap": true, 00:09:11.544 "flush": false, 00:09:11.544 "reset": true, 00:09:11.544 "nvme_admin": false, 00:09:11.544 "nvme_io": false, 00:09:11.544 "nvme_io_md": false, 00:09:11.544 "write_zeroes": true, 00:09:11.544 "zcopy": false, 00:09:11.544 "get_zone_info": false, 00:09:11.544 "zone_management": false, 00:09:11.544 "zone_append": false, 00:09:11.544 "compare": false, 00:09:11.544 "compare_and_write": false, 
00:09:11.544 "abort": false, 00:09:11.544 "seek_hole": true, 00:09:11.544 "seek_data": true, 00:09:11.544 "copy": false, 00:09:11.544 "nvme_iov_md": false 00:09:11.544 }, 00:09:11.544 "driver_specific": { 00:09:11.544 "lvol": { 00:09:11.544 "lvol_store_uuid": "e6e75e41-c881-4a71-8ef5-f87d27b925ba", 00:09:11.544 "base_bdev": "aio_bdev", 00:09:11.544 "thin_provision": false, 00:09:11.544 "num_allocated_clusters": 38, 00:09:11.544 "snapshot": false, 00:09:11.544 "clone": false, 00:09:11.544 "esnap_clone": false 00:09:11.544 } 00:09:11.544 } 00:09:11.544 } 00:09:11.544 ] 00:09:11.544 15:05:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:09:11.544 15:05:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:11.544 15:05:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e6e75e41-c881-4a71-8ef5-f87d27b925ba 00:09:11.805 15:05:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:11.805 15:05:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e6e75e41-c881-4a71-8ef5-f87d27b925ba 00:09:11.805 15:05:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:12.066 15:05:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:12.066 15:05:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete a7999b18-8ce0-43b6-8f5d-12a8726dda0b 00:09:12.066 15:05:04 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e6e75e41-c881-4a71-8ef5-f87d27b925ba 00:09:12.327 15:05:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:12.327 15:05:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:12.588 00:09:12.588 real 0m17.166s 00:09:12.588 user 0m44.352s 00:09:12.588 sys 0m3.095s 00:09:12.588 15:05:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:12.589 15:05:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:12.589 ************************************ 00:09:12.589 END TEST lvs_grow_dirty 00:09:12.589 ************************************ 00:09:12.589 15:05:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:09:12.589 15:05:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:09:12.589 15:05:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:09:12.589 15:05:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:09:12.589 15:05:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:09:12.589 15:05:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:09:12.589 15:05:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:09:12.589 15:05:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- common/autotest_common.sh@820 -- # for n in $shm_files 00:09:12.589 15:05:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:09:12.589 nvmf_trace.0 00:09:12.589 15:05:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:09:12.589 15:05:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:09:12.589 15:05:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:12.589 15:05:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:09:12.589 15:05:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:12.589 15:05:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:09:12.589 15:05:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:12.589 15:05:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:12.589 rmmod nvme_tcp 00:09:12.589 rmmod nvme_fabrics 00:09:12.589 rmmod nvme_keyring 00:09:12.589 15:05:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:12.589 15:05:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:09:12.589 15:05:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:09:12.589 15:05:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 95378 ']' 00:09:12.589 15:05:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 95378 00:09:12.589 15:05:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 95378 ']' 00:09:12.589 15:05:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 95378 00:09:12.589 
15:05:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:09:12.589 15:05:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:12.589 15:05:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 95378 00:09:12.850 15:05:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:12.851 15:05:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:12.851 15:05:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 95378' 00:09:12.851 killing process with pid 95378 00:09:12.851 15:05:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 95378 00:09:12.851 15:05:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 95378 00:09:12.851 15:05:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:12.851 15:05:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:12.851 15:05:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:12.851 15:05:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:12.851 15:05:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:12.851 15:05:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:12.851 15:05:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:12.851 15:05:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:15.429 15:05:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:15.429 00:09:15.429 real 0m43.289s 00:09:15.429 user 1m5.289s 00:09:15.429 sys 0m10.157s 00:09:15.429 15:05:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:15.429 15:05:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:15.429 ************************************ 00:09:15.429 END TEST nvmf_lvs_grow 00:09:15.429 ************************************ 00:09:15.429 15:05:07 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:15.429 15:05:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:15.429 15:05:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:15.429 15:05:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:15.429 ************************************ 00:09:15.429 START TEST nvmf_bdev_io_wait 00:09:15.429 ************************************ 00:09:15.429 15:05:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:15.429 * Looking for test storage... 
00:09:15.429 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:15.429 15:05:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:15.429 15:05:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:09:15.429 15:05:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:15.429 15:05:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:15.429 15:05:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:15.429 15:05:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:15.429 15:05:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:15.429 15:05:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:15.430 15:05:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:15.430 15:05:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:15.430 15:05:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:15.430 15:05:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:15.430 15:05:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:15.430 15:05:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:15.430 15:05:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:09:15.430 15:05:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:15.430 15:05:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:15.430 15:05:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:15.430 15:05:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:15.430 15:05:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:15.430 15:05:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:15.430 15:05:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:15.430 15:05:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:15.430 15:05:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:15.430 15:05:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:15.430 15:05:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:09:15.430 15:05:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:15.430 15:05:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:09:15.430 15:05:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:15.430 15:05:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:15.430 15:05:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:15.430 15:05:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:15.430 15:05:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:15.430 15:05:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:15.430 15:05:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:15.430 15:05:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:15.430 15:05:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:15.430 15:05:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:15.430 15:05:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:09:15.430 15:05:07 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:15.430 15:05:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:15.430 15:05:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:15.430 15:05:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:15.430 15:05:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:15.430 15:05:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:15.430 15:05:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:15.430 15:05:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:15.430 15:05:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:15.430 15:05:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:15.430 15:05:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:09:15.430 15:05:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:22.024 15:05:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:22.024 15:05:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:09:22.024 15:05:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:22.024 15:05:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:22.024 15:05:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:22.024 15:05:13 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:22.024 15:05:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:22.024 15:05:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:09:22.024 15:05:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:22.024 15:05:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:09:22.024 15:05:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:09:22.024 15:05:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:09:22.024 15:05:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:09:22.024 15:05:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:09:22.024 15:05:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:09:22.024 15:05:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:22.024 15:05:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:22.024 15:05:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:22.024 15:05:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:22.024 15:05:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:22.024 15:05:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:22.024 15:05:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 
00:09:22.024 15:05:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:22.024 15:05:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:22.024 15:05:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:22.024 15:05:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:22.024 15:05:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:22.024 15:05:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:22.024 15:05:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:22.024 15:05:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:22.024 15:05:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:22.024 15:05:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:22.024 15:05:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:22.024 15:05:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:22.024 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:22.024 15:05:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:22.024 15:05:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:22.024 15:05:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:22.024 15:05:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == 
\0\x\1\0\1\9 ]] 00:09:22.024 15:05:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:22.024 15:05:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:22.024 15:05:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:22.024 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:22.024 15:05:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:22.024 15:05:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:22.024 15:05:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:22.024 15:05:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:22.024 15:05:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:22.024 15:05:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:22.024 15:05:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:22.024 15:05:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:22.024 15:05:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:22.024 15:05:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:22.024 15:05:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:22.024 15:05:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:22.024 15:05:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:22.024 15:05:13 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:22.024 15:05:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:22.024 15:05:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:22.024 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:22.024 15:05:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:22.024 15:05:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:22.024 15:05:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:22.024 15:05:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:22.024 15:05:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:22.024 15:05:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:22.024 15:05:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:22.024 15:05:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:22.024 15:05:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:22.024 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:22.024 15:05:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:22.024 15:05:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:22.024 15:05:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:09:22.024 15:05:13 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:22.024 15:05:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:22.024 15:05:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:22.024 15:05:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:22.024 15:05:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:22.024 15:05:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:22.024 15:05:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:22.024 15:05:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:22.024 15:05:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:22.024 15:05:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:22.024 15:05:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:22.024 15:05:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:22.024 15:05:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:22.024 15:05:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:22.024 15:05:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:22.024 15:05:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:22.024 15:05:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:22.024 15:05:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:22.024 15:05:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:22.025 15:05:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:22.286 15:05:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:22.286 15:05:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:22.286 15:05:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:22.286 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:22.286 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.490 ms 00:09:22.286 00:09:22.286 --- 10.0.0.2 ping statistics --- 00:09:22.286 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:22.286 rtt min/avg/max/mdev = 0.490/0.490/0.490/0.000 ms 00:09:22.286 15:05:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:22.286 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:22.286 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.388 ms 00:09:22.286 00:09:22.286 --- 10.0.0.1 ping statistics --- 00:09:22.286 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:22.286 rtt min/avg/max/mdev = 0.388/0.388/0.388/0.000 ms 00:09:22.286 15:05:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:22.286 15:05:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:09:22.286 15:05:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:22.286 15:05:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:22.286 15:05:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:22.286 15:05:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:22.286 15:05:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:22.286 15:05:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:22.286 15:05:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:22.286 15:05:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:09:22.286 15:05:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:22.286 15:05:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:22.286 15:05:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:22.286 15:05:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=100325 00:09:22.286 15:05:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@482 -- # waitforlisten 100325 00:09:22.287 15:05:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:09:22.287 15:05:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 100325 ']' 00:09:22.287 15:05:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:22.287 15:05:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:22.287 15:05:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:22.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:22.287 15:05:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:22.287 15:05:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:22.287 [2024-07-25 15:05:14.378436] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:09:22.287 [2024-07-25 15:05:14.378495] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:22.287 EAL: No free 2048 kB hugepages reported on node 1 00:09:22.287 [2024-07-25 15:05:14.444669] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:22.548 [2024-07-25 15:05:14.511525] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:09:22.548 [2024-07-25 15:05:14.511579] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:22.548 [2024-07-25 15:05:14.511587] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:22.548 [2024-07-25 15:05:14.511594] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:22.548 [2024-07-25 15:05:14.511599] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:22.548 [2024-07-25 15:05:14.511738] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:22.548 [2024-07-25 15:05:14.511850] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:22.548 [2024-07-25 15:05:14.512007] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:22.548 [2024-07-25 15:05:14.512008] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:23.121 15:05:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:23.121 15:05:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:09:23.121 15:05:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:23.121 15:05:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:23.121 15:05:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:23.121 15:05:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:23.121 15:05:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:09:23.121 15:05:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.121 
15:05:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:23.121 15:05:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.121 15:05:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:09:23.121 15:05:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.121 15:05:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:23.121 15:05:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.121 15:05:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:23.121 15:05:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.121 15:05:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:23.121 [2024-07-25 15:05:15.260191] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:23.121 15:05:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.121 15:05:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:23.121 15:05:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.121 15:05:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:23.121 Malloc0 00:09:23.121 15:05:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.121 15:05:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:23.121 
15:05:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.121 15:05:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:23.383 15:05:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.383 15:05:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:23.383 15:05:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.383 15:05:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:23.383 15:05:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.383 15:05:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:23.383 15:05:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.383 15:05:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:23.383 [2024-07-25 15:05:15.335353] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:23.383 15:05:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.383 15:05:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=100659 00:09:23.383 15:05:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=100662 00:09:23.384 15:05:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 
00:09:23.384 15:05:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:09:23.384 15:05:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:09:23.384 15:05:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:09:23.384 15:05:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:23.384 15:05:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:23.384 { 00:09:23.384 "params": { 00:09:23.384 "name": "Nvme$subsystem", 00:09:23.384 "trtype": "$TEST_TRANSPORT", 00:09:23.384 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:23.384 "adrfam": "ipv4", 00:09:23.384 "trsvcid": "$NVMF_PORT", 00:09:23.384 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:23.384 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:23.384 "hdgst": ${hdgst:-false}, 00:09:23.384 "ddgst": ${ddgst:-false} 00:09:23.384 }, 00:09:23.384 "method": "bdev_nvme_attach_controller" 00:09:23.384 } 00:09:23.384 EOF 00:09:23.384 )") 00:09:23.384 15:05:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=100665 00:09:23.384 15:05:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:09:23.384 15:05:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:09:23.384 15:05:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:09:23.384 15:05:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:09:23.384 15:05:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:23.384 15:05:15 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=100668 00:09:23.384 15:05:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:23.384 { 00:09:23.384 "params": { 00:09:23.384 "name": "Nvme$subsystem", 00:09:23.384 "trtype": "$TEST_TRANSPORT", 00:09:23.384 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:23.384 "adrfam": "ipv4", 00:09:23.384 "trsvcid": "$NVMF_PORT", 00:09:23.384 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:23.384 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:23.384 "hdgst": ${hdgst:-false}, 00:09:23.384 "ddgst": ${ddgst:-false} 00:09:23.384 }, 00:09:23.384 "method": "bdev_nvme_attach_controller" 00:09:23.384 } 00:09:23.384 EOF 00:09:23.384 )") 00:09:23.384 15:05:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:09:23.384 15:05:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:09:23.384 15:05:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:09:23.384 15:05:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:09:23.384 15:05:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:09:23.384 15:05:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:09:23.384 15:05:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:23.384 15:05:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:09:23.384 15:05:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 
-- # config+=("$(cat <<-EOF 00:09:23.384 { 00:09:23.384 "params": { 00:09:23.384 "name": "Nvme$subsystem", 00:09:23.384 "trtype": "$TEST_TRANSPORT", 00:09:23.384 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:23.384 "adrfam": "ipv4", 00:09:23.384 "trsvcid": "$NVMF_PORT", 00:09:23.384 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:23.384 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:23.384 "hdgst": ${hdgst:-false}, 00:09:23.384 "ddgst": ${ddgst:-false} 00:09:23.384 }, 00:09:23.384 "method": "bdev_nvme_attach_controller" 00:09:23.384 } 00:09:23.384 EOF 00:09:23.384 )") 00:09:23.384 15:05:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:09:23.384 15:05:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:09:23.384 15:05:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:09:23.384 15:05:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:09:23.384 15:05:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:23.384 15:05:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:23.384 { 00:09:23.384 "params": { 00:09:23.384 "name": "Nvme$subsystem", 00:09:23.384 "trtype": "$TEST_TRANSPORT", 00:09:23.384 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:23.384 "adrfam": "ipv4", 00:09:23.384 "trsvcid": "$NVMF_PORT", 00:09:23.384 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:23.384 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:23.384 "hdgst": ${hdgst:-false}, 00:09:23.384 "ddgst": ${ddgst:-false} 00:09:23.384 }, 00:09:23.384 "method": "bdev_nvme_attach_controller" 00:09:23.384 } 00:09:23.384 EOF 00:09:23.384 )") 00:09:23.384 15:05:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:09:23.384 15:05:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@37 -- # wait 100659 00:09:23.384 15:05:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:09:23.384 15:05:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:09:23.384 15:05:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:09:23.384 15:05:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:09:23.384 15:05:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:09:23.384 15:05:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:23.384 "params": { 00:09:23.384 "name": "Nvme1", 00:09:23.384 "trtype": "tcp", 00:09:23.384 "traddr": "10.0.0.2", 00:09:23.384 "adrfam": "ipv4", 00:09:23.384 "trsvcid": "4420", 00:09:23.384 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:23.384 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:23.384 "hdgst": false, 00:09:23.384 "ddgst": false 00:09:23.384 }, 00:09:23.384 "method": "bdev_nvme_attach_controller" 00:09:23.384 }' 00:09:23.384 15:05:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
00:09:23.384 15:05:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:09:23.384 15:05:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:23.384 "params": { 00:09:23.384 "name": "Nvme1", 00:09:23.384 "trtype": "tcp", 00:09:23.384 "traddr": "10.0.0.2", 00:09:23.384 "adrfam": "ipv4", 00:09:23.384 "trsvcid": "4420", 00:09:23.384 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:23.384 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:23.384 "hdgst": false, 00:09:23.384 "ddgst": false 00:09:23.384 }, 00:09:23.384 "method": "bdev_nvme_attach_controller" 00:09:23.384 }' 00:09:23.384 15:05:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:09:23.384 15:05:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:23.384 "params": { 00:09:23.384 "name": "Nvme1", 00:09:23.384 "trtype": "tcp", 00:09:23.384 "traddr": "10.0.0.2", 00:09:23.384 "adrfam": "ipv4", 00:09:23.384 "trsvcid": "4420", 00:09:23.384 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:23.384 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:23.384 "hdgst": false, 00:09:23.384 "ddgst": false 00:09:23.384 }, 00:09:23.384 "method": "bdev_nvme_attach_controller" 00:09:23.384 }' 00:09:23.384 15:05:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:09:23.384 15:05:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:23.384 "params": { 00:09:23.384 "name": "Nvme1", 00:09:23.384 "trtype": "tcp", 00:09:23.384 "traddr": "10.0.0.2", 00:09:23.384 "adrfam": "ipv4", 00:09:23.384 "trsvcid": "4420", 00:09:23.384 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:23.384 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:23.384 "hdgst": false, 00:09:23.384 "ddgst": false 00:09:23.384 }, 00:09:23.384 "method": "bdev_nvme_attach_controller" 00:09:23.384 }' 00:09:23.384 [2024-07-25 15:05:15.388276] Starting SPDK v24.09-pre git sha1 
704257090 / DPDK 24.03.0 initialization... 00:09:23.384 [2024-07-25 15:05:15.388328] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:09:23.384 [2024-07-25 15:05:15.392132] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:09:23.384 [2024-07-25 15:05:15.392176] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:09:23.384 [2024-07-25 15:05:15.392432] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:09:23.384 [2024-07-25 15:05:15.392478] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:09:23.385 [2024-07-25 15:05:15.392476] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:09:23.385 [2024-07-25 15:05:15.392519] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:09:23.385 EAL: No free 2048 kB hugepages reported on node 1 00:09:23.385 EAL: No free 2048 kB hugepages reported on node 1 00:09:23.385 [2024-07-25 15:05:15.532186] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:23.385 EAL: No free 2048 kB hugepages reported on node 1 00:09:23.646 [2024-07-25 15:05:15.582833] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:09:23.646 [2024-07-25 15:05:15.592992] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:23.646 EAL: No free 2048 kB hugepages reported on node 1 00:09:23.646 [2024-07-25 15:05:15.637188] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:23.646 [2024-07-25 15:05:15.644749] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:09:23.646 [2024-07-25 15:05:15.686901] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:23.646 [2024-07-25 15:05:15.687020] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:09:23.646 [2024-07-25 15:05:15.735918] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:09:23.646 Running I/O for 1 seconds... 00:09:23.907 Running I/O for 1 seconds... 00:09:23.907 Running I/O for 1 seconds... 00:09:23.907 Running I/O for 1 seconds... 
00:09:24.851 00:09:24.851 Latency(us) 00:09:24.851 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:24.851 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:09:24.851 Nvme1n1 : 1.01 12082.38 47.20 0.00 0.00 10538.98 2580.48 15947.09 00:09:24.851 =================================================================================================================== 00:09:24.851 Total : 12082.38 47.20 0.00 0.00 10538.98 2580.48 15947.09 00:09:24.851 00:09:24.851 Latency(us) 00:09:24.851 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:24.851 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:09:24.851 Nvme1n1 : 1.01 12828.23 50.11 0.00 0.00 9936.82 5488.64 22719.15 00:09:24.851 =================================================================================================================== 00:09:24.851 Total : 12828.23 50.11 0.00 0.00 9936.82 5488.64 22719.15 00:09:24.851 00:09:24.851 Latency(us) 00:09:24.851 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:24.851 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:09:24.851 Nvme1n1 : 1.00 11754.68 45.92 0.00 0.00 10867.48 3140.27 24903.68 00:09:24.851 =================================================================================================================== 00:09:24.851 Total : 11754.68 45.92 0.00 0.00 10867.48 3140.27 24903.68 00:09:24.851 00:09:24.851 Latency(us) 00:09:24.851 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:24.851 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:09:24.851 Nvme1n1 : 1.00 187823.20 733.68 0.00 0.00 679.10 269.65 757.76 00:09:24.851 =================================================================================================================== 00:09:24.851 Total : 187823.20 733.68 0.00 0.00 679.10 269.65 757.76 00:09:25.113 15:05:17 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 100662 00:09:25.113 15:05:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 100665 00:09:25.113 15:05:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 100668 00:09:25.113 15:05:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:25.113 15:05:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.113 15:05:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:25.113 15:05:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.113 15:05:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:09:25.113 15:05:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:09:25.113 15:05:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:25.113 15:05:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:09:25.113 15:05:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:25.113 15:05:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:09:25.113 15:05:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:25.113 15:05:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:25.113 rmmod nvme_tcp 00:09:25.113 rmmod nvme_fabrics 00:09:25.113 rmmod nvme_keyring 00:09:25.113 15:05:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:25.113 15:05:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 
-- # set -e 00:09:25.113 15:05:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:09:25.113 15:05:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 100325 ']' 00:09:25.113 15:05:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 100325 00:09:25.113 15:05:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 100325 ']' 00:09:25.113 15:05:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 100325 00:09:25.113 15:05:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:09:25.113 15:05:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:25.113 15:05:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 100325 00:09:25.374 15:05:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:25.374 15:05:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:25.374 15:05:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 100325' 00:09:25.374 killing process with pid 100325 00:09:25.374 15:05:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 100325 00:09:25.374 15:05:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 100325 00:09:25.374 15:05:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:25.374 15:05:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:25.374 15:05:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:25.374 15:05:17 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:25.374 15:05:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:25.374 15:05:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:25.374 15:05:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:25.374 15:05:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:27.920 15:05:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:27.920 00:09:27.920 real 0m12.442s 00:09:27.920 user 0m19.183s 00:09:27.920 sys 0m6.641s 00:09:27.920 15:05:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:27.920 15:05:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:27.920 ************************************ 00:09:27.920 END TEST nvmf_bdev_io_wait 00:09:27.920 ************************************ 00:09:27.920 15:05:19 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:27.920 15:05:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:27.920 15:05:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:27.920 15:05:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:27.920 ************************************ 00:09:27.920 START TEST nvmf_queue_depth 00:09:27.920 ************************************ 00:09:27.920 15:05:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:27.920 * Looking for test storage... 00:09:27.920 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:27.920 15:05:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:27.920 15:05:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:09:27.920 15:05:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:27.920 15:05:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:27.920 15:05:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:27.920 15:05:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:27.920 15:05:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:27.920 15:05:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:27.920 15:05:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:27.920 15:05:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:27.920 15:05:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:27.920 15:05:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:27.920 15:05:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:27.920 15:05:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:27.920 15:05:19 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:27.920 15:05:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:27.920 15:05:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:27.920 15:05:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:27.920 15:05:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:27.921 15:05:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:27.921 15:05:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:27.921 15:05:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:27.921 15:05:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.921 15:05:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.921 15:05:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.921 15:05:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:09:27.921 15:05:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.921 15:05:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:09:27.921 15:05:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:27.921 15:05:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:27.921 15:05:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:27.921 15:05:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:27.921 15:05:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:27.921 15:05:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:27.921 15:05:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:27.921 15:05:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:27.921 15:05:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:09:27.921 15:05:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:09:27.921 15:05:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:27.921 
15:05:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:09:27.921 15:05:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:27.921 15:05:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:27.921 15:05:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:27.921 15:05:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:27.921 15:05:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:27.921 15:05:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:27.921 15:05:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:27.921 15:05:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:27.921 15:05:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:27.921 15:05:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:27.921 15:05:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:09:27.921 15:05:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:34.510 15:05:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:34.510 15:05:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:09:34.510 15:05:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:34.510 15:05:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:34.510 15:05:26 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:34.510 15:05:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:34.510 15:05:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:34.510 15:05:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:09:34.510 15:05:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:34.510 15:05:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:09:34.510 15:05:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:09:34.510 15:05:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:09:34.510 15:05:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:09:34.510 15:05:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:09:34.510 15:05:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:09:34.510 15:05:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:34.510 15:05:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:34.510 15:05:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:34.510 15:05:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:34.510 15:05:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:34.510 15:05:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:34.510 15:05:26 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:34.510 15:05:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:34.510 15:05:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:34.510 15:05:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:34.510 15:05:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:34.510 15:05:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:34.510 15:05:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:34.510 15:05:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:34.510 15:05:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:34.510 15:05:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:34.510 15:05:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:34.510 15:05:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:34.510 15:05:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:34.510 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:34.510 15:05:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:34.510 15:05:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:34.510 15:05:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:34.510 
15:05:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:34.510 15:05:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:34.510 15:05:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:34.510 15:05:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:34.510 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:34.510 15:05:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:34.510 15:05:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:34.510 15:05:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:34.510 15:05:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:34.510 15:05:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:34.510 15:05:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:34.510 15:05:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:34.510 15:05:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:34.510 15:05:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:34.510 15:05:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:34.510 15:05:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:34.510 15:05:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:34.510 15:05:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:09:34.510 15:05:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:34.510 15:05:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:34.510 15:05:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:34.510 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:34.510 15:05:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:34.510 15:05:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:34.510 15:05:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:34.510 15:05:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:34.510 15:05:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:34.510 15:05:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:34.510 15:05:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:34.510 15:05:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:34.510 15:05:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:34.510 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:34.510 15:05:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:34.510 15:05:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:34.510 15:05:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:09:34.510 
15:05:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:34.510 15:05:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:34.510 15:05:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:34.510 15:05:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:34.510 15:05:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:34.510 15:05:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:34.510 15:05:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:34.510 15:05:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:34.510 15:05:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:34.510 15:05:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:34.510 15:05:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:34.511 15:05:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:34.511 15:05:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:34.511 15:05:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:34.511 15:05:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:34.511 15:05:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:34.511 15:05:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:34.511 15:05:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:34.511 15:05:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:34.511 15:05:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:34.511 15:05:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:34.511 15:05:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:34.511 15:05:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:34.511 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:34.511 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.518 ms 00:09:34.511 00:09:34.511 --- 10.0.0.2 ping statistics --- 00:09:34.511 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:34.511 rtt min/avg/max/mdev = 0.518/0.518/0.518/0.000 ms 00:09:34.511 15:05:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:34.511 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:34.511 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.518 ms 00:09:34.511 00:09:34.511 --- 10.0.0.1 ping statistics --- 00:09:34.511 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:34.511 rtt min/avg/max/mdev = 0.518/0.518/0.518/0.000 ms 00:09:34.511 15:05:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:34.511 15:05:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:09:34.511 15:05:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:34.511 15:05:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:34.511 15:05:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:34.511 15:05:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:34.511 15:05:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:34.511 15:05:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:34.511 15:05:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:34.772 15:05:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:09:34.772 15:05:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:34.772 15:05:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:34.772 15:05:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:34.772 15:05:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=105042 00:09:34.772 15:05:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 105042 
00:09:34.772 15:05:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:34.772 15:05:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 105042 ']' 00:09:34.772 15:05:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:34.772 15:05:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:34.772 15:05:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:34.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:34.772 15:05:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:34.772 15:05:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:34.772 [2024-07-25 15:05:26.795111] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:09:34.772 [2024-07-25 15:05:26.795178] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:34.772 EAL: No free 2048 kB hugepages reported on node 1 00:09:34.772 [2024-07-25 15:05:26.882210] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:35.034 [2024-07-25 15:05:26.975629] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:35.034 [2024-07-25 15:05:26.975680] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:35.034 [2024-07-25 15:05:26.975689] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:35.034 [2024-07-25 15:05:26.975696] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:35.034 [2024-07-25 15:05:26.975703] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:35.034 [2024-07-25 15:05:26.975730] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:35.608 15:05:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:35.608 15:05:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:09:35.608 15:05:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:35.608 15:05:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:35.608 15:05:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:35.608 15:05:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:35.608 15:05:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:35.608 15:05:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.608 15:05:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:35.608 [2024-07-25 15:05:27.629039] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:35.608 15:05:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.608 15:05:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 
00:09:35.608 15:05:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.608 15:05:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:35.608 Malloc0 00:09:35.608 15:05:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.608 15:05:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:35.608 15:05:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.608 15:05:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:35.608 15:05:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.609 15:05:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:35.609 15:05:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.609 15:05:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:35.609 15:05:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.609 15:05:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:35.609 15:05:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.609 15:05:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:35.609 [2024-07-25 15:05:27.701310] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:35.609 15:05:27 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.609 15:05:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=105387 00:09:35.609 15:05:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:35.609 15:05:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:09:35.609 15:05:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 105387 /var/tmp/bdevperf.sock 00:09:35.609 15:05:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 105387 ']' 00:09:35.609 15:05:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:35.609 15:05:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:35.609 15:05:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:35.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:35.609 15:05:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:35.609 15:05:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:35.609 [2024-07-25 15:05:27.764560] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:09:35.609 [2024-07-25 15:05:27.764630] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105387 ] 00:09:35.609 EAL: No free 2048 kB hugepages reported on node 1 00:09:35.870 [2024-07-25 15:05:27.830910] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:35.870 [2024-07-25 15:05:27.907453] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:36.442 15:05:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:36.442 15:05:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:09:36.442 15:05:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:09:36.442 15:05:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.442 15:05:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:36.704 NVMe0n1 00:09:36.704 15:05:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.704 15:05:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:36.704 Running I/O for 10 seconds... 
00:09:48.987 00:09:48.987 Latency(us) 00:09:48.987 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:48.987 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:09:48.987 Verification LBA range: start 0x0 length 0x4000 00:09:48.987 NVMe0n1 : 10.08 11485.40 44.86 0.00 0.00 88510.18 4560.21 68594.35 00:09:48.987 =================================================================================================================== 00:09:48.987 Total : 11485.40 44.86 0.00 0.00 88510.18 4560.21 68594.35 00:09:48.987 0 00:09:48.987 15:05:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 105387 00:09:48.987 15:05:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 105387 ']' 00:09:48.987 15:05:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 105387 00:09:48.987 15:05:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:09:48.987 15:05:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:48.987 15:05:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 105387 00:09:48.987 15:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:48.987 15:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:48.987 15:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 105387' 00:09:48.987 killing process with pid 105387 00:09:48.987 15:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 105387 00:09:48.987 Received shutdown signal, test time was about 10.000000 seconds 00:09:48.987 00:09:48.987 Latency(us) 00:09:48.987 Device Information : 
runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:48.987 =================================================================================================================== 00:09:48.987 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:48.987 15:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 105387 00:09:48.987 15:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:48.987 15:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:09:48.987 15:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:48.988 15:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:09:48.988 15:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:48.988 15:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:09:48.988 15:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:48.988 15:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:48.988 rmmod nvme_tcp 00:09:48.988 rmmod nvme_fabrics 00:09:48.988 rmmod nvme_keyring 00:09:48.988 15:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:48.988 15:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:09:48.988 15:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:09:48.988 15:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 105042 ']' 00:09:48.988 15:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 105042 00:09:48.988 15:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 105042 ']' 00:09:48.988 15:05:39 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 105042 00:09:48.988 15:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:09:48.988 15:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:48.988 15:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 105042 00:09:48.988 15:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:48.988 15:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:48.988 15:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 105042' 00:09:48.988 killing process with pid 105042 00:09:48.988 15:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 105042 00:09:48.988 15:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 105042 00:09:48.988 15:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:48.988 15:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:48.988 15:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:48.988 15:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:48.988 15:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:48.988 15:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:48.988 15:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:48.988 
15:05:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:49.561 15:05:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:49.561 00:09:49.561 real 0m21.902s 00:09:49.561 user 0m25.672s 00:09:49.561 sys 0m6.451s 00:09:49.561 15:05:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:49.561 15:05:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:49.561 ************************************ 00:09:49.561 END TEST nvmf_queue_depth 00:09:49.561 ************************************ 00:09:49.561 15:05:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:49.561 15:05:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:49.561 15:05:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:49.561 15:05:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:49.561 ************************************ 00:09:49.561 START TEST nvmf_target_multipath 00:09:49.561 ************************************ 00:09:49.561 15:05:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:49.561 * Looking for test storage... 
00:09:49.561 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:49.561 15:05:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:49.561 15:05:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:09:49.561 15:05:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:49.561 15:05:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:49.561 15:05:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:49.561 15:05:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:49.561 15:05:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:49.561 15:05:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:49.561 15:05:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:49.561 15:05:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:49.561 15:05:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:49.561 15:05:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:49.561 15:05:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:49.561 15:05:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:49.561 15:05:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:49.561 15:05:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:49.561 15:05:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:49.562 15:05:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:49.562 15:05:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:49.562 15:05:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:49.562 15:05:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:49.562 15:05:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:49.562 15:05:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:49.562 15:05:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:49.562 15:05:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:49.562 15:05:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:09:49.562 15:05:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:49.562 15:05:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:09:49.562 15:05:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:49.562 15:05:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:49.562 15:05:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:49.562 15:05:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:49.562 15:05:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:49.562 15:05:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:49.562 15:05:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:49.562 15:05:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:49.562 15:05:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:49.562 15:05:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:49.562 15:05:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # 
nqn=nqn.2016-06.io.spdk:cnode1 00:09:49.562 15:05:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:49.562 15:05:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:09:49.562 15:05:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:49.562 15:05:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:49.562 15:05:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:49.562 15:05:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:49.562 15:05:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:49.562 15:05:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:49.562 15:05:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:49.562 15:05:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:49.562 15:05:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:49.562 15:05:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:49.562 15:05:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:09:49.562 15:05:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:57.707 15:05:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:57.707 15:05:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@291 -- # pci_devs=() 00:09:57.707 15:05:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:57.707 15:05:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:57.707 15:05:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:57.707 15:05:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:57.707 15:05:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:57.707 15:05:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:09:57.707 15:05:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:57.707 15:05:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:09:57.707 15:05:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:09:57.707 15:05:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:09:57.707 15:05:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:09:57.707 15:05:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:09:57.707 15:05:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:09:57.707 15:05:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:57.707 15:05:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:57.707 15:05:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:57.707 15:05:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@306 -- 
# mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:57.707 15:05:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:57.707 15:05:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:57.707 15:05:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:57.707 15:05:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:57.707 15:05:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:57.707 15:05:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:57.707 15:05:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:57.707 15:05:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:57.707 15:05:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:57.707 15:05:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:57.707 15:05:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:57.707 15:05:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:57.707 15:05:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:57.707 15:05:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:57.707 15:05:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 
00:09:57.707 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:57.707 15:05:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:57.707 15:05:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:57.707 15:05:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:57.707 15:05:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:57.707 15:05:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:57.707 15:05:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:57.707 15:05:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:57.707 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:57.707 15:05:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:57.707 15:05:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:57.707 15:05:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:57.707 15:05:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:57.707 15:05:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:57.707 15:05:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:57.707 15:05:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:57.707 15:05:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:57.707 15:05:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:57.707 15:05:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:57.707 15:05:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:57.707 15:05:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:57.708 15:05:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:57.708 15:05:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:57.708 15:05:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:57.708 15:05:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:57.708 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:57.708 15:05:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:57.708 15:05:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:57.708 15:05:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:57.708 15:05:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:57.708 15:05:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:57.708 15:05:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:57.708 15:05:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:57.708 15:05:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:57.708 15:05:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:57.708 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:57.708 15:05:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:57.708 15:05:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:57.708 15:05:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:09:57.708 15:05:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:57.708 15:05:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:57.708 15:05:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:57.708 15:05:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:57.708 15:05:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:57.708 15:05:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:57.708 15:05:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:57.708 15:05:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:57.708 15:05:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:57.708 15:05:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:57.708 15:05:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:57.708 15:05:48 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:57.708 15:05:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:57.708 15:05:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:57.708 15:05:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:57.708 15:05:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:57.708 15:05:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:57.708 15:05:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:57.708 15:05:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:57.708 15:05:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:57.708 15:05:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:57.708 15:05:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:57.708 15:05:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:57.708 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
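The xtrace records above show the network plumbing `nvmf_tcp_init` performs before a TCP test: flush both NICs, move the target NIC into a fresh namespace, address the two ends from 10.0.0.0/24, bring links up, open the NVMe-oF port, and verify with a ping. As a rough dry-run sketch (not the harness's actual nvmf/common.sh; the interface names, namespace name, and addresses are assumptions copied from this log), the sequence can be printed rather than executed, so it is safe to run without root:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the netns setup nvmf_tcp_init performs in this log.
# TARGET_IF/INITIATOR_IF, the namespace name, and the 10.0.0.0/24
# addresses are assumptions taken from the xtrace above.
set -euo pipefail

TARGET_IF=cvl_0_0          # NIC handed to the SPDK target
INITIATOR_IF=cvl_0_1       # NIC left in the root namespace for the initiator
NS=cvl_0_0_ns_spdk         # namespace isolating the target NIC

run() { echo "+ $*"; }     # print instead of executing; replace with "$@" (as root) to apply

run ip -4 addr flush "$TARGET_IF"
run ip -4 addr flush "$INITIATOR_IF"
run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2     # target reachable from the root namespace
```

Putting the target NIC in its own namespace lets the initiator and target share one host while still traversing a real wire between the two ports, which is why the log pings 10.0.0.2 from the root namespace and 10.0.0.1 from inside `cvl_0_0_ns_spdk`.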
00:09:57.708 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.644 ms 00:09:57.708 00:09:57.708 --- 10.0.0.2 ping statistics --- 00:09:57.708 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:57.708 rtt min/avg/max/mdev = 0.644/0.644/0.644/0.000 ms 00:09:57.708 15:05:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:57.708 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:57.708 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.417 ms 00:09:57.708 00:09:57.708 --- 10.0.0.1 ping statistics --- 00:09:57.708 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:57.708 rtt min/avg/max/mdev = 0.417/0.417/0.417/0.000 ms 00:09:57.708 15:05:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:57.708 15:05:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:09:57.708 15:05:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:57.708 15:05:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:57.708 15:05:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:57.708 15:05:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:57.708 15:05:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:57.708 15:05:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:57.708 15:05:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:57.708 15:05:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:09:57.708 15:05:48 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:09:57.708 only one NIC for nvmf test 00:09:57.708 15:05:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:09:57.708 15:05:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:57.708 15:05:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:09:57.708 15:05:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:57.708 15:05:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:09:57.708 15:05:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:57.708 15:05:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:57.708 rmmod nvme_tcp 00:09:57.708 rmmod nvme_fabrics 00:09:57.708 rmmod nvme_keyring 00:09:57.708 15:05:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:57.708 15:05:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:09:57.708 15:05:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:09:57.708 15:05:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:09:57.708 15:05:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:57.708 15:05:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:57.708 15:05:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:57.708 15:05:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:57.708 15:05:49 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:57.708 15:05:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:57.708 15:05:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:57.708 15:05:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:59.095 15:05:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:59.095 15:05:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:09:59.095 15:05:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:09:59.095 15:05:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:59.095 15:05:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:09:59.095 15:05:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:59.095 15:05:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:09:59.095 15:05:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:59.095 15:05:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:59.095 15:05:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:59.095 15:05:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:09:59.095 15:05:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:09:59.095 15:05:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:09:59.095 15:05:51 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:59.095 15:05:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:59.095 15:05:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:59.095 15:05:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:59.095 15:05:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:59.095 15:05:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:59.095 15:05:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:59.095 15:05:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:59.095 15:05:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:59.095 00:09:59.095 real 0m9.532s 00:09:59.095 user 0m2.044s 00:09:59.095 sys 0m5.390s 00:09:59.095 15:05:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:59.095 15:05:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:59.095 ************************************ 00:09:59.095 END TEST nvmf_target_multipath 00:09:59.096 ************************************ 00:09:59.096 15:05:51 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:59.096 15:05:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:59.096 15:05:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:59.096 
15:05:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:59.096 ************************************ 00:09:59.096 START TEST nvmf_zcopy 00:09:59.096 ************************************ 00:09:59.096 15:05:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:59.096 * Looking for test storage... 00:09:59.358 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:59.358 15:05:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:59.358 15:05:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:09:59.358 15:05:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:59.358 15:05:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:59.358 15:05:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:59.358 15:05:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:59.358 15:05:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:59.358 15:05:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:59.358 15:05:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:59.358 15:05:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:59.358 15:05:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:59.358 15:05:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:59.358 15:05:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:59.358 15:05:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:59.358 15:05:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:59.358 15:05:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:59.358 15:05:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:59.358 15:05:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:59.358 15:05:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:59.358 15:05:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:59.358 15:05:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:59.358 15:05:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:59.358 15:05:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.358 15:05:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.358 15:05:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.358 15:05:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:09:59.358 15:05:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.358 15:05:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:09:59.358 15:05:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:59.358 15:05:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:59.358 15:05:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:59.358 15:05:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:59.358 15:05:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:59.358 15:05:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:59.358 15:05:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:59.358 15:05:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:59.358 15:05:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:09:59.358 15:05:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:59.358 15:05:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:59.358 15:05:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:59.358 15:05:51 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:59.358 15:05:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:59.358 15:05:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:59.358 15:05:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:59.358 15:05:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:59.358 15:05:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:59.358 15:05:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:59.358 15:05:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:09:59.358 15:05:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:07.521 15:05:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:07.522 15:05:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:10:07.522 15:05:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:07.522 15:05:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:07.522 15:05:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:07.522 15:05:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:07.522 15:05:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:07.522 15:05:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:10:07.522 15:05:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:07.522 15:05:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@296 -- # e810=() 00:10:07.522 15:05:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:10:07.522 15:05:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:10:07.522 15:05:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:10:07.522 15:05:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:10:07.522 15:05:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:10:07.522 15:05:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:07.522 15:05:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:07.522 15:05:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:07.522 15:05:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:07.522 15:05:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:07.522 15:05:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:07.522 15:05:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:07.522 15:05:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:07.522 15:05:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:07.522 15:05:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:07.522 15:05:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:07.522 15:05:58 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:07.522 15:05:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:07.522 15:05:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:07.522 15:05:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:07.522 15:05:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:07.522 15:05:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:07.522 15:05:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:07.522 15:05:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:07.522 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:07.522 15:05:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:07.522 15:05:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:07.522 15:05:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:07.522 15:05:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:07.522 15:05:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:07.522 15:05:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:07.522 15:05:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:07.522 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:07.522 15:05:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:07.522 15:05:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:07.522 15:05:58 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:07.522 15:05:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:07.522 15:05:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:07.522 15:05:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:07.522 15:05:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:07.522 15:05:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:07.522 15:05:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:07.522 15:05:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:07.522 15:05:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:07.522 15:05:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:07.522 15:05:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:07.522 15:05:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:07.522 15:05:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:07.522 15:05:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:07.522 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:07.522 15:05:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:07.522 15:05:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:07.522 15:05:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:07.522 
15:05:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:07.522 15:05:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:07.522 15:05:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:07.522 15:05:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:07.522 15:05:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:07.522 15:05:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:07.522 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:07.522 15:05:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:07.522 15:05:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:07.522 15:05:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:10:07.522 15:05:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:07.522 15:05:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:07.522 15:05:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:07.522 15:05:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:07.522 15:05:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:07.522 15:05:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:07.522 15:05:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:07.522 15:05:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:07.522 15:05:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
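The discovery loop traced above maps each PCI address to its kernel network interface by globbing sysfs and then stripping the path prefix. A minimal standalone sketch of that expansion (the sysfs path below is a stand-in, not read from a real device):

```shell
#!/usr/bin/env bash
# Sketch of the net-device discovery step in common.sh: expand the sysfs
# glob for a PCI address, then keep only the interface basename with the
# ${var##*/} parameter expansion (as done around common.sh@399 above).
set -u

# Simulated result of the glob /sys/bus/pci/devices/$pci/net/* for one port.
pci_net_devs=("/sys/bus/pci/devices/0000:4b:00.0/net/cvl_0_0")

# Strip everything up to and including the last slash from each element.
pci_net_devs=("${pci_net_devs[@]##*/}")

echo "Found net devices under 0000:4b:00.0: ${pci_net_devs[*]}"
```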
nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:07.522 15:05:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:07.522 15:05:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:07.522 15:05:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:07.522 15:05:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:07.522 15:05:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:07.522 15:05:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:07.522 15:05:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:07.522 15:05:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:07.522 15:05:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:07.522 15:05:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:07.522 15:05:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:07.522 15:05:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:07.522 15:05:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:07.522 15:05:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:07.522 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
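The `nvmf_tcp_init` sequence above moves one port (`cvl_0_0`) into a fresh network namespace to act as the target while the sibling port (`cvl_0_1`) stays in the root namespace as the initiator. A dry-run sketch of that plumbing, echoing the commands instead of executing them since the real ones need root and the physical NICs (names and addresses mirror the log):

```shell
#!/usr/bin/env bash
# Dry-run of the netns setup traced above. run() records and prints each
# command rather than executing it.
cmds=()
run() { cmds+=("$*"); echo "+ $*"; }

NETNS=cvl_0_0_ns_spdk
run ip -4 addr flush cvl_0_0
run ip -4 addr flush cvl_0_1
run ip netns add "$NETNS"
run ip link set cvl_0_0 netns "$NETNS"
run ip addr add 10.0.0.1/24 dev cvl_0_1
run ip netns exec "$NETNS" ip addr add 10.0.0.2/24 dev cvl_0_0
run ip link set cvl_0_1 up
run ip netns exec "$NETNS" ip link set cvl_0_0 up
run ip netns exec "$NETNS" ip link set lo up
# Allow NVMe/TCP traffic (port 4420) in from the initiator interface.
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
```

The two `ping -c 1` checks that follow in the log simply verify that this loopback path works in both directions before the target starts.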
00:10:07.522 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.509 ms 00:10:07.522 00:10:07.522 --- 10.0.0.2 ping statistics --- 00:10:07.522 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:07.522 rtt min/avg/max/mdev = 0.509/0.509/0.509/0.000 ms 00:10:07.522 15:05:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:07.522 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:07.522 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.263 ms 00:10:07.522 00:10:07.523 --- 10.0.0.1 ping statistics --- 00:10:07.523 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:07.523 rtt min/avg/max/mdev = 0.263/0.263/0.263/0.000 ms 00:10:07.523 15:05:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:07.523 15:05:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:10:07.523 15:05:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:07.523 15:05:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:07.523 15:05:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:07.523 15:05:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:07.523 15:05:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:07.523 15:05:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:07.523 15:05:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:07.523 15:05:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:10:07.523 15:05:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:07.523 15:05:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy 
-- common/autotest_common.sh@724 -- # xtrace_disable 00:10:07.523 15:05:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:07.523 15:05:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=115944 00:10:07.523 15:05:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 115944 00:10:07.523 15:05:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:07.523 15:05:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 115944 ']' 00:10:07.523 15:05:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:07.523 15:05:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:07.523 15:05:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:07.523 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:07.523 15:05:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:07.523 15:05:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:07.523 [2024-07-25 15:05:58.746148] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:10:07.523 [2024-07-25 15:05:58.746223] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:07.523 EAL: No free 2048 kB hugepages reported on node 1 00:10:07.523 [2024-07-25 15:05:58.834386] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:07.523 [2024-07-25 15:05:58.928576] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:07.523 [2024-07-25 15:05:58.928632] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:07.523 [2024-07-25 15:05:58.928645] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:07.523 [2024-07-25 15:05:58.928652] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:07.523 [2024-07-25 15:05:58.928658] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:07.523 [2024-07-25 15:05:58.928685] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:07.523 15:05:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:07.523 15:05:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:10:07.523 15:05:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:07.523 15:05:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:07.523 15:05:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:07.523 15:05:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:07.523 15:05:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:10:07.523 15:05:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:10:07.523 15:05:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.523 15:05:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:07.523 [2024-07-25 15:05:59.588720] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:07.523 15:05:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.523 15:05:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:07.523 15:05:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.523 15:05:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:07.523 15:05:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:10:07.523 15:05:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:07.523 15:05:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.523 15:05:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:07.523 [2024-07-25 15:05:59.604917] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:07.523 15:05:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.523 15:05:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:07.523 15:05:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.523 15:05:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:07.523 15:05:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.523 15:05:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:10:07.523 15:05:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.523 15:05:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:07.523 malloc0 00:10:07.523 15:05:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.523 15:05:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:10:07.523 15:05:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.523 15:05:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:07.523 15:05:59 
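The `rpc_cmd` calls traced above configure the running `nvmf_tgt` step by step: create the zero-copy TCP transport, create a subsystem, add data and discovery listeners, create a malloc bdev, and attach it as a namespace. A dry-run sketch of that sequence; the `rpc.py` prefix stands in for whatever RPC client wrapper the harness uses, and nothing is actually sent:

```shell
#!/usr/bin/env bash
# Dry-run of the RPC configuration sequence from zcopy.sh above.
rpcs=()
rpc() { rpcs+=("$*"); echo "rpc.py $*"; }

rpc nvmf_create_transport -t tcp -o -c 0 --zcopy
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
rpc bdev_malloc_create 32 4096 -b malloc0
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
```

Note `-m 10` caps the subsystem at 10 namespaces; the later zcopy test deliberately re-adds NSID 1, which is why the log fills with "Requested NSID 1 already in use" errors.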
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.523 15:05:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:10:07.523 15:05:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:10:07.523 15:05:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:10:07.523 15:05:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:10:07.523 15:05:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:07.523 15:05:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:07.523 { 00:10:07.523 "params": { 00:10:07.523 "name": "Nvme$subsystem", 00:10:07.523 "trtype": "$TEST_TRANSPORT", 00:10:07.523 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:07.523 "adrfam": "ipv4", 00:10:07.523 "trsvcid": "$NVMF_PORT", 00:10:07.523 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:07.523 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:07.523 "hdgst": ${hdgst:-false}, 00:10:07.523 "ddgst": ${ddgst:-false} 00:10:07.523 }, 00:10:07.523 "method": "bdev_nvme_attach_controller" 00:10:07.523 } 00:10:07.523 EOF 00:10:07.523 )") 00:10:07.523 15:05:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:10:07.523 15:05:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
00:10:07.523 15:05:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:10:07.523 15:05:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:07.523 "params": { 00:10:07.523 "name": "Nvme1", 00:10:07.523 "trtype": "tcp", 00:10:07.523 "traddr": "10.0.0.2", 00:10:07.523 "adrfam": "ipv4", 00:10:07.523 "trsvcid": "4420", 00:10:07.523 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:07.523 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:07.523 "hdgst": false, 00:10:07.523 "ddgst": false 00:10:07.523 }, 00:10:07.523 "method": "bdev_nvme_attach_controller" 00:10:07.523 }' 00:10:07.784 [2024-07-25 15:05:59.711685] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:10:07.784 [2024-07-25 15:05:59.711752] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116090 ] 00:10:07.784 EAL: No free 2048 kB hugepages reported on node 1 00:10:07.784 [2024-07-25 15:05:59.777300] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:07.784 [2024-07-25 15:05:59.850515] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:08.044 Running I/O for 10 seconds... 
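`gen_nvmf_target_json` above assembles a bdevperf JSON config from a heredoc, substituting the environment's transport, address, and port, and feeds it to bdevperf over `/dev/fd/62`. A self-contained sketch of that generation step (values mirror the final config printed in the log; the `sed` extraction at the end is only for illustration, the harness uses `jq`):

```shell
#!/usr/bin/env bash
# Build the bdev_nvme_attach_controller config the way the heredoc in
# common.sh does: shell variables are expanded inside the JSON template.
set -u

TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

config=$(cat <<EOF
{
  "params": {
    "name": "Nvme1",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode1",
    "hostnqn": "nqn.2016-06.io.spdk:host1",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)

# Pull the target address back out to show the substitution happened.
traddr=$(printf '%s\n' "$config" | sed -n 's/.*"traddr": "\([^"]*\)".*/\1/p')
echo "attach controller at $traddr"
```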
00:10:18.057 00:10:18.057 Latency(us) 00:10:18.057 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:18.057 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:10:18.057 Verification LBA range: start 0x0 length 0x1000 00:10:18.057 Nvme1n1 : 10.05 9285.80 72.55 0.00 0.00 13681.29 2839.89 44564.48 00:10:18.057 =================================================================================================================== 00:10:18.057 Total : 9285.80 72.55 0.00 0.00 13681.29 2839.89 44564.48 00:10:18.057 15:06:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=118442 00:10:18.057 15:06:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:10:18.057 15:06:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:18.057 15:06:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:10:18.057 15:06:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:10:18.057 15:06:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:10:18.057 15:06:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:10:18.057 15:06:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:18.057 15:06:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:18.057 { 00:10:18.057 "params": { 00:10:18.057 "name": "Nvme$subsystem", 00:10:18.057 "trtype": "$TEST_TRANSPORT", 00:10:18.057 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:18.057 "adrfam": "ipv4", 00:10:18.057 "trsvcid": "$NVMF_PORT", 00:10:18.057 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:18.057 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:18.057 "hdgst": 
${hdgst:-false}, 00:10:18.057 "ddgst": ${ddgst:-false} 00:10:18.057 }, 00:10:18.057 "method": "bdev_nvme_attach_controller" 00:10:18.057 } 00:10:18.057 EOF 00:10:18.057 )") 00:10:18.057 [2024-07-25 15:06:10.212840] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.057 [2024-07-25 15:06:10.212870] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.057 15:06:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:10:18.057 15:06:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:10:18.057 [2024-07-25 15:06:10.220827] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.057 [2024-07-25 15:06:10.220836] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.057 15:06:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:10:18.057 15:06:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:18.057 "params": { 00:10:18.057 "name": "Nvme1", 00:10:18.057 "trtype": "tcp", 00:10:18.057 "traddr": "10.0.0.2", 00:10:18.057 "adrfam": "ipv4", 00:10:18.057 "trsvcid": "4420", 00:10:18.057 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:18.057 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:18.057 "hdgst": false, 00:10:18.057 "ddgst": false 00:10:18.057 }, 00:10:18.057 "method": "bdev_nvme_attach_controller" 00:10:18.057 }' 00:10:18.057 [2024-07-25 15:06:10.228845] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.057 [2024-07-25 15:06:10.228853] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.057 [2024-07-25 15:06:10.236866] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.057 [2024-07-25 15:06:10.236874] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.057 [2024-07-25 15:06:10.244885] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.057 [2024-07-25 15:06:10.244893] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.349 [2024-07-25 15:06:10.252905] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.349 [2024-07-25 15:06:10.252913] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.349 [2024-07-25 15:06:10.255836] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:10:18.349 [2024-07-25 15:06:10.255881] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118442 ] 00:10:18.349 [2024-07-25 15:06:10.260925] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.349 [2024-07-25 15:06:10.260932] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.349 [2024-07-25 15:06:10.268946] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.349 [2024-07-25 15:06:10.268953] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.349 [2024-07-25 15:06:10.276967] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.349 [2024-07-25 15:06:10.276974] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.349 EAL: No free 2048 kB hugepages reported on node 1 00:10:18.349 [2024-07-25 15:06:10.284987] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.349 [2024-07-25 15:06:10.284994] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.349 [2024-07-25 15:06:10.293006] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:10:18.349 [2024-07-25 15:06:10.293014] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.349 [2024-07-25 15:06:10.301027] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.349 [2024-07-25 15:06:10.301034] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.349 [2024-07-25 15:06:10.309047] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.349 [2024-07-25 15:06:10.309055] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.349 [2024-07-25 15:06:10.313702] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:18.349 [2024-07-25 15:06:10.317068] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.349 [2024-07-25 15:06:10.317076] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.349 [2024-07-25 15:06:10.325088] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.349 [2024-07-25 15:06:10.325096] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.349 [2024-07-25 15:06:10.333107] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.349 [2024-07-25 15:06:10.333114] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.349 [2024-07-25 15:06:10.341129] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.349 [2024-07-25 15:06:10.341137] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.349 [2024-07-25 15:06:10.349149] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.349 [2024-07-25 15:06:10.349161] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.350 [2024-07-25 15:06:10.357169] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.350 [2024-07-25 15:06:10.357177] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.350 [2024-07-25 15:06:10.365189] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.350 [2024-07-25 15:06:10.365196] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.350 [2024-07-25 15:06:10.373213] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.350 [2024-07-25 15:06:10.373220] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.350 [2024-07-25 15:06:10.377921] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:18.350 [2024-07-25 15:06:10.381234] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.350 [2024-07-25 15:06:10.381241] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.350 [2024-07-25 15:06:10.389254] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.350 [2024-07-25 15:06:10.389263] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.350 [2024-07-25 15:06:10.397276] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.350 [2024-07-25 15:06:10.397289] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.350 [2024-07-25 15:06:10.405294] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.350 [2024-07-25 15:06:10.405303] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.350 [2024-07-25 15:06:10.413311] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.350 [2024-07-25 15:06:10.413320] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:10:18.350 [2024-07-25 15:06:10.421331] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.350 [2024-07-25 15:06:10.421339] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.350 [2024-07-25 15:06:10.429353] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.350 [2024-07-25 15:06:10.429361] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.350 [2024-07-25 15:06:10.437371] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.350 [2024-07-25 15:06:10.437379] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.350 [2024-07-25 15:06:10.445392] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.350 [2024-07-25 15:06:10.445400] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.350 [2024-07-25 15:06:10.453425] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.350 [2024-07-25 15:06:10.453438] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.350 [2024-07-25 15:06:10.461463] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.350 [2024-07-25 15:06:10.461472] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.350 [2024-07-25 15:06:10.469479] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.350 [2024-07-25 15:06:10.469488] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.350 [2024-07-25 15:06:10.477501] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.350 [2024-07-25 15:06:10.477511] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.350 [2024-07-25 15:06:10.485521] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.350 [2024-07-25 15:06:10.485529] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.350 [2024-07-25 15:06:10.493542] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.350 [2024-07-25 15:06:10.493550] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.350 [2024-07-25 15:06:10.501563] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.350 [2024-07-25 15:06:10.501570] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.350 [2024-07-25 15:06:10.509582] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.350 [2024-07-25 15:06:10.509590] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.350 [2024-07-25 15:06:10.517609] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.350 [2024-07-25 15:06:10.517617] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.350 [2024-07-25 15:06:10.525625] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.350 [2024-07-25 15:06:10.525633] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.611 [2024-07-25 15:06:10.533645] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.611 [2024-07-25 15:06:10.533655] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.611 [2024-07-25 15:06:10.541665] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.611 [2024-07-25 15:06:10.541671] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.611 [2024-07-25 15:06:10.549687] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:18.611 [2024-07-25 15:06:10.549694] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.611 [2024-07-25 15:06:10.557709] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.611 [2024-07-25 15:06:10.557716] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.611 [2024-07-25 15:06:10.565731] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.611 [2024-07-25 15:06:10.565738] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.611 [2024-07-25 15:06:10.573752] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.611 [2024-07-25 15:06:10.573760] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.611 [2024-07-25 15:06:10.581773] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.611 [2024-07-25 15:06:10.581781] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.611 [2024-07-25 15:06:10.589794] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.611 [2024-07-25 15:06:10.589801] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.612 [2024-07-25 15:06:10.597815] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.612 [2024-07-25 15:06:10.597822] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.612 [2024-07-25 15:06:10.605836] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.612 [2024-07-25 15:06:10.605843] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.612 [2024-07-25 15:06:10.613855] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.612 
[2024-07-25 15:06:10.613862] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.612 [2024-07-25 15:06:10.621877] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.612 [2024-07-25 15:06:10.621885] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.612 [2024-07-25 15:06:10.629898] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.612 [2024-07-25 15:06:10.629906] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.612 [2024-07-25 15:06:10.637920] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.612 [2024-07-25 15:06:10.637926] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.612 [2024-07-25 15:06:10.645941] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.612 [2024-07-25 15:06:10.645948] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.612 [2024-07-25 15:06:10.653960] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.612 [2024-07-25 15:06:10.653967] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.612 [2024-07-25 15:06:10.661981] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.612 [2024-07-25 15:06:10.661989] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.612 [2024-07-25 15:06:10.670002] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.612 [2024-07-25 15:06:10.670009] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.612 [2024-07-25 15:06:10.678046] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.612 [2024-07-25 15:06:10.678059] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.612 Running I/O for 5 seconds... 00:10:18.612 [2024-07-25 15:06:10.686046] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.612 [2024-07-25 15:06:10.686054] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.612 [2024-07-25 15:06:10.705126] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.612 [2024-07-25 15:06:10.705142] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.612 [2024-07-25 15:06:10.716028] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.612 [2024-07-25 15:06:10.716045] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.612 [2024-07-25 15:06:10.724898] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.612 [2024-07-25 15:06:10.724914] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.612 [2024-07-25 15:06:10.733433] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.612 [2024-07-25 15:06:10.733447] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.612 [2024-07-25 15:06:10.742022] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.612 [2024-07-25 15:06:10.742036] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.612 [2024-07-25 15:06:10.750875] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.612 [2024-07-25 15:06:10.750890] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.612 [2024-07-25 15:06:10.759878] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.612 [2024-07-25 15:06:10.759892] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.612 [... the two-line error pair (subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use, followed by nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace) repeats with fresh timestamps roughly every 8-10 ms from 15:06:10.768 through 15:06:11.964; the intervening repeats are elided and only the final occurrence is kept ...] [2024-07-25 15:06:11.973484] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.922 [2024-07-25 15:06:11.973498] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.922 [2024-07-25 15:06:11.981847] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.922 [2024-07-25 15:06:11.981862] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.922 [2024-07-25 15:06:11.991054] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.922 [2024-07-25 15:06:11.991069] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.922 [2024-07-25 15:06:11.999715] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.922 [2024-07-25 15:06:11.999730] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.922 [2024-07-25 15:06:12.008406] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.922 [2024-07-25 15:06:12.008420] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.922 [2024-07-25 15:06:12.017512] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.922 [2024-07-25 15:06:12.017527] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.922 [2024-07-25 15:06:12.026093] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.922 [2024-07-25 15:06:12.026107] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.922 [2024-07-25 15:06:12.035256] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.922 [2024-07-25 15:06:12.035270] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.922 [2024-07-25 15:06:12.044348] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.922 [2024-07-25 15:06:12.044361] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.922 [2024-07-25 15:06:12.052920] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:19.922 [2024-07-25 15:06:12.052934] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.922 [2024-07-25 15:06:12.061585] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.922 [2024-07-25 15:06:12.061599] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.922 [2024-07-25 15:06:12.070434] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.922 [2024-07-25 15:06:12.070448] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.922 [2024-07-25 15:06:12.079560] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.922 [2024-07-25 15:06:12.079575] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.922 [2024-07-25 15:06:12.088134] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.922 [2024-07-25 15:06:12.088148] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.922 [2024-07-25 15:06:12.096640] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.922 [2024-07-25 15:06:12.096657] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.922 [2024-07-25 15:06:12.105037] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.922 [2024-07-25 15:06:12.105052] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.183 [2024-07-25 15:06:12.113710] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.183 [2024-07-25 15:06:12.113724] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.183 [2024-07-25 15:06:12.122011] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.183 
[2024-07-25 15:06:12.122025] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.183 [2024-07-25 15:06:12.130811] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.183 [2024-07-25 15:06:12.130826] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.183 [2024-07-25 15:06:12.139220] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.183 [2024-07-25 15:06:12.139234] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.183 [2024-07-25 15:06:12.147764] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.183 [2024-07-25 15:06:12.147778] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.183 [2024-07-25 15:06:12.156534] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.183 [2024-07-25 15:06:12.156549] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.183 [2024-07-25 15:06:12.164800] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.183 [2024-07-25 15:06:12.164815] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.183 [2024-07-25 15:06:12.173436] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.183 [2024-07-25 15:06:12.173449] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.183 [2024-07-25 15:06:12.181933] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.183 [2024-07-25 15:06:12.181947] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.183 [2024-07-25 15:06:12.191189] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.183 [2024-07-25 15:06:12.191209] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.184 [2024-07-25 15:06:12.199781] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.184 [2024-07-25 15:06:12.199795] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.184 [2024-07-25 15:06:12.208675] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.184 [2024-07-25 15:06:12.208689] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.184 [2024-07-25 15:06:12.217482] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.184 [2024-07-25 15:06:12.217496] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.184 [2024-07-25 15:06:12.226216] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.184 [2024-07-25 15:06:12.226229] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.184 [2024-07-25 15:06:12.235439] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.184 [2024-07-25 15:06:12.235453] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.184 [2024-07-25 15:06:12.243167] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.184 [2024-07-25 15:06:12.243181] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.184 [2024-07-25 15:06:12.252414] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.184 [2024-07-25 15:06:12.252428] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.184 [2024-07-25 15:06:12.261199] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.184 [2024-07-25 15:06:12.261221] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:20.184 [2024-07-25 15:06:12.270257] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.184 [2024-07-25 15:06:12.270271] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.184 [2024-07-25 15:06:12.279085] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.184 [2024-07-25 15:06:12.279099] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.184 [2024-07-25 15:06:12.287986] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.184 [2024-07-25 15:06:12.288000] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.184 [2024-07-25 15:06:12.296529] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.184 [2024-07-25 15:06:12.296543] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.184 [2024-07-25 15:06:12.305224] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.184 [2024-07-25 15:06:12.305238] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.184 [2024-07-25 15:06:12.314500] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.184 [2024-07-25 15:06:12.314514] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.184 [2024-07-25 15:06:12.323118] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.184 [2024-07-25 15:06:12.323132] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.184 [2024-07-25 15:06:12.332155] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.184 [2024-07-25 15:06:12.332170] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.184 [2024-07-25 15:06:12.340672] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.184 [2024-07-25 15:06:12.340687] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.184 [2024-07-25 15:06:12.349152] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.184 [2024-07-25 15:06:12.349166] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.184 [2024-07-25 15:06:12.357921] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.184 [2024-07-25 15:06:12.357936] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.184 [2024-07-25 15:06:12.366800] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.184 [2024-07-25 15:06:12.366815] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.446 [2024-07-25 15:06:12.375450] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.446 [2024-07-25 15:06:12.375465] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.446 [2024-07-25 15:06:12.383841] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.446 [2024-07-25 15:06:12.383855] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.446 [2024-07-25 15:06:12.392752] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.446 [2024-07-25 15:06:12.392766] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.446 [2024-07-25 15:06:12.401729] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.446 [2024-07-25 15:06:12.401744] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.446 [2024-07-25 15:06:12.410907] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:20.446 [2024-07-25 15:06:12.410921] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.446 [2024-07-25 15:06:12.420087] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.446 [2024-07-25 15:06:12.420101] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.446 [2024-07-25 15:06:12.428341] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.446 [2024-07-25 15:06:12.428359] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.446 [2024-07-25 15:06:12.437044] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.446 [2024-07-25 15:06:12.437059] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.446 [2024-07-25 15:06:12.445975] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.446 [2024-07-25 15:06:12.445989] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.446 [2024-07-25 15:06:12.455214] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.446 [2024-07-25 15:06:12.455228] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.446 [2024-07-25 15:06:12.463767] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.446 [2024-07-25 15:06:12.463781] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.446 [2024-07-25 15:06:12.472671] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.446 [2024-07-25 15:06:12.472686] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.446 [2024-07-25 15:06:12.481827] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.446 
[2024-07-25 15:06:12.481842] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.446 [2024-07-25 15:06:12.490826] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.446 [2024-07-25 15:06:12.490840] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.446 [2024-07-25 15:06:12.499771] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.446 [2024-07-25 15:06:12.499785] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.446 [2024-07-25 15:06:12.508502] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.446 [2024-07-25 15:06:12.508516] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.446 [2024-07-25 15:06:12.517088] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.446 [2024-07-25 15:06:12.517102] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.446 [2024-07-25 15:06:12.525659] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.446 [2024-07-25 15:06:12.525674] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.446 [2024-07-25 15:06:12.534552] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.446 [2024-07-25 15:06:12.534566] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.446 [2024-07-25 15:06:12.542831] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.446 [2024-07-25 15:06:12.542846] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.446 [2024-07-25 15:06:12.551435] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.446 [2024-07-25 15:06:12.551450] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.446 [2024-07-25 15:06:12.559888] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.446 [2024-07-25 15:06:12.559904] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.446 [2024-07-25 15:06:12.568347] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.446 [2024-07-25 15:06:12.568361] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.446 [2024-07-25 15:06:12.577129] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.446 [2024-07-25 15:06:12.577143] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.446 [2024-07-25 15:06:12.585679] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.446 [2024-07-25 15:06:12.585693] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.446 [2024-07-25 15:06:12.594041] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.446 [2024-07-25 15:06:12.594059] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.446 [2024-07-25 15:06:12.602214] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.446 [2024-07-25 15:06:12.602228] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.446 [2024-07-25 15:06:12.610785] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.446 [2024-07-25 15:06:12.610799] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.446 [2024-07-25 15:06:12.619840] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.446 [2024-07-25 15:06:12.619854] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:20.446 [2024-07-25 15:06:12.627793] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.446 [2024-07-25 15:06:12.627807] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.707 [2024-07-25 15:06:12.637140] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.708 [2024-07-25 15:06:12.637154] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.708 [2024-07-25 15:06:12.645734] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.708 [2024-07-25 15:06:12.645749] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.708 [2024-07-25 15:06:12.654448] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.708 [2024-07-25 15:06:12.654463] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.708 [2024-07-25 15:06:12.663168] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.708 [2024-07-25 15:06:12.663182] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.708 [2024-07-25 15:06:12.672187] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.708 [2024-07-25 15:06:12.672205] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.708 [2024-07-25 15:06:12.681346] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.708 [2024-07-25 15:06:12.681360] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.708 [2024-07-25 15:06:12.690133] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.708 [2024-07-25 15:06:12.690146] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.708 [2024-07-25 15:06:12.699002] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.708 [2024-07-25 15:06:12.699016] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.708 [2024-07-25 15:06:12.707529] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.708 [2024-07-25 15:06:12.707543] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.708 [2024-07-25 15:06:12.716729] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.708 [2024-07-25 15:06:12.716744] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.708 [2024-07-25 15:06:12.725417] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.708 [2024-07-25 15:06:12.725431] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.708 [2024-07-25 15:06:12.734105] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.708 [2024-07-25 15:06:12.734120] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.708 [2024-07-25 15:06:12.742609] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.708 [2024-07-25 15:06:12.742623] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.708 [2024-07-25 15:06:12.751186] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.708 [2024-07-25 15:06:12.751204] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.708 [2024-07-25 15:06:12.759874] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.708 [2024-07-25 15:06:12.759889] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.708 [2024-07-25 15:06:12.768370] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:20.708 [2024-07-25 15:06:12.768384] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.708 [2024-07-25 15:06:12.777152] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.708 [2024-07-25 15:06:12.777166] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.708 [2024-07-25 15:06:12.786245] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.708 [2024-07-25 15:06:12.786259] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.708 [2024-07-25 15:06:12.795453] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.708 [2024-07-25 15:06:12.795468] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.708 [2024-07-25 15:06:12.804124] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.708 [2024-07-25 15:06:12.804138] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.708 [2024-07-25 15:06:12.812326] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.708 [2024-07-25 15:06:12.812339] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.708 [2024-07-25 15:06:12.820776] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.708 [2024-07-25 15:06:12.820790] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.708 [2024-07-25 15:06:12.829031] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.708 [2024-07-25 15:06:12.829045] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.708 [2024-07-25 15:06:12.837258] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.708 
[2024-07-25 15:06:12.837273] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.708 [2024-07-25 15:06:12.846205] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.708 [2024-07-25 15:06:12.846220] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.708 [2024-07-25 15:06:12.855488] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.708 [2024-07-25 15:06:12.855502] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.708 [2024-07-25 15:06:12.864362] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.708 [2024-07-25 15:06:12.864378] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.708 [2024-07-25 15:06:12.873014] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.708 [2024-07-25 15:06:12.873029] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.708 [2024-07-25 15:06:12.881497] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.708 [2024-07-25 15:06:12.881512] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.708 [2024-07-25 15:06:12.890579] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.708 [2024-07-25 15:06:12.890593] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.969 [2024-07-25 15:06:12.898904] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.969 [2024-07-25 15:06:12.898919] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.969 [2024-07-25 15:06:12.907777] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.969 [2024-07-25 15:06:12.907792] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.969 [2024-07-25 15:06:12.916806] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.969 [2024-07-25 15:06:12.916820] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.969 [2024-07-25 15:06:12.925254] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.969 [2024-07-25 15:06:12.925269] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.969 [2024-07-25 15:06:12.933616] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.969 [2024-07-25 15:06:12.933631] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.969 [2024-07-25 15:06:12.942179] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.969 [2024-07-25 15:06:12.942193] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.969 [2024-07-25 15:06:12.950506] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.969 [2024-07-25 15:06:12.950521] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.969 [2024-07-25 15:06:12.959359] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.969 [2024-07-25 15:06:12.959381] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.969 [2024-07-25 15:06:12.968435] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.969 [2024-07-25 15:06:12.968450] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.969 [2024-07-25 15:06:12.976351] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.969 [2024-07-25 15:06:12.976365] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:20.969 [2024-07-25 15:06:12.985220] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.969 [2024-07-25 15:06:12.985235] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-line error pair repeats continuously (roughly every 9 ms, timestamps advancing) from 15:06:12.994 through 15:06:14.402 ...]
00:10:22.281 [2024-07-25 15:06:14.411211] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.281 [2024-07-25 15:06:14.411226] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to
add namespace 00:10:22.281 [2024-07-25 15:06:14.420019] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.281 [2024-07-25 15:06:14.420034] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.281 [2024-07-25 15:06:14.428482] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.281 [2024-07-25 15:06:14.428496] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.281 [2024-07-25 15:06:14.436947] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.281 [2024-07-25 15:06:14.436961] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.281 [2024-07-25 15:06:14.446029] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.281 [2024-07-25 15:06:14.446043] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.281 [2024-07-25 15:06:14.454795] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.281 [2024-07-25 15:06:14.454809] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.281 [2024-07-25 15:06:14.463128] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.281 [2024-07-25 15:06:14.463142] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.542 [2024-07-25 15:06:14.471471] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.542 [2024-07-25 15:06:14.471486] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.542 [2024-07-25 15:06:14.480070] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.542 [2024-07-25 15:06:14.480084] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.542 [2024-07-25 15:06:14.488640] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.542 [2024-07-25 15:06:14.488654] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.542 [2024-07-25 15:06:14.497272] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.542 [2024-07-25 15:06:14.497286] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.542 [2024-07-25 15:06:14.506405] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.542 [2024-07-25 15:06:14.506419] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.542 [2024-07-25 15:06:14.515156] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.542 [2024-07-25 15:06:14.515171] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.542 [2024-07-25 15:06:14.523931] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.542 [2024-07-25 15:06:14.523945] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.542 [2024-07-25 15:06:14.532582] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.542 [2024-07-25 15:06:14.532597] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.542 [2024-07-25 15:06:14.541233] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.542 [2024-07-25 15:06:14.541248] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.542 [2024-07-25 15:06:14.549999] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.542 [2024-07-25 15:06:14.550013] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.542 [2024-07-25 15:06:14.558888] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:22.542 [2024-07-25 15:06:14.558903] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.542 [2024-07-25 15:06:14.568046] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.542 [2024-07-25 15:06:14.568064] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.542 [2024-07-25 15:06:14.576476] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.542 [2024-07-25 15:06:14.576491] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.542 [2024-07-25 15:06:14.585158] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.542 [2024-07-25 15:06:14.585173] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.542 [2024-07-25 15:06:14.593307] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.542 [2024-07-25 15:06:14.593321] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.542 [2024-07-25 15:06:14.602317] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.542 [2024-07-25 15:06:14.602331] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.542 [2024-07-25 15:06:14.610648] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.542 [2024-07-25 15:06:14.610663] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.542 [2024-07-25 15:06:14.619199] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.542 [2024-07-25 15:06:14.619217] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.542 [2024-07-25 15:06:14.628180] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.542 
[2024-07-25 15:06:14.628195] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.542 [2024-07-25 15:06:14.636906] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.542 [2024-07-25 15:06:14.636921] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.542 [2024-07-25 15:06:14.645554] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.542 [2024-07-25 15:06:14.645568] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.542 [2024-07-25 15:06:14.654137] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.542 [2024-07-25 15:06:14.654151] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.542 [2024-07-25 15:06:14.663043] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.542 [2024-07-25 15:06:14.663058] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.542 [2024-07-25 15:06:14.671441] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.542 [2024-07-25 15:06:14.671456] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.542 [2024-07-25 15:06:14.680589] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.542 [2024-07-25 15:06:14.680604] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.542 [2024-07-25 15:06:14.689005] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.542 [2024-07-25 15:06:14.689019] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.542 [2024-07-25 15:06:14.697585] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.542 [2024-07-25 15:06:14.697600] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.542 [2024-07-25 15:06:14.705773] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.542 [2024-07-25 15:06:14.705786] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.542 [2024-07-25 15:06:14.714621] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.542 [2024-07-25 15:06:14.714635] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.542 [2024-07-25 15:06:14.723306] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.542 [2024-07-25 15:06:14.723320] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.542 [2024-07-25 15:06:14.731799] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.542 [2024-07-25 15:06:14.731817] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.804 [2024-07-25 15:06:14.741056] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.804 [2024-07-25 15:06:14.741071] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.804 [2024-07-25 15:06:14.749113] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.804 [2024-07-25 15:06:14.749128] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.804 [2024-07-25 15:06:14.758283] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.804 [2024-07-25 15:06:14.758298] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.804 [2024-07-25 15:06:14.767018] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.804 [2024-07-25 15:06:14.767033] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:22.804 [2024-07-25 15:06:14.776336] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.804 [2024-07-25 15:06:14.776351] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.804 [2024-07-25 15:06:14.785150] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.804 [2024-07-25 15:06:14.785165] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.804 [2024-07-25 15:06:14.793411] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.804 [2024-07-25 15:06:14.793426] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.804 [2024-07-25 15:06:14.801985] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.804 [2024-07-25 15:06:14.802000] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.804 [2024-07-25 15:06:14.810674] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.804 [2024-07-25 15:06:14.810689] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.804 [2024-07-25 15:06:14.819085] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.804 [2024-07-25 15:06:14.819099] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.804 [2024-07-25 15:06:14.827314] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.804 [2024-07-25 15:06:14.827329] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.804 [2024-07-25 15:06:14.835871] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.804 [2024-07-25 15:06:14.835886] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.804 [2024-07-25 15:06:14.844263] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.804 [2024-07-25 15:06:14.844278] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.804 [2024-07-25 15:06:14.853333] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.804 [2024-07-25 15:06:14.853347] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.804 [2024-07-25 15:06:14.862579] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.804 [2024-07-25 15:06:14.862594] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.804 [2024-07-25 15:06:14.871341] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.804 [2024-07-25 15:06:14.871355] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.804 [2024-07-25 15:06:14.880593] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.804 [2024-07-25 15:06:14.880607] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.804 [2024-07-25 15:06:14.889128] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.804 [2024-07-25 15:06:14.889143] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.804 [2024-07-25 15:06:14.897644] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.804 [2024-07-25 15:06:14.897662] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.804 [2024-07-25 15:06:14.906227] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.804 [2024-07-25 15:06:14.906241] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.804 [2024-07-25 15:06:14.914822] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:22.804 [2024-07-25 15:06:14.914836] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.804 [2024-07-25 15:06:14.923499] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.804 [2024-07-25 15:06:14.923514] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.804 [2024-07-25 15:06:14.931819] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.804 [2024-07-25 15:06:14.931834] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.804 [2024-07-25 15:06:14.940826] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.804 [2024-07-25 15:06:14.940840] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.804 [2024-07-25 15:06:14.949819] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.804 [2024-07-25 15:06:14.949834] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.804 [2024-07-25 15:06:14.958315] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.804 [2024-07-25 15:06:14.958329] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.804 [2024-07-25 15:06:14.966450] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.804 [2024-07-25 15:06:14.966464] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.804 [2024-07-25 15:06:14.975406] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.804 [2024-07-25 15:06:14.975420] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.804 [2024-07-25 15:06:14.984125] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.804 
[2024-07-25 15:06:14.984140] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.804 [2024-07-25 15:06:14.992991] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.804 [2024-07-25 15:06:14.993005] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.066 [2024-07-25 15:06:15.002233] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.066 [2024-07-25 15:06:15.002248] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.066 [2024-07-25 15:06:15.010520] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.066 [2024-07-25 15:06:15.010534] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.066 [2024-07-25 15:06:15.019452] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.066 [2024-07-25 15:06:15.019467] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.066 [2024-07-25 15:06:15.028243] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.066 [2024-07-25 15:06:15.028257] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.066 [2024-07-25 15:06:15.037058] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.066 [2024-07-25 15:06:15.037073] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.066 [2024-07-25 15:06:15.044928] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.066 [2024-07-25 15:06:15.044942] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.066 [2024-07-25 15:06:15.053828] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.066 [2024-07-25 15:06:15.053842] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.066 [2024-07-25 15:06:15.062363] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.066 [2024-07-25 15:06:15.062381] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.066 [2024-07-25 15:06:15.071075] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.066 [2024-07-25 15:06:15.071090] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.066 [2024-07-25 15:06:15.079606] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.066 [2024-07-25 15:06:15.079621] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.066 [2024-07-25 15:06:15.088421] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.066 [2024-07-25 15:06:15.088436] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.066 [2024-07-25 15:06:15.097632] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.066 [2024-07-25 15:06:15.097646] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.066 [2024-07-25 15:06:15.106672] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.066 [2024-07-25 15:06:15.106686] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.066 [2024-07-25 15:06:15.114484] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.066 [2024-07-25 15:06:15.114498] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.066 [2024-07-25 15:06:15.123660] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.066 [2024-07-25 15:06:15.123674] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:23.066 [2024-07-25 15:06:15.132773] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.066 [2024-07-25 15:06:15.132787] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.066 [2024-07-25 15:06:15.141402] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.066 [2024-07-25 15:06:15.141417] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.066 [2024-07-25 15:06:15.149956] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.066 [2024-07-25 15:06:15.149970] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.066 [2024-07-25 15:06:15.158502] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.066 [2024-07-25 15:06:15.158517] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.066 [2024-07-25 15:06:15.167691] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.066 [2024-07-25 15:06:15.167706] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.066 [2024-07-25 15:06:15.176182] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.066 [2024-07-25 15:06:15.176196] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.066 [2024-07-25 15:06:15.185165] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.066 [2024-07-25 15:06:15.185179] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.066 [2024-07-25 15:06:15.193831] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.066 [2024-07-25 15:06:15.193845] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.066 [2024-07-25 15:06:15.202297] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.066 [2024-07-25 15:06:15.202312] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.066 [2024-07-25 15:06:15.210223] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.066 [2024-07-25 15:06:15.210237] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.066 [2024-07-25 15:06:15.219380] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.066 [2024-07-25 15:06:15.219394] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.067 [2024-07-25 15:06:15.228002] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.067 [2024-07-25 15:06:15.228019] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.067 [2024-07-25 15:06:15.236872] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.067 [2024-07-25 15:06:15.236887] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.067 [2024-07-25 15:06:15.245650] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.067 [2024-07-25 15:06:15.245665] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.067 [2024-07-25 15:06:15.254219] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.067 [2024-07-25 15:06:15.254234] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.328 [2024-07-25 15:06:15.263001] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.328 [2024-07-25 15:06:15.263016] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.328 [2024-07-25 15:06:15.270912] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:23.328 [2024-07-25 15:06:15.270926] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.328 [2024-07-25 15:06:15.280157] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.328 [2024-07-25 15:06:15.280171] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.328 [2024-07-25 15:06:15.288733] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.328 [2024-07-25 15:06:15.288747] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.328 [2024-07-25 15:06:15.297506] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.328 [2024-07-25 15:06:15.297521] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.328 [2024-07-25 15:06:15.305821] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.328 [2024-07-25 15:06:15.305836] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.328 [2024-07-25 15:06:15.314520] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.328 [2024-07-25 15:06:15.314534] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.328 [2024-07-25 15:06:15.323326] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.328 [2024-07-25 15:06:15.323341] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.328 [2024-07-25 15:06:15.332350] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.328 [2024-07-25 15:06:15.332365] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.328 [2024-07-25 15:06:15.341139] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.328 
[2024-07-25 15:06:15.341155] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.328 [2024-07-25 15:06:15.350475] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.328 [2024-07-25 15:06:15.350489] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.328 [2024-07-25 15:06:15.358877] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.328 [2024-07-25 15:06:15.358892] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.328 [2024-07-25 15:06:15.367605] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.328 [2024-07-25 15:06:15.367620] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.328 [2024-07-25 15:06:15.376654] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.328 [2024-07-25 15:06:15.376668] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.328 [2024-07-25 15:06:15.385856] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.328 [2024-07-25 15:06:15.385871] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.328 [2024-07-25 15:06:15.394367] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.328 [2024-07-25 15:06:15.394382] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.328 [2024-07-25 15:06:15.403524] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.328 [2024-07-25 15:06:15.403539] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.328 [2024-07-25 15:06:15.412037] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.328 [2024-07-25 15:06:15.412052] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.328 [2024-07-25 15:06:15.420589] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.328 [2024-07-25 15:06:15.420604] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.328 [2024-07-25 15:06:15.429134] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.328 [2024-07-25 15:06:15.429150] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.328 [2024-07-25 15:06:15.438032] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.328 [2024-07-25 15:06:15.438048] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.328 [2024-07-25 15:06:15.446796] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.328 [2024-07-25 15:06:15.446811] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.328 [2024-07-25 15:06:15.455449] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.328 [2024-07-25 15:06:15.455464] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.328 [2024-07-25 15:06:15.463919] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.328 [2024-07-25 15:06:15.463934] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.328 [2024-07-25 15:06:15.472817] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.328 [2024-07-25 15:06:15.472832] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.328 [2024-07-25 15:06:15.481750] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.328 [2024-07-25 15:06:15.481764] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:23.328 [2024-07-25 15:06:15.490098] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.328 [2024-07-25 15:06:15.490113] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.328 [2024-07-25 15:06:15.499281] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.328 [2024-07-25 15:06:15.499296] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.328 [2024-07-25 15:06:15.507989] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.328 [2024-07-25 15:06:15.508005] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.328 [2024-07-25 15:06:15.517167] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.328 [2024-07-25 15:06:15.517182] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.589 [2024-07-25 15:06:15.525983] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.589 [2024-07-25 15:06:15.525999] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.589 [2024-07-25 15:06:15.534368] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.589 [2024-07-25 15:06:15.534383] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.589 [2024-07-25 15:06:15.542738] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.590 [2024-07-25 15:06:15.542752] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.590 [2024-07-25 15:06:15.551504] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.590 [2024-07-25 15:06:15.551519] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.590 [2024-07-25 15:06:15.559530] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.590 [2024-07-25 15:06:15.559545] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.590 [2024-07-25 15:06:15.568327] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.590 [2024-07-25 15:06:15.568342] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.590 [2024-07-25 15:06:15.577101] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.590 [2024-07-25 15:06:15.577116] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.590 [2024-07-25 15:06:15.585984] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.590 [2024-07-25 15:06:15.585999] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.590 [2024-07-25 15:06:15.594614] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.590 [2024-07-25 15:06:15.594629] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.590 [2024-07-25 15:06:15.603871] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.590 [2024-07-25 15:06:15.603885] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.590 [2024-07-25 15:06:15.612392] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.590 [2024-07-25 15:06:15.612406] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.590 [2024-07-25 15:06:15.620991] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.590 [2024-07-25 15:06:15.621006] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.590 [2024-07-25 15:06:15.629053] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:23.590 [2024-07-25 15:06:15.629068] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.590 [2024-07-25 15:06:15.637859] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.590 [2024-07-25 15:06:15.637874] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.590 [2024-07-25 15:06:15.646453] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.590 [2024-07-25 15:06:15.646468] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.590 [2024-07-25 15:06:15.654750] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.590 [2024-07-25 15:06:15.654765] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.590 [2024-07-25 15:06:15.663903] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.590 [2024-07-25 15:06:15.663917] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.590 [2024-07-25 15:06:15.672463] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.590 [2024-07-25 15:06:15.672478] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.590 [2024-07-25 15:06:15.681155] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.590 [2024-07-25 15:06:15.681169] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.590 [2024-07-25 15:06:15.689448] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.590 [2024-07-25 15:06:15.689462] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.590 [2024-07-25 15:06:15.698331] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.590 
[2024-07-25 15:06:15.698346] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.590 [2024-07-25 15:06:15.704048] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.590 [2024-07-25 15:06:15.704062] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.590 00:10:23.590 Latency(us) 00:10:23.590 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:23.590 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:10:23.590 Nvme1n1 : 5.01 18997.63 148.42 0.00 0.00 6731.20 2416.64 26651.31 00:10:23.590 =================================================================================================================== 00:10:23.590 Total : 18997.63 148.42 0.00 0.00 6731.20 2416.64 26651.31 00:10:23.590 [2024-07-25 15:06:15.712063] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.590 [2024-07-25 15:06:15.712075] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.590 [2024-07-25 15:06:15.720083] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.590 [2024-07-25 15:06:15.720094] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.590 [2024-07-25 15:06:15.728108] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.590 [2024-07-25 15:06:15.728117] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.590 [2024-07-25 15:06:15.736127] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.590 [2024-07-25 15:06:15.736138] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.590 [2024-07-25 15:06:15.744146] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.590 [2024-07-25 15:06:15.744156] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.590 [2024-07-25 15:06:15.752165] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.590 [2024-07-25 15:06:15.752176] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.590 [2024-07-25 15:06:15.760184] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.590 [2024-07-25 15:06:15.760193] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.590 [2024-07-25 15:06:15.768207] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.590 [2024-07-25 15:06:15.768216] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.590 [2024-07-25 15:06:15.776228] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.590 [2024-07-25 15:06:15.776235] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.851 [2024-07-25 15:06:15.784247] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.851 [2024-07-25 15:06:15.784256] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.851 [2024-07-25 15:06:15.792267] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.851 [2024-07-25 15:06:15.792277] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.852 [2024-07-25 15:06:15.800287] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.852 [2024-07-25 15:06:15.800297] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.852 [2024-07-25 15:06:15.808308] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.852 [2024-07-25 15:06:15.808317] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:23.852 [2024-07-25 15:06:15.816329] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.852 [2024-07-25 15:06:15.816339] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.852 [2024-07-25 15:06:15.824348] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.852 [2024-07-25 15:06:15.824357] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.852 [2024-07-25 15:06:15.832367] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.852 [2024-07-25 15:06:15.832376] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.852 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (118442) - No such process 00:10:23.852 15:06:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 118442 00:10:23.852 15:06:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:23.852 15:06:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.852 15:06:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:23.852 15:06:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.852 15:06:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:23.852 15:06:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.852 15:06:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:23.852 delay0 00:10:23.852 15:06:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.852 15:06:15 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:10:23.852 15:06:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.852 15:06:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:23.852 15:06:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.852 15:06:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:10:23.852 EAL: No free 2048 kB hugepages reported on node 1 00:10:23.852 [2024-07-25 15:06:16.005444] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:10:30.440 Initializing NVMe Controllers 00:10:30.440 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:30.440 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:30.440 Initialization complete. Launching workers. 
00:10:30.440 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 105 00:10:30.440 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 392, failed to submit 33 00:10:30.440 success 175, unsuccess 217, failed 0 00:10:30.440 15:06:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:10:30.440 15:06:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:10:30.440 15:06:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:30.440 15:06:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:10:30.440 15:06:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:30.440 15:06:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:10:30.440 15:06:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:30.440 15:06:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:30.440 rmmod nvme_tcp 00:10:30.440 rmmod nvme_fabrics 00:10:30.440 rmmod nvme_keyring 00:10:30.440 15:06:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:30.440 15:06:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:10:30.440 15:06:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:10:30.440 15:06:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 115944 ']' 00:10:30.440 15:06:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 115944 00:10:30.440 15:06:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 115944 ']' 00:10:30.440 15:06:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 115944 00:10:30.440 15:06:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@955 -- # uname 00:10:30.440 15:06:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:30.440 15:06:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 115944 00:10:30.441 15:06:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:10:30.441 15:06:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:10:30.441 15:06:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 115944' 00:10:30.441 killing process with pid 115944 00:10:30.441 15:06:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 115944 00:10:30.441 15:06:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 115944 00:10:30.441 15:06:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:30.441 15:06:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:30.441 15:06:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:30.441 15:06:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:30.441 15:06:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:30.441 15:06:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:30.441 15:06:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:30.441 15:06:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:32.352 15:06:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:32.352 00:10:32.352 real 0m33.323s 
00:10:32.352 user 0m45.152s 00:10:32.352 sys 0m10.485s 00:10:32.352 15:06:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:32.352 15:06:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:32.352 ************************************ 00:10:32.352 END TEST nvmf_zcopy 00:10:32.352 ************************************ 00:10:32.614 15:06:24 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:32.614 15:06:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:32.614 15:06:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:32.614 15:06:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:32.614 ************************************ 00:10:32.614 START TEST nvmf_nmic 00:10:32.614 ************************************ 00:10:32.614 15:06:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:32.614 * Looking for test storage... 
00:10:32.614 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:32.614 15:06:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:32.614 15:06:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:10:32.614 15:06:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:32.614 15:06:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:32.614 15:06:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:32.614 15:06:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:32.614 15:06:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:32.614 15:06:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:32.614 15:06:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:32.614 15:06:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:32.614 15:06:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:32.614 15:06:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:32.614 15:06:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:32.614 15:06:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:32.614 15:06:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:32.614 15:06:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:32.614 
15:06:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:32.614 15:06:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:32.614 15:06:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:32.614 15:06:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:32.614 15:06:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:32.614 15:06:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:32.614 15:06:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:32.614 15:06:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:32.614 15:06:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:32.614 15:06:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:10:32.614 15:06:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:32.614 15:06:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:10:32.614 15:06:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:32.614 15:06:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:32.614 15:06:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:32.615 15:06:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:32.615 15:06:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:32.615 15:06:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:32.615 15:06:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:32.615 15:06:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:32.615 15:06:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:32.615 15:06:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:32.615 15:06:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:10:32.615 15:06:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:32.615 15:06:24 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:32.615 15:06:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:32.615 15:06:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:32.615 15:06:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:32.615 15:06:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:32.615 15:06:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:32.615 15:06:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:32.615 15:06:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:32.615 15:06:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:32.615 15:06:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:10:32.615 15:06:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:40.770 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:40.770 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:10:40.770 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:40.770 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:40.770 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:40.770 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:40.770 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:40.770 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- nvmf/common.sh@295 -- # net_devs=() 00:10:40.770 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:40.770 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:10:40.770 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:10:40.770 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:10:40.770 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:10:40.770 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:10:40.770 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:10:40.770 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:40.770 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:40.770 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:40.770 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:40.770 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:40.770 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:40.770 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:40.770 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:40.770 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:40.770 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:40.770 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:40.770 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:40.770 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:40.770 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:40.770 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:40.770 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:40.770 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:40.770 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:40.770 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:40.770 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:40.770 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:40.770 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:40.770 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:40.770 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:40.770 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:40.770 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:40.770 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:40.770 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:40.770 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:40.770 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:40.770 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:40.770 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:40.770 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:40.770 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:40.770 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:40.770 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:40.770 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:40.770 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:40.770 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:40.770 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:40.770 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:40.770 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:40.770 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:40.770 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:40.770 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:40.770 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:40.770 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in 
"${pci_devs[@]}" 00:10:40.770 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:40.770 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:40.770 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:40.770 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:40.770 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:40.770 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:40.771 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:40.771 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:40.771 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:40.771 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:40.771 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:10:40.771 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:40.771 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:40.771 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:40.771 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:40.771 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:40.771 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:40.771 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:40.771 15:06:31 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:40.771 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:40.771 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:40.771 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:40.771 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:40.771 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:40.771 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:40.771 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:40.771 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:40.771 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:40.771 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:40.771 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:40.771 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:40.771 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:40.771 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:40.771 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:40.771 PING 10.0.0.2 (10.0.0.2) 56(84) bytes 
of data. 00:10:40.771 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.644 ms 00:10:40.771 00:10:40.771 --- 10.0.0.2 ping statistics --- 00:10:40.771 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:40.771 rtt min/avg/max/mdev = 0.644/0.644/0.644/0.000 ms 00:10:40.771 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:40.771 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:40.771 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.390 ms 00:10:40.771 00:10:40.771 --- 10.0.0.1 ping statistics --- 00:10:40.771 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:40.771 rtt min/avg/max/mdev = 0.390/0.390/0.390/0.000 ms 00:10:40.771 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:40.771 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:10:40.771 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:40.771 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:40.771 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:40.771 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:40.771 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:40.771 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:40.771 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:40.771 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:10:40.771 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:40.771 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@724 -- # xtrace_disable 00:10:40.771 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:40.771 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=125338 00:10:40.771 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 125338 00:10:40.771 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 125338 ']' 00:10:40.771 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:40.771 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:40.771 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:40.771 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:40.771 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:40.771 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:40.771 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:40.771 [2024-07-25 15:06:31.988126] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:10:40.771 [2024-07-25 15:06:31.988190] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:40.771 EAL: No free 2048 kB hugepages reported on node 1 00:10:40.771 [2024-07-25 15:06:32.058993] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:40.771 [2024-07-25 15:06:32.134964] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:40.771 [2024-07-25 15:06:32.135003] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:40.771 [2024-07-25 15:06:32.135011] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:40.771 [2024-07-25 15:06:32.135017] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:40.771 [2024-07-25 15:06:32.135023] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:40.771 [2024-07-25 15:06:32.135162] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:40.771 [2024-07-25 15:06:32.135299] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:40.771 [2024-07-25 15:06:32.135358] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:40.771 [2024-07-25 15:06:32.135359] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:40.771 15:06:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:40.771 15:06:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:10:40.771 15:06:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:40.771 15:06:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:40.771 15:06:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:40.771 15:06:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:40.771 15:06:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:40.771 15:06:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.771 15:06:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:40.771 [2024-07-25 15:06:32.801117] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:40.771 15:06:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.771 15:06:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:40.771 15:06:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.771 15:06:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@10 -- # set +x 00:10:40.771 Malloc0 00:10:40.771 15:06:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.771 15:06:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:40.771 15:06:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.771 15:06:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:40.771 15:06:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.771 15:06:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:40.771 15:06:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.771 15:06:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:40.771 15:06:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.771 15:06:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:40.771 15:06:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.771 15:06:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:40.771 [2024-07-25 15:06:32.844396] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:40.771 15:06:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.771 15:06:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:10:40.771 test case1: single bdev can't be used in multiple subsystems 
00:10:40.771 15:06:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:10:40.771 15:06:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.772 15:06:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:40.772 15:06:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.772 15:06:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:40.772 15:06:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.772 15:06:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:40.772 15:06:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.772 15:06:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:10:40.772 15:06:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:10:40.772 15:06:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.772 15:06:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:40.772 [2024-07-25 15:06:32.868307] bdev.c:8111:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:10:40.772 [2024-07-25 15:06:32.868330] subsystem.c:2087:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:10:40.772 [2024-07-25 15:06:32.868338] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.772 request: 00:10:40.772 { 00:10:40.772 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:40.772 "namespace": { 00:10:40.772 
"bdev_name": "Malloc0", 00:10:40.772 "no_auto_visible": false 00:10:40.772 }, 00:10:40.772 "method": "nvmf_subsystem_add_ns", 00:10:40.772 "req_id": 1 00:10:40.772 } 00:10:40.772 Got JSON-RPC error response 00:10:40.772 response: 00:10:40.772 { 00:10:40.772 "code": -32602, 00:10:40.772 "message": "Invalid parameters" 00:10:40.772 } 00:10:40.772 15:06:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:10:40.772 15:06:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:10:40.772 15:06:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:10:40.772 15:06:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:10:40.772 Adding namespace failed - expected result. 00:10:40.772 15:06:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:10:40.772 test case2: host connect to nvmf target in multiple paths 00:10:40.772 15:06:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:10:40.772 15:06:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.772 15:06:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:40.772 [2024-07-25 15:06:32.880425] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:10:40.772 15:06:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.772 15:06:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:42.764 15:06:34 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:10:44.150 15:06:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:10:44.150 15:06:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:10:44.150 15:06:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:44.150 15:06:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:44.150 15:06:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:10:46.065 15:06:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:46.065 15:06:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:46.065 15:06:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:46.065 15:06:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:46.065 15:06:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:46.065 15:06:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:10:46.065 15:06:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:46.065 [global] 00:10:46.065 thread=1 00:10:46.065 invalidate=1 00:10:46.065 rw=write 00:10:46.065 time_based=1 00:10:46.065 runtime=1 00:10:46.065 ioengine=libaio 00:10:46.065 direct=1 00:10:46.065 bs=4096 00:10:46.065 iodepth=1 00:10:46.065 
norandommap=0 00:10:46.065 numjobs=1 00:10:46.065 00:10:46.065 verify_dump=1 00:10:46.065 verify_backlog=512 00:10:46.065 verify_state_save=0 00:10:46.065 do_verify=1 00:10:46.065 verify=crc32c-intel 00:10:46.065 [job0] 00:10:46.065 filename=/dev/nvme0n1 00:10:46.065 Could not set queue depth (nvme0n1) 00:10:46.326 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:46.326 fio-3.35 00:10:46.326 Starting 1 thread 00:10:47.713 00:10:47.713 job0: (groupid=0, jobs=1): err= 0: pid=126670: Thu Jul 25 15:06:39 2024 00:10:47.713 read: IOPS=12, BW=51.9KiB/s (53.1kB/s)(52.0KiB/1002msec) 00:10:47.713 slat (nsec): min=24885, max=25518, avg=25127.77, stdev=184.55 00:10:47.713 clat (usec): min=41610, max=42094, avg=41949.51, stdev=108.55 00:10:47.713 lat (usec): min=41635, max=42119, avg=41974.64, stdev=108.54 00:10:47.713 clat percentiles (usec): 00:10:47.713 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[42206], 00:10:47.713 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:10:47.713 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:10:47.713 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:47.713 | 99.99th=[42206] 00:10:47.713 write: IOPS=510, BW=2044KiB/s (2093kB/s)(2048KiB/1002msec); 0 zone resets 00:10:47.713 slat (usec): min=9, max=26434, avg=82.77, stdev=1166.89 00:10:47.713 clat (usec): min=566, max=957, avg=801.09, stdev=69.39 00:10:47.713 lat (usec): min=577, max=27306, avg=883.86, stdev=1172.28 00:10:47.713 clat percentiles (usec): 00:10:47.713 | 1.00th=[ 627], 5.00th=[ 660], 10.00th=[ 717], 20.00th=[ 750], 00:10:47.713 | 30.00th=[ 758], 40.00th=[ 783], 50.00th=[ 816], 60.00th=[ 832], 00:10:47.713 | 70.00th=[ 848], 80.00th=[ 857], 90.00th=[ 881], 95.00th=[ 906], 00:10:47.713 | 99.00th=[ 938], 99.50th=[ 947], 99.90th=[ 955], 99.95th=[ 955], 00:10:47.713 | 99.99th=[ 955] 00:10:47.713 bw ( KiB/s): min= 4096, max= 4096, 
per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:10:47.713 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:47.713 lat (usec) : 750=22.10%, 1000=75.43% 00:10:47.713 lat (msec) : 50=2.48% 00:10:47.713 cpu : usr=0.50%, sys=1.80%, ctx=528, majf=0, minf=1 00:10:47.713 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:47.713 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:47.713 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:47.713 issued rwts: total=13,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:47.713 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:47.713 00:10:47.713 Run status group 0 (all jobs): 00:10:47.713 READ: bw=51.9KiB/s (53.1kB/s), 51.9KiB/s-51.9KiB/s (53.1kB/s-53.1kB/s), io=52.0KiB (53.2kB), run=1002-1002msec 00:10:47.713 WRITE: bw=2044KiB/s (2093kB/s), 2044KiB/s-2044KiB/s (2093kB/s-2093kB/s), io=2048KiB (2097kB), run=1002-1002msec 00:10:47.713 00:10:47.713 Disk stats (read/write): 00:10:47.713 nvme0n1: ios=35/512, merge=0/0, ticks=1387/389, in_queue=1776, util=98.80% 00:10:47.713 15:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:47.713 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:47.713 15:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:47.714 15:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:10:47.714 15:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:47.714 15:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:47.714 15:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:47.714 15:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:47.714 15:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:10:47.714 15:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:10:47.714 15:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:10:47.714 15:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:47.714 15:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:10:47.714 15:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:47.714 15:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:10:47.714 15:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:47.714 15:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:47.714 rmmod nvme_tcp 00:10:47.714 rmmod nvme_fabrics 00:10:47.714 rmmod nvme_keyring 00:10:47.975 15:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:47.975 15:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:10:47.975 15:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:10:47.975 15:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 125338 ']' 00:10:47.975 15:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 125338 00:10:47.975 15:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 125338 ']' 00:10:47.975 15:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 125338 00:10:47.975 15:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:10:47.975 15:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # 
'[' Linux = Linux ']' 00:10:47.975 15:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 125338 00:10:47.975 15:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:47.975 15:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:47.975 15:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 125338' 00:10:47.975 killing process with pid 125338 00:10:47.975 15:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 125338 00:10:47.975 15:06:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 125338 00:10:47.975 15:06:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:47.975 15:06:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:47.975 15:06:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:47.975 15:06:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:47.975 15:06:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:47.975 15:06:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:47.975 15:06:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:47.975 15:06:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:50.521 15:06:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:50.521 00:10:50.522 real 0m17.608s 00:10:50.522 user 0m48.951s 00:10:50.522 sys 0m6.145s 00:10:50.522 15:06:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:10:50.522 15:06:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:50.522 ************************************ 00:10:50.522 END TEST nvmf_nmic 00:10:50.522 ************************************ 00:10:50.522 15:06:42 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:50.522 15:06:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:50.522 15:06:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:50.522 15:06:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:50.522 ************************************ 00:10:50.522 START TEST nvmf_fio_target 00:10:50.522 ************************************ 00:10:50.522 15:06:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:50.522 * Looking for test storage... 
00:10:50.522 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:50.522 15:06:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:50.522 15:06:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:10:50.522 15:06:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:50.522 15:06:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:50.522 15:06:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:50.522 15:06:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:50.522 15:06:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:50.522 15:06:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:50.522 15:06:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:50.522 15:06:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:50.522 15:06:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:50.522 15:06:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:50.522 15:06:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:50.522 15:06:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:50.522 15:06:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:50.522 15:06:42 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:50.522 15:06:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:50.522 15:06:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:50.522 15:06:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:50.522 15:06:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:50.522 15:06:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:50.522 15:06:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:50.522 15:06:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:50.522 15:06:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:50.522 15:06:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:50.522 15:06:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:10:50.522 15:06:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:50.522 15:06:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:10:50.522 15:06:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:50.522 15:06:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:50.522 15:06:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:50.522 15:06:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:50.522 15:06:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:50.522 15:06:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:50.522 15:06:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:50.522 15:06:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:50.522 15:06:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:50.522 15:06:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:50.522 15:06:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:50.522 15:06:42 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:10:50.522 15:06:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:50.522 15:06:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:50.522 15:06:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:50.522 15:06:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:50.522 15:06:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:50.522 15:06:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:50.522 15:06:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:50.522 15:06:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:50.522 15:06:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:50.522 15:06:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:50.522 15:06:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:10:50.522 15:06:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:57.115 15:06:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:57.115 15:06:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:10:57.115 15:06:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:57.115 15:06:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:57.115 15:06:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:57.115 15:06:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:57.115 15:06:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:57.115 15:06:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:10:57.115 15:06:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:57.115 15:06:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:10:57.115 15:06:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:10:57.115 15:06:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:10:57.115 15:06:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:10:57.115 15:06:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:10:57.115 15:06:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:10:57.115 15:06:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:57.115 15:06:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:57.115 15:06:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:57.115 15:06:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:57.115 15:06:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:57.115 15:06:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:57.115 15:06:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@312 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:57.115 15:06:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:57.115 15:06:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:57.115 15:06:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:57.115 15:06:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:57.115 15:06:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:57.115 15:06:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:57.115 15:06:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:57.115 15:06:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:57.115 15:06:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:57.115 15:06:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:57.115 15:06:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:57.115 15:06:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:57.115 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:57.115 15:06:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:57.115 15:06:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:57.115 15:06:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:57.115 15:06:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == 
\0\x\1\0\1\9 ]] 00:10:57.115 15:06:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:57.115 15:06:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:57.115 15:06:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:57.115 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:57.115 15:06:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:57.115 15:06:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:57.115 15:06:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:57.115 15:06:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:57.115 15:06:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:57.115 15:06:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:57.115 15:06:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:57.115 15:06:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:57.115 15:06:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:57.115 15:06:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:57.115 15:06:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:57.115 15:06:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:57.115 15:06:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:57.115 15:06:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:57.115 15:06:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:57.115 15:06:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:57.115 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:57.115 15:06:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:57.115 15:06:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:57.115 15:06:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:57.115 15:06:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:57.115 15:06:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:57.115 15:06:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:57.115 15:06:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:57.115 15:06:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:57.115 15:06:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:57.115 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:57.115 15:06:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:57.115 15:06:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:57.115 15:06:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:10:57.115 15:06:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:57.115 15:06:48 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:57.115 15:06:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:57.115 15:06:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:57.115 15:06:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:57.115 15:06:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:57.115 15:06:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:57.115 15:06:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:57.115 15:06:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:57.115 15:06:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:57.115 15:06:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:57.115 15:06:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:57.115 15:06:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:57.115 15:06:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:57.115 15:06:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:57.115 15:06:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:57.115 15:06:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:57.115 15:06:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@255 
-- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:57.115 15:06:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:57.115 15:06:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:57.115 15:06:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:57.116 15:06:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:57.116 15:06:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:57.116 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:57.116 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.691 ms 00:10:57.116 00:10:57.116 --- 10.0.0.2 ping statistics --- 00:10:57.116 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:57.116 rtt min/avg/max/mdev = 0.691/0.691/0.691/0.000 ms 00:10:57.116 15:06:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:57.116 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:57.116 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.451 ms 00:10:57.116 00:10:57.116 --- 10.0.0.1 ping statistics --- 00:10:57.116 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:57.116 rtt min/avg/max/mdev = 0.451/0.451/0.451/0.000 ms 00:10:57.116 15:06:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:57.116 15:06:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:10:57.116 15:06:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:57.116 15:06:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:57.116 15:06:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:57.116 15:06:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:57.116 15:06:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:57.116 15:06:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:57.116 15:06:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:57.116 15:06:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:10:57.116 15:06:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:57.116 15:06:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:57.116 15:06:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:57.116 15:06:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=131154 00:10:57.116 15:06:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 131154 00:10:57.116 15:06:49 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:57.116 15:06:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 131154 ']' 00:10:57.116 15:06:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:57.116 15:06:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:57.116 15:06:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:57.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:57.116 15:06:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:57.116 15:06:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:57.378 [2024-07-25 15:06:49.329134] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:10:57.378 [2024-07-25 15:06:49.329198] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:57.378 EAL: No free 2048 kB hugepages reported on node 1 00:10:57.378 [2024-07-25 15:06:49.405081] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:57.378 [2024-07-25 15:06:49.480357] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:57.378 [2024-07-25 15:06:49.480401] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:57.378 [2024-07-25 15:06:49.480409] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:57.378 [2024-07-25 15:06:49.480416] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:57.378 [2024-07-25 15:06:49.480422] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:57.378 [2024-07-25 15:06:49.480567] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:57.378 [2024-07-25 15:06:49.480682] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:57.378 [2024-07-25 15:06:49.480836] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:57.378 [2024-07-25 15:06:49.480837] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:57.950 15:06:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:57.950 15:06:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:10:57.950 15:06:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:57.951 15:06:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:57.951 15:06:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:58.212 15:06:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:58.212 15:06:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:58.212 [2024-07-25 15:06:50.300716] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:58.212 15:06:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:58.473 15:06:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:10:58.473 15:06:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:58.735 15:06:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:10:58.735 15:06:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:58.735 15:06:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:10:58.735 15:06:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:58.995 15:06:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:10:58.995 15:06:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:10:59.257 15:06:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:59.257 15:06:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:10:59.257 15:06:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:59.518 15:06:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:10:59.518 15:06:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:59.779 15:06:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:10:59.779 15:06:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:10:59.779 15:06:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:00.040 15:06:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:00.040 15:06:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:00.301 15:06:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:00.301 15:06:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:00.302 15:06:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:00.563 [2024-07-25 15:06:52.566209] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:00.563 15:06:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:11:00.824 15:06:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:11:00.825 15:06:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:02.737 15:06:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:11:02.737 15:06:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:11:02.737 15:06:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:02.737 15:06:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:11:02.737 15:06:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:11:02.737 15:06:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:11:04.724 15:06:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:04.724 15:06:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:04.724 15:06:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:04.724 15:06:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:11:04.724 15:06:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:04.724 15:06:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:11:04.724 15:06:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:04.724 [global] 00:11:04.724 thread=1 00:11:04.724 invalidate=1 00:11:04.724 rw=write 00:11:04.724 time_based=1 00:11:04.724 runtime=1 00:11:04.724 ioengine=libaio 00:11:04.724 direct=1 00:11:04.724 bs=4096 00:11:04.724 iodepth=1 00:11:04.724 norandommap=0 00:11:04.724 numjobs=1 00:11:04.724 00:11:04.724 verify_dump=1 00:11:04.724 verify_backlog=512 00:11:04.724 verify_state_save=0 00:11:04.724 do_verify=1 00:11:04.724 verify=crc32c-intel 00:11:04.724 [job0] 00:11:04.724 filename=/dev/nvme0n1 00:11:04.724 [job1] 00:11:04.724 filename=/dev/nvme0n2 00:11:04.724 [job2] 00:11:04.724 filename=/dev/nvme0n3 00:11:04.724 [job3] 00:11:04.724 filename=/dev/nvme0n4 00:11:04.724 Could not set queue depth (nvme0n1) 00:11:04.724 Could not set queue depth (nvme0n2) 00:11:04.724 Could not set queue depth (nvme0n3) 00:11:04.724 Could not set queue depth (nvme0n4) 00:11:04.984 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:04.984 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:04.984 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:04.984 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:04.984 fio-3.35 00:11:04.984 Starting 4 threads 00:11:06.400 00:11:06.400 job0: (groupid=0, jobs=1): err= 0: pid=132824: Thu Jul 25 15:06:58 2024 00:11:06.400 read: IOPS=534, BW=2138KiB/s (2189kB/s)(2140KiB/1001msec) 00:11:06.401 slat (nsec): min=6569, max=62230, avg=25007.08, stdev=7295.23 00:11:06.401 clat (usec): min=510, max=1032, avg=780.81, stdev=92.37 00:11:06.401 lat (usec): min=519, max=1058, avg=805.82, stdev=94.74 00:11:06.401 clat percentiles (usec): 00:11:06.401 | 1.00th=[ 545], 5.00th=[ 627], 10.00th=[ 652], 20.00th=[ 701], 
00:11:06.401 | 30.00th=[ 734], 40.00th=[ 758], 50.00th=[ 783], 60.00th=[ 816], 00:11:06.401 | 70.00th=[ 840], 80.00th=[ 857], 90.00th=[ 898], 95.00th=[ 922], 00:11:06.401 | 99.00th=[ 996], 99.50th=[ 1004], 99.90th=[ 1037], 99.95th=[ 1037], 00:11:06.401 | 99.99th=[ 1037] 00:11:06.401 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:11:06.401 slat (nsec): min=9081, max=71982, avg=30930.43, stdev=10173.08 00:11:06.401 clat (usec): min=195, max=769, avg=513.77, stdev=109.75 00:11:06.401 lat (usec): min=207, max=804, avg=544.70, stdev=113.53 00:11:06.401 clat percentiles (usec): 00:11:06.401 | 1.00th=[ 269], 5.00th=[ 326], 10.00th=[ 359], 20.00th=[ 416], 00:11:06.401 | 30.00th=[ 449], 40.00th=[ 490], 50.00th=[ 523], 60.00th=[ 545], 00:11:06.401 | 70.00th=[ 578], 80.00th=[ 611], 90.00th=[ 652], 95.00th=[ 685], 00:11:06.401 | 99.00th=[ 734], 99.50th=[ 742], 99.90th=[ 758], 99.95th=[ 766], 00:11:06.401 | 99.99th=[ 766] 00:11:06.401 bw ( KiB/s): min= 4096, max= 4096, per=41.20%, avg=4096.00, stdev= 0.00, samples=1 00:11:06.401 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:06.401 lat (usec) : 250=0.38%, 500=27.77%, 750=50.10%, 1000=21.49% 00:11:06.401 lat (msec) : 2=0.26% 00:11:06.401 cpu : usr=2.60%, sys=6.40%, ctx=1563, majf=0, minf=1 00:11:06.401 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:06.401 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:06.401 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:06.401 issued rwts: total=535,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:06.401 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:06.401 job1: (groupid=0, jobs=1): err= 0: pid=132825: Thu Jul 25 15:06:58 2024 00:11:06.401 read: IOPS=14, BW=58.7KiB/s (60.1kB/s)(60.0KiB/1022msec) 00:11:06.401 slat (nsec): min=25190, max=43068, avg=26702.73, stdev=4530.05 00:11:06.401 clat (usec): min=1357, max=42946, avg=34100.74, 
stdev=16911.04 00:11:06.401 lat (usec): min=1383, max=42972, avg=34127.45, stdev=16911.79 00:11:06.401 clat percentiles (usec): 00:11:06.401 | 1.00th=[ 1352], 5.00th=[ 1352], 10.00th=[ 1467], 20.00th=[ 1483], 00:11:06.401 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:11:06.401 | 70.00th=[42206], 80.00th=[42730], 90.00th=[42730], 95.00th=[42730], 00:11:06.401 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:11:06.401 | 99.99th=[42730] 00:11:06.401 write: IOPS=500, BW=2004KiB/s (2052kB/s)(2048KiB/1022msec); 0 zone resets 00:11:06.401 slat (nsec): min=9918, max=52839, avg=33883.52, stdev=3651.31 00:11:06.401 clat (usec): min=536, max=1470, avg=953.57, stdev=116.09 00:11:06.401 lat (usec): min=569, max=1503, avg=987.45, stdev=116.12 00:11:06.401 clat percentiles (usec): 00:11:06.401 | 1.00th=[ 660], 5.00th=[ 783], 10.00th=[ 824], 20.00th=[ 857], 00:11:06.401 | 30.00th=[ 898], 40.00th=[ 930], 50.00th=[ 955], 60.00th=[ 979], 00:11:06.401 | 70.00th=[ 1004], 80.00th=[ 1037], 90.00th=[ 1106], 95.00th=[ 1139], 00:11:06.401 | 99.00th=[ 1270], 99.50th=[ 1287], 99.90th=[ 1467], 99.95th=[ 1467], 00:11:06.401 | 99.99th=[ 1467] 00:11:06.401 bw ( KiB/s): min= 80, max= 4016, per=20.60%, avg=2048.00, stdev=2783.17, samples=2 00:11:06.401 iops : min= 20, max= 1004, avg=512.00, stdev=695.79, samples=2 00:11:06.401 lat (usec) : 750=2.66%, 1000=64.33% 00:11:06.401 lat (msec) : 2=30.74%, 50=2.28% 00:11:06.401 cpu : usr=0.69%, sys=1.86%, ctx=528, majf=0, minf=1 00:11:06.401 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:06.401 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:06.401 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:06.401 issued rwts: total=15,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:06.401 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:06.401 job2: (groupid=0, jobs=1): err= 0: pid=132826: Thu Jul 25 15:06:58 2024 
00:11:06.401 read: IOPS=11, BW=46.6KiB/s (47.7kB/s)(48.0KiB/1030msec) 00:11:06.401 slat (nsec): min=25644, max=31318, avg=26379.83, stdev=1570.66 00:11:06.401 clat (usec): min=41936, max=42988, avg=42313.94, stdev=444.21 00:11:06.401 lat (usec): min=41962, max=43014, avg=42340.32, stdev=444.01 00:11:06.401 clat percentiles (usec): 00:11:06.401 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:11:06.401 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:11:06.401 | 70.00th=[42730], 80.00th=[42730], 90.00th=[42730], 95.00th=[42730], 00:11:06.401 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:11:06.401 | 99.99th=[42730] 00:11:06.401 write: IOPS=497, BW=1988KiB/s (2036kB/s)(2048KiB/1030msec); 0 zone resets 00:11:06.401 slat (usec): min=12, max=2993, avg=41.85, stdev=130.75 00:11:06.401 clat (usec): min=612, max=1404, avg=969.13, stdev=118.35 00:11:06.401 lat (usec): min=646, max=3962, avg=1010.98, stdev=176.40 00:11:06.401 clat percentiles (usec): 00:11:06.401 | 1.00th=[ 717], 5.00th=[ 791], 10.00th=[ 824], 20.00th=[ 873], 00:11:06.401 | 30.00th=[ 922], 40.00th=[ 938], 50.00th=[ 955], 60.00th=[ 988], 00:11:06.401 | 70.00th=[ 1012], 80.00th=[ 1045], 90.00th=[ 1106], 95.00th=[ 1188], 00:11:06.401 | 99.00th=[ 1336], 99.50th=[ 1385], 99.90th=[ 1401], 99.95th=[ 1401], 00:11:06.401 | 99.99th=[ 1401] 00:11:06.401 bw ( KiB/s): min= 160, max= 3936, per=20.60%, avg=2048.00, stdev=2670.04, samples=2 00:11:06.401 iops : min= 40, max= 984, avg=512.00, stdev=667.51, samples=2 00:11:06.401 lat (usec) : 750=2.29%, 1000=61.45% 00:11:06.401 lat (msec) : 2=33.97%, 50=2.29% 00:11:06.401 cpu : usr=0.78%, sys=1.75%, ctx=526, majf=0, minf=1 00:11:06.401 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:06.401 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:06.401 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:06.401 issued rwts: 
total=12,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:06.401 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:06.401 job3: (groupid=0, jobs=1): err= 0: pid=132827: Thu Jul 25 15:06:58 2024 00:11:06.401 read: IOPS=321, BW=1287KiB/s (1318kB/s)(1288KiB/1001msec) 00:11:06.401 slat (nsec): min=24509, max=42867, avg=25237.48, stdev=2097.29 00:11:06.401 clat (usec): min=1029, max=2070, avg=1299.71, stdev=79.43 00:11:06.401 lat (usec): min=1054, max=2095, avg=1324.95, stdev=79.35 00:11:06.401 clat percentiles (usec): 00:11:06.401 | 1.00th=[ 1074], 5.00th=[ 1172], 10.00th=[ 1205], 20.00th=[ 1254], 00:11:06.401 | 30.00th=[ 1287], 40.00th=[ 1287], 50.00th=[ 1303], 60.00th=[ 1319], 00:11:06.401 | 70.00th=[ 1336], 80.00th=[ 1352], 90.00th=[ 1369], 95.00th=[ 1385], 00:11:06.401 | 99.00th=[ 1418], 99.50th=[ 1450], 99.90th=[ 2073], 99.95th=[ 2073], 00:11:06.401 | 99.99th=[ 2073] 00:11:06.401 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:11:06.401 slat (usec): min=10, max=42031, avg=148.04, stdev=1999.63 00:11:06.401 clat (usec): min=697, max=1159, avg=958.54, stdev=71.55 00:11:06.401 lat (usec): min=730, max=43014, avg=1106.59, stdev=2003.94 00:11:06.401 clat percentiles (usec): 00:11:06.401 | 1.00th=[ 750], 5.00th=[ 840], 10.00th=[ 865], 20.00th=[ 898], 00:11:06.401 | 30.00th=[ 922], 40.00th=[ 955], 50.00th=[ 971], 60.00th=[ 979], 00:11:06.401 | 70.00th=[ 996], 80.00th=[ 1020], 90.00th=[ 1045], 95.00th=[ 1057], 00:11:06.401 | 99.00th=[ 1090], 99.50th=[ 1106], 99.90th=[ 1156], 99.95th=[ 1156], 00:11:06.401 | 99.99th=[ 1156] 00:11:06.401 bw ( KiB/s): min= 3528, max= 3528, per=35.49%, avg=3528.00, stdev= 0.00, samples=1 00:11:06.401 iops : min= 882, max= 882, avg=882.00, stdev= 0.00, samples=1 00:11:06.401 lat (usec) : 750=0.72%, 1000=42.81% 00:11:06.401 lat (msec) : 2=56.35%, 4=0.12% 00:11:06.401 cpu : usr=1.50%, sys=2.30%, ctx=837, majf=0, minf=1 00:11:06.401 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 
00:11:06.401 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:06.401 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:06.401 issued rwts: total=322,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:06.401 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:06.401 00:11:06.401 Run status group 0 (all jobs): 00:11:06.401 READ: bw=3433KiB/s (3515kB/s), 46.6KiB/s-2138KiB/s (47.7kB/s-2189kB/s), io=3536KiB (3621kB), run=1001-1030msec 00:11:06.401 WRITE: bw=9942KiB/s (10.2MB/s), 1988KiB/s-4092KiB/s (2036kB/s-4190kB/s), io=10.0MiB (10.5MB), run=1001-1030msec 00:11:06.401 00:11:06.401 Disk stats (read/write): 00:11:06.402 nvme0n1: ios=534/767, merge=0/0, ticks=1200/312, in_queue=1512, util=84.27% 00:11:06.402 nvme0n2: ios=35/512, merge=0/0, ticks=1198/497, in_queue=1695, util=88.16% 00:11:06.402 nvme0n3: ios=58/512, merge=0/0, ticks=422/478, in_queue=900, util=95.03% 00:11:06.402 nvme0n4: ios=221/512, merge=0/0, ticks=1155/498, in_queue=1653, util=97.22% 00:11:06.402 15:06:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:11:06.402 [global] 00:11:06.402 thread=1 00:11:06.402 invalidate=1 00:11:06.402 rw=randwrite 00:11:06.402 time_based=1 00:11:06.402 runtime=1 00:11:06.402 ioengine=libaio 00:11:06.402 direct=1 00:11:06.402 bs=4096 00:11:06.402 iodepth=1 00:11:06.402 norandommap=0 00:11:06.402 numjobs=1 00:11:06.402 00:11:06.402 verify_dump=1 00:11:06.402 verify_backlog=512 00:11:06.402 verify_state_save=0 00:11:06.402 do_verify=1 00:11:06.402 verify=crc32c-intel 00:11:06.402 [job0] 00:11:06.402 filename=/dev/nvme0n1 00:11:06.402 [job1] 00:11:06.402 filename=/dev/nvme0n2 00:11:06.402 [job2] 00:11:06.402 filename=/dev/nvme0n3 00:11:06.402 [job3] 00:11:06.402 filename=/dev/nvme0n4 00:11:06.402 Could not set queue depth (nvme0n1) 00:11:06.402 Could not set queue depth (nvme0n2) 
00:11:06.402 Could not set queue depth (nvme0n3) 00:11:06.402 Could not set queue depth (nvme0n4) 00:11:06.667 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:06.667 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:06.667 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:06.667 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:06.667 fio-3.35 00:11:06.667 Starting 4 threads 00:11:08.075 00:11:08.075 job0: (groupid=0, jobs=1): err= 0: pid=133354: Thu Jul 25 15:06:59 2024 00:11:08.075 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:11:08.075 slat (nsec): min=8119, max=60069, avg=26372.20, stdev=2981.74 00:11:08.075 clat (usec): min=845, max=1332, avg=1105.79, stdev=51.83 00:11:08.075 lat (usec): min=871, max=1358, avg=1132.16, stdev=51.89 00:11:08.075 clat percentiles (usec): 00:11:08.075 | 1.00th=[ 955], 5.00th=[ 1029], 10.00th=[ 1045], 20.00th=[ 1074], 00:11:08.075 | 30.00th=[ 1090], 40.00th=[ 1090], 50.00th=[ 1106], 60.00th=[ 1123], 00:11:08.075 | 70.00th=[ 1139], 80.00th=[ 1139], 90.00th=[ 1172], 95.00th=[ 1188], 00:11:08.075 | 99.00th=[ 1221], 99.50th=[ 1221], 99.90th=[ 1336], 99.95th=[ 1336], 00:11:08.075 | 99.99th=[ 1336] 00:11:08.075 write: IOPS=534, BW=2138KiB/s (2189kB/s)(2140KiB/1001msec); 0 zone resets 00:11:08.075 slat (nsec): min=4711, max=52583, avg=28837.34, stdev=9194.77 00:11:08.075 clat (usec): min=479, max=3666, avg=740.76, stdev=149.84 00:11:08.075 lat (usec): min=490, max=3698, avg=769.59, stdev=150.70 00:11:08.075 clat percentiles (usec): 00:11:08.075 | 1.00th=[ 578], 5.00th=[ 627], 10.00th=[ 635], 20.00th=[ 693], 00:11:08.075 | 30.00th=[ 717], 40.00th=[ 734], 50.00th=[ 742], 60.00th=[ 750], 00:11:08.075 | 70.00th=[ 766], 80.00th=[ 775], 90.00th=[ 791], 95.00th=[ 807], 
00:11:08.075 | 99.00th=[ 1057], 99.50th=[ 1483], 99.90th=[ 3654], 99.95th=[ 3654], 00:11:08.075 | 99.99th=[ 3654] 00:11:08.075 bw ( KiB/s): min= 4096, max= 4096, per=51.13%, avg=4096.00, stdev= 0.00, samples=1 00:11:08.075 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:08.075 lat (usec) : 500=0.29%, 750=29.51%, 1000=22.06% 00:11:08.075 lat (msec) : 2=48.04%, 4=0.10% 00:11:08.075 cpu : usr=1.00%, sys=3.60%, ctx=1051, majf=0, minf=1 00:11:08.075 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:08.075 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:08.075 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:08.075 issued rwts: total=512,535,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:08.075 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:08.075 job1: (groupid=0, jobs=1): err= 0: pid=133355: Thu Jul 25 15:06:59 2024 00:11:08.075 read: IOPS=468, BW=1874KiB/s (1919kB/s)(1876KiB/1001msec) 00:11:08.075 slat (nsec): min=10304, max=44941, avg=26597.47, stdev=2919.59 00:11:08.075 clat (usec): min=1002, max=1383, avg=1223.27, stdev=58.46 00:11:08.075 lat (usec): min=1043, max=1409, avg=1249.87, stdev=58.18 00:11:08.075 clat percentiles (usec): 00:11:08.075 | 1.00th=[ 1057], 5.00th=[ 1123], 10.00th=[ 1156], 20.00th=[ 1172], 00:11:08.075 | 30.00th=[ 1188], 40.00th=[ 1205], 50.00th=[ 1221], 60.00th=[ 1237], 00:11:08.075 | 70.00th=[ 1254], 80.00th=[ 1270], 90.00th=[ 1287], 95.00th=[ 1319], 00:11:08.075 | 99.00th=[ 1336], 99.50th=[ 1369], 99.90th=[ 1385], 99.95th=[ 1385], 00:11:08.075 | 99.99th=[ 1385] 00:11:08.075 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:11:08.075 slat (usec): min=9, max=36826, avg=100.74, stdev=1626.28 00:11:08.075 clat (usec): min=412, max=899, avg=692.21, stdev=77.64 00:11:08.075 lat (usec): min=446, max=37476, avg=792.95, stdev=1626.29 00:11:08.075 clat percentiles (usec): 00:11:08.075 | 1.00th=[ 523], 
5.00th=[ 562], 10.00th=[ 603], 20.00th=[ 635], 00:11:08.075 | 30.00th=[ 652], 40.00th=[ 660], 50.00th=[ 685], 60.00th=[ 709], 00:11:08.075 | 70.00th=[ 742], 80.00th=[ 766], 90.00th=[ 799], 95.00th=[ 824], 00:11:08.075 | 99.00th=[ 857], 99.50th=[ 889], 99.90th=[ 898], 99.95th=[ 898], 00:11:08.075 | 99.99th=[ 898] 00:11:08.076 bw ( KiB/s): min= 4096, max= 4096, per=51.13%, avg=4096.00, stdev= 0.00, samples=1 00:11:08.076 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:08.076 lat (usec) : 500=0.20%, 750=39.14%, 1000=12.84% 00:11:08.076 lat (msec) : 2=47.81% 00:11:08.076 cpu : usr=1.10%, sys=3.20%, ctx=986, majf=0, minf=1 00:11:08.076 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:08.076 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:08.076 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:08.076 issued rwts: total=469,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:08.076 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:08.076 job2: (groupid=0, jobs=1): err= 0: pid=133356: Thu Jul 25 15:06:59 2024 00:11:08.076 read: IOPS=12, BW=50.3KiB/s (51.5kB/s)(52.0KiB/1034msec) 00:11:08.076 slat (nsec): min=25304, max=26081, avg=25644.46, stdev=224.09 00:11:08.076 clat (usec): min=41879, max=43140, avg=42282.27, stdev=508.18 00:11:08.076 lat (usec): min=41904, max=43166, avg=42307.92, stdev=508.28 00:11:08.076 clat percentiles (usec): 00:11:08.076 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:11:08.076 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:11:08.076 | 70.00th=[42730], 80.00th=[42730], 90.00th=[43254], 95.00th=[43254], 00:11:08.076 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:11:08.076 | 99.99th=[43254] 00:11:08.076 write: IOPS=495, BW=1981KiB/s (2028kB/s)(2048KiB/1034msec); 0 zone resets 00:11:08.076 slat (nsec): min=10334, max=81404, avg=33021.13, stdev=5692.15 
00:11:08.076 clat (usec): min=660, max=1175, avg=901.84, stdev=90.83 00:11:08.076 lat (usec): min=678, max=1209, avg=934.86, stdev=90.90 00:11:08.076 clat percentiles (usec): 00:11:08.076 | 1.00th=[ 717], 5.00th=[ 758], 10.00th=[ 791], 20.00th=[ 824], 00:11:08.076 | 30.00th=[ 848], 40.00th=[ 873], 50.00th=[ 906], 60.00th=[ 930], 00:11:08.076 | 70.00th=[ 947], 80.00th=[ 971], 90.00th=[ 1012], 95.00th=[ 1057], 00:11:08.076 | 99.00th=[ 1123], 99.50th=[ 1156], 99.90th=[ 1172], 99.95th=[ 1172], 00:11:08.076 | 99.99th=[ 1172] 00:11:08.076 bw ( KiB/s): min= 4096, max= 4096, per=51.13%, avg=4096.00, stdev= 0.00, samples=1 00:11:08.076 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:08.076 lat (usec) : 750=3.81%, 1000=80.57% 00:11:08.076 lat (msec) : 2=13.14%, 50=2.48% 00:11:08.076 cpu : usr=0.77%, sys=1.65%, ctx=526, majf=0, minf=1 00:11:08.076 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:08.076 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:08.076 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:08.076 issued rwts: total=13,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:08.076 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:08.076 job3: (groupid=0, jobs=1): err= 0: pid=133357: Thu Jul 25 15:06:59 2024 00:11:08.076 read: IOPS=264, BW=1058KiB/s (1083kB/s)(1084KiB/1025msec) 00:11:08.076 slat (nsec): min=6717, max=30877, avg=12156.70, stdev=7230.42 00:11:08.076 clat (usec): min=744, max=42967, avg=1885.22, stdev=5629.29 00:11:08.076 lat (usec): min=760, max=42992, avg=1897.38, stdev=5631.18 00:11:08.076 clat percentiles (usec): 00:11:08.076 | 1.00th=[ 873], 5.00th=[ 971], 10.00th=[ 996], 20.00th=[ 1012], 00:11:08.076 | 30.00th=[ 1057], 40.00th=[ 1074], 50.00th=[ 1106], 60.00th=[ 1123], 00:11:08.076 | 70.00th=[ 1156], 80.00th=[ 1205], 90.00th=[ 1287], 95.00th=[ 1352], 00:11:08.076 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 
99.95th=[42730], 00:11:08.076 | 99.99th=[42730] 00:11:08.076 write: IOPS=499, BW=1998KiB/s (2046kB/s)(2048KiB/1025msec); 0 zone resets 00:11:08.076 slat (nsec): min=11397, max=79915, avg=32398.71, stdev=3365.22 00:11:08.076 clat (usec): min=631, max=1505, avg=952.68, stdev=83.45 00:11:08.076 lat (usec): min=663, max=1538, avg=985.08, stdev=83.57 00:11:08.076 clat percentiles (usec): 00:11:08.076 | 1.00th=[ 742], 5.00th=[ 824], 10.00th=[ 848], 20.00th=[ 889], 00:11:08.076 | 30.00th=[ 922], 40.00th=[ 947], 50.00th=[ 963], 60.00th=[ 979], 00:11:08.076 | 70.00th=[ 996], 80.00th=[ 1012], 90.00th=[ 1037], 95.00th=[ 1074], 00:11:08.076 | 99.00th=[ 1172], 99.50th=[ 1188], 99.90th=[ 1500], 99.95th=[ 1500], 00:11:08.076 | 99.99th=[ 1500] 00:11:08.076 bw ( KiB/s): min= 72, max= 4024, per=25.56%, avg=2048.00, stdev=2794.49, samples=2 00:11:08.076 iops : min= 18, max= 1006, avg=512.00, stdev=698.62, samples=2 00:11:08.076 lat (usec) : 750=1.28%, 1000=51.09% 00:11:08.076 lat (msec) : 2=47.00%, 50=0.64% 00:11:08.076 cpu : usr=1.66%, sys=1.27%, ctx=785, majf=0, minf=1 00:11:08.076 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:08.076 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:08.076 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:08.076 issued rwts: total=271,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:08.076 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:08.076 00:11:08.076 Run status group 0 (all jobs): 00:11:08.076 READ: bw=4894KiB/s (5011kB/s), 50.3KiB/s-2046KiB/s (51.5kB/s-2095kB/s), io=5060KiB (5181kB), run=1001-1034msec 00:11:08.076 WRITE: bw=8012KiB/s (8204kB/s), 1981KiB/s-2138KiB/s (2028kB/s-2189kB/s), io=8284KiB (8483kB), run=1001-1034msec 00:11:08.076 00:11:08.076 Disk stats (read/write): 00:11:08.076 nvme0n1: ios=427/512, merge=0/0, ticks=493/370, in_queue=863, util=87.27% 00:11:08.076 nvme0n2: ios=382/512, merge=0/0, ticks=597/341, in_queue=938, 
util=91.34% 00:11:08.076 nvme0n3: ios=58/512, merge=0/0, ticks=468/444, in_queue=912, util=95.36% 00:11:08.076 nvme0n4: ios=231/512, merge=0/0, ticks=470/494, in_queue=964, util=97.23% 00:11:08.076 15:06:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:11:08.076 [global] 00:11:08.076 thread=1 00:11:08.076 invalidate=1 00:11:08.076 rw=write 00:11:08.076 time_based=1 00:11:08.076 runtime=1 00:11:08.076 ioengine=libaio 00:11:08.076 direct=1 00:11:08.076 bs=4096 00:11:08.076 iodepth=128 00:11:08.076 norandommap=0 00:11:08.076 numjobs=1 00:11:08.076 00:11:08.076 verify_dump=1 00:11:08.076 verify_backlog=512 00:11:08.076 verify_state_save=0 00:11:08.076 do_verify=1 00:11:08.076 verify=crc32c-intel 00:11:08.076 [job0] 00:11:08.076 filename=/dev/nvme0n1 00:11:08.076 [job1] 00:11:08.076 filename=/dev/nvme0n2 00:11:08.076 [job2] 00:11:08.076 filename=/dev/nvme0n3 00:11:08.076 [job3] 00:11:08.076 filename=/dev/nvme0n4 00:11:08.076 Could not set queue depth (nvme0n1) 00:11:08.076 Could not set queue depth (nvme0n2) 00:11:08.076 Could not set queue depth (nvme0n3) 00:11:08.076 Could not set queue depth (nvme0n4) 00:11:08.336 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:08.336 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:08.336 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:08.336 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:08.336 fio-3.35 00:11:08.336 Starting 4 threads 00:11:09.756 00:11:09.756 job0: (groupid=0, jobs=1): err= 0: pid=133879: Thu Jul 25 15:07:01 2024 00:11:09.756 read: IOPS=6609, BW=25.8MiB/s (27.1MB/s)(26.0MiB/1007msec) 00:11:09.756 slat (nsec): min=918, max=10784k, 
avg=65672.67, stdev=465090.39 00:11:09.756 clat (usec): min=3460, max=28761, avg=8682.59, stdev=3831.72 00:11:09.756 lat (usec): min=3480, max=28769, avg=8748.26, stdev=3851.98 00:11:09.756 clat percentiles (usec): 00:11:09.756 | 1.00th=[ 4686], 5.00th=[ 5211], 10.00th=[ 5604], 20.00th=[ 6128], 00:11:09.756 | 30.00th=[ 6718], 40.00th=[ 7308], 50.00th=[ 7504], 60.00th=[ 7963], 00:11:09.756 | 70.00th=[ 8979], 80.00th=[ 9896], 90.00th=[13304], 95.00th=[15926], 00:11:09.756 | 99.00th=[28443], 99.50th=[28705], 99.90th=[28705], 99.95th=[28705], 00:11:09.756 | 99.99th=[28705] 00:11:09.756 write: IOPS=6822, BW=26.6MiB/s (27.9MB/s)(26.8MiB/1007msec); 0 zone resets 00:11:09.756 slat (nsec): min=1632, max=12349k, avg=76743.95, stdev=437400.88 00:11:09.756 clat (usec): min=1217, max=42504, avg=10176.91, stdev=6088.00 00:11:09.756 lat (usec): min=1227, max=42509, avg=10253.66, stdev=6112.57 00:11:09.756 clat percentiles (usec): 00:11:09.756 | 1.00th=[ 3359], 5.00th=[ 4621], 10.00th=[ 5145], 20.00th=[ 5866], 00:11:09.756 | 30.00th=[ 6456], 40.00th=[ 7046], 50.00th=[ 7963], 60.00th=[ 9503], 00:11:09.756 | 70.00th=[11469], 80.00th=[14353], 90.00th=[17433], 95.00th=[21103], 00:11:09.756 | 99.00th=[33817], 99.50th=[40109], 99.90th=[42206], 99.95th=[42730], 00:11:09.756 | 99.99th=[42730] 00:11:09.756 bw ( KiB/s): min=26000, max=27944, per=29.20%, avg=26972.00, stdev=1374.62, samples=2 00:11:09.756 iops : min= 6500, max= 6986, avg=6743.00, stdev=343.65, samples=2 00:11:09.756 lat (msec) : 2=0.02%, 4=1.18%, 10=69.95%, 20=24.33%, 50=4.52% 00:11:09.756 cpu : usr=3.68%, sys=6.06%, ctx=587, majf=0, minf=1 00:11:09.756 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:11:09.756 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:09.756 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:09.756 issued rwts: total=6656,6870,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:09.756 latency : target=0, window=0, 
percentile=100.00%, depth=128 00:11:09.756 job1: (groupid=0, jobs=1): err= 0: pid=133880: Thu Jul 25 15:07:01 2024 00:11:09.756 read: IOPS=6162, BW=24.1MiB/s (25.2MB/s)(24.1MiB/1002msec) 00:11:09.756 slat (nsec): min=883, max=20751k, avg=77231.41, stdev=652684.88 00:11:09.756 clat (usec): min=1448, max=65809, avg=10220.42, stdev=9353.60 00:11:09.756 lat (usec): min=3960, max=65820, avg=10297.65, stdev=9432.33 00:11:09.756 clat percentiles (usec): 00:11:09.756 | 1.00th=[ 5080], 5.00th=[ 5866], 10.00th=[ 6390], 20.00th=[ 6915], 00:11:09.756 | 30.00th=[ 7242], 40.00th=[ 7570], 50.00th=[ 7767], 60.00th=[ 7963], 00:11:09.756 | 70.00th=[ 8291], 80.00th=[ 8979], 90.00th=[11207], 95.00th=[33162], 00:11:09.756 | 99.00th=[52167], 99.50th=[53740], 99.90th=[56361], 99.95th=[61604], 00:11:09.756 | 99.99th=[65799] 00:11:09.756 write: IOPS=6642, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1002msec); 0 zone resets 00:11:09.756 slat (nsec): min=1542, max=8704.5k, avg=75448.98, stdev=424821.49 00:11:09.756 clat (usec): min=3288, max=38113, avg=9571.18, stdev=4548.79 00:11:09.756 lat (usec): min=3290, max=38115, avg=9646.63, stdev=4576.46 00:11:09.756 clat percentiles (usec): 00:11:09.756 | 1.00th=[ 4424], 5.00th=[ 5538], 10.00th=[ 6063], 20.00th=[ 6718], 00:11:09.756 | 30.00th=[ 7177], 40.00th=[ 7570], 50.00th=[ 8094], 60.00th=[ 8717], 00:11:09.756 | 70.00th=[ 9896], 80.00th=[11731], 90.00th=[15008], 95.00th=[18482], 00:11:09.756 | 99.00th=[33817], 99.50th=[34341], 99.90th=[34866], 99.95th=[35914], 00:11:09.756 | 99.99th=[38011] 00:11:09.756 bw ( KiB/s): min=20408, max=32080, per=28.41%, avg=26244.00, stdev=8253.35, samples=2 00:11:09.756 iops : min= 5102, max= 8020, avg=6561.00, stdev=2063.34, samples=2 00:11:09.756 lat (msec) : 2=0.01%, 4=0.23%, 10=78.42%, 20=16.90%, 50=3.58% 00:11:09.756 lat (msec) : 100=0.86% 00:11:09.756 cpu : usr=3.50%, sys=3.70%, ctx=734, majf=0, minf=1 00:11:09.756 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:11:09.756 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:09.756 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:09.756 issued rwts: total=6175,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:09.756 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:09.756 job2: (groupid=0, jobs=1): err= 0: pid=133881: Thu Jul 25 15:07:01 2024 00:11:09.756 read: IOPS=5357, BW=20.9MiB/s (21.9MB/s)(21.0MiB/1004msec) 00:11:09.756 slat (nsec): min=963, max=12069k, avg=84258.97, stdev=684093.66 00:11:09.756 clat (usec): min=1102, max=34688, avg=12335.38, stdev=4368.41 00:11:09.756 lat (usec): min=3807, max=34693, avg=12419.64, stdev=4412.12 00:11:09.756 clat percentiles (usec): 00:11:09.756 | 1.00th=[ 4621], 5.00th=[ 6652], 10.00th=[ 7701], 20.00th=[ 8848], 00:11:09.756 | 30.00th=[ 9896], 40.00th=[10814], 50.00th=[11600], 60.00th=[12256], 00:11:09.756 | 70.00th=[13435], 80.00th=[15401], 90.00th=[18744], 95.00th=[20317], 00:11:09.756 | 99.00th=[26346], 99.50th=[30802], 99.90th=[31065], 99.95th=[31065], 00:11:09.756 | 99.99th=[34866] 00:11:09.756 write: IOPS=5609, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1004msec); 0 zone resets 00:11:09.756 slat (nsec): min=1668, max=21044k, avg=73497.11, stdev=633137.00 00:11:09.756 clat (usec): min=1556, max=40876, avg=10818.23, stdev=6200.65 00:11:09.756 lat (usec): min=1567, max=40884, avg=10891.73, stdev=6224.70 00:11:09.756 clat percentiles (usec): 00:11:09.756 | 1.00th=[ 2704], 5.00th=[ 4293], 10.00th=[ 5080], 20.00th=[ 5932], 00:11:09.756 | 30.00th=[ 7111], 40.00th=[ 8029], 50.00th=[ 9110], 60.00th=[10159], 00:11:09.756 | 70.00th=[11469], 80.00th=[14746], 90.00th=[20317], 95.00th=[23462], 00:11:09.756 | 99.00th=[31851], 99.50th=[33162], 99.90th=[40633], 99.95th=[40633], 00:11:09.756 | 99.99th=[40633] 00:11:09.756 bw ( KiB/s): min=20488, max=24568, per=24.39%, avg=22528.00, stdev=2885.00, samples=2 00:11:09.756 iops : min= 5122, max= 6142, avg=5632.00, stdev=721.25, samples=2 00:11:09.756 lat (msec) : 2=0.14%, 
4=1.49%, 10=43.47%, 20=46.53%, 50=8.38% 00:11:09.756 cpu : usr=4.09%, sys=6.38%, ctx=337, majf=0, minf=1 00:11:09.756 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:11:09.756 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:09.756 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:09.756 issued rwts: total=5379,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:09.756 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:09.757 job3: (groupid=0, jobs=1): err= 0: pid=133882: Thu Jul 25 15:07:01 2024 00:11:09.757 read: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec) 00:11:09.757 slat (nsec): min=893, max=15701k, avg=88871.96, stdev=745108.65 00:11:09.757 clat (usec): min=2107, max=48509, avg=13465.22, stdev=7494.66 00:11:09.757 lat (usec): min=2131, max=48517, avg=13554.10, stdev=7551.93 00:11:09.757 clat percentiles (usec): 00:11:09.757 | 1.00th=[ 3130], 5.00th=[ 5735], 10.00th=[ 7373], 20.00th=[ 8455], 00:11:09.757 | 30.00th=[ 9372], 40.00th=[10159], 50.00th=[11207], 60.00th=[12256], 00:11:09.757 | 70.00th=[14091], 80.00th=[17695], 90.00th=[22938], 95.00th=[28705], 00:11:09.757 | 99.00th=[45876], 99.50th=[48497], 99.90th=[48497], 99.95th=[48497], 00:11:09.757 | 99.99th=[48497] 00:11:09.757 write: IOPS=4082, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec); 0 zone resets 00:11:09.757 slat (nsec): min=1610, max=9324.8k, avg=113625.80, stdev=618634.35 00:11:09.757 clat (usec): min=1418, max=74074, avg=17594.95, stdev=13556.20 00:11:09.757 lat (usec): min=1426, max=74084, avg=17708.58, stdev=13632.01 00:11:09.757 clat percentiles (usec): 00:11:09.757 | 1.00th=[ 1909], 5.00th=[ 2802], 10.00th=[ 4555], 20.00th=[ 7504], 00:11:09.757 | 30.00th=[10945], 40.00th=[12649], 50.00th=[14353], 60.00th=[18220], 00:11:09.757 | 70.00th=[20055], 80.00th=[22676], 90.00th=[30802], 95.00th=[46400], 00:11:09.757 | 99.00th=[69731], 99.50th=[72877], 99.90th=[73925], 99.95th=[73925], 00:11:09.757 | 
99.99th=[73925] 00:11:09.757 bw ( KiB/s): min=12288, max=20480, per=17.74%, avg=16384.00, stdev=5792.62, samples=2 00:11:09.757 iops : min= 3072, max= 5120, avg=4096.00, stdev=1448.15, samples=2 00:11:09.757 lat (msec) : 2=0.63%, 4=4.93%, 10=26.54%, 20=46.20%, 50=19.59% 00:11:09.757 lat (msec) : 100=2.11% 00:11:09.757 cpu : usr=3.29%, sys=4.79%, ctx=459, majf=0, minf=1 00:11:09.757 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:11:09.757 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:09.757 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:09.757 issued rwts: total=4096,4099,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:09.757 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:09.757 00:11:09.757 Run status group 0 (all jobs): 00:11:09.757 READ: bw=86.5MiB/s (90.7MB/s), 15.9MiB/s-25.8MiB/s (16.7MB/s-27.1MB/s), io=87.1MiB (91.4MB), run=1002-1007msec 00:11:09.757 WRITE: bw=90.2MiB/s (94.6MB/s), 15.9MiB/s-26.6MiB/s (16.7MB/s-27.9MB/s), io=90.8MiB (95.3MB), run=1002-1007msec 00:11:09.757 00:11:09.757 Disk stats (read/write): 00:11:09.757 nvme0n1: ios=5153/5275, merge=0/0, ticks=44183/51048, in_queue=95231, util=97.80% 00:11:09.757 nvme0n2: ios=4633/4751, merge=0/0, ticks=22086/20701, in_queue=42787, util=97.61% 00:11:09.757 nvme0n3: ios=4138/4185, merge=0/0, ticks=48640/43122, in_queue=91762, util=97.32% 00:11:09.757 nvme0n4: ios=3200/3584, merge=0/0, ticks=41766/50594, in_queue=92360, util=87.83% 00:11:09.757 15:07:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:11:09.757 [global] 00:11:09.757 thread=1 00:11:09.757 invalidate=1 00:11:09.757 rw=randwrite 00:11:09.757 time_based=1 00:11:09.757 runtime=1 00:11:09.757 ioengine=libaio 00:11:09.757 direct=1 00:11:09.757 bs=4096 00:11:09.757 iodepth=128 00:11:09.757 norandommap=0 
00:11:09.757 numjobs=1 00:11:09.757 00:11:09.757 verify_dump=1 00:11:09.757 verify_backlog=512 00:11:09.757 verify_state_save=0 00:11:09.757 do_verify=1 00:11:09.757 verify=crc32c-intel 00:11:09.757 [job0] 00:11:09.757 filename=/dev/nvme0n1 00:11:09.757 [job1] 00:11:09.757 filename=/dev/nvme0n2 00:11:09.757 [job2] 00:11:09.757 filename=/dev/nvme0n3 00:11:09.757 [job3] 00:11:09.757 filename=/dev/nvme0n4 00:11:09.757 Could not set queue depth (nvme0n1) 00:11:09.757 Could not set queue depth (nvme0n2) 00:11:09.757 Could not set queue depth (nvme0n3) 00:11:09.757 Could not set queue depth (nvme0n4) 00:11:10.019 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:10.019 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:10.019 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:10.019 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:10.019 fio-3.35 00:11:10.019 Starting 4 threads 00:11:11.402 00:11:11.402 job0: (groupid=0, jobs=1): err= 0: pid=134402: Thu Jul 25 15:07:03 2024 00:11:11.402 read: IOPS=5603, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1005msec) 00:11:11.402 slat (nsec): min=928, max=14410k, avg=94531.35, stdev=685578.99 00:11:11.402 clat (usec): min=4150, max=38324, avg=12656.79, stdev=6584.09 00:11:11.402 lat (usec): min=4600, max=38331, avg=12751.32, stdev=6631.20 00:11:11.402 clat percentiles (usec): 00:11:11.402 | 1.00th=[ 5604], 5.00th=[ 6063], 10.00th=[ 6652], 20.00th=[ 7242], 00:11:11.402 | 30.00th=[ 7963], 40.00th=[ 9241], 50.00th=[10552], 60.00th=[12125], 00:11:11.402 | 70.00th=[14615], 80.00th=[17433], 90.00th=[21627], 95.00th=[26870], 00:11:11.402 | 99.00th=[34341], 99.50th=[38536], 99.90th=[38536], 99.95th=[38536], 00:11:11.402 | 99.99th=[38536] 00:11:11.402 write: IOPS=5973, BW=23.3MiB/s 
(24.5MB/s)(23.4MiB/1005msec); 0 zone resets 00:11:11.402 slat (nsec): min=1580, max=8240.4k, avg=73127.88, stdev=473714.71 00:11:11.402 clat (usec): min=1050, max=21071, avg=9260.56, stdev=3181.76 00:11:11.402 lat (usec): min=1059, max=21079, avg=9333.68, stdev=3191.98 00:11:11.402 clat percentiles (usec): 00:11:11.402 | 1.00th=[ 3523], 5.00th=[ 4359], 10.00th=[ 5211], 20.00th=[ 6456], 00:11:11.402 | 30.00th=[ 7177], 40.00th=[ 8291], 50.00th=[ 9110], 60.00th=[10028], 00:11:11.402 | 70.00th=[10945], 80.00th=[11994], 90.00th=[13698], 95.00th=[14877], 00:11:11.402 | 99.00th=[17433], 99.50th=[17695], 99.90th=[21103], 99.95th=[21103], 00:11:11.402 | 99.99th=[21103] 00:11:11.402 bw ( KiB/s): min=20480, max=26528, per=29.10%, avg=23504.00, stdev=4276.58, samples=2 00:11:11.402 iops : min= 5120, max= 6632, avg=5876.00, stdev=1069.15, samples=2 00:11:11.402 lat (msec) : 2=0.03%, 4=1.67%, 10=52.11%, 20=39.83%, 50=6.37% 00:11:11.402 cpu : usr=3.69%, sys=5.28%, ctx=498, majf=0, minf=1 00:11:11.402 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:11:11.402 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:11.402 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:11.402 issued rwts: total=5632,6003,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:11.402 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:11.402 job1: (groupid=0, jobs=1): err= 0: pid=134403: Thu Jul 25 15:07:03 2024 00:11:11.402 read: IOPS=4580, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1006msec) 00:11:11.402 slat (nsec): min=909, max=17384k, avg=98555.30, stdev=582579.68 00:11:11.403 clat (usec): min=5567, max=42753, avg=12924.10, stdev=7019.44 00:11:11.403 lat (usec): min=5573, max=42762, avg=13022.66, stdev=7050.70 00:11:11.403 clat percentiles (usec): 00:11:11.403 | 1.00th=[ 6128], 5.00th=[ 7177], 10.00th=[ 7635], 20.00th=[ 8160], 00:11:11.403 | 30.00th=[ 8586], 40.00th=[ 9503], 50.00th=[10814], 60.00th=[11863], 00:11:11.403 | 
70.00th=[12780], 80.00th=[16712], 90.00th=[20317], 95.00th=[27919], 00:11:11.403 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:11:11.403 | 99.99th=[42730] 00:11:11.403 write: IOPS=4623, BW=18.1MiB/s (18.9MB/s)(18.2MiB/1006msec); 0 zone resets 00:11:11.403 slat (nsec): min=1537, max=8714.6k, avg=113724.47, stdev=442434.97 00:11:11.403 clat (usec): min=2047, max=34251, avg=14413.61, stdev=5034.68 00:11:11.403 lat (usec): min=5382, max=34835, avg=14527.34, stdev=5064.60 00:11:11.403 clat percentiles (usec): 00:11:11.403 | 1.00th=[ 6259], 5.00th=[ 7570], 10.00th=[ 8455], 20.00th=[ 9503], 00:11:11.403 | 30.00th=[10945], 40.00th=[12518], 50.00th=[13698], 60.00th=[15008], 00:11:11.403 | 70.00th=[16909], 80.00th=[19268], 90.00th=[21890], 95.00th=[23725], 00:11:11.403 | 99.00th=[26346], 99.50th=[26870], 99.90th=[28443], 99.95th=[34341], 00:11:11.403 | 99.99th=[34341] 00:11:11.403 bw ( KiB/s): min=14352, max=22467, per=22.79%, avg=18409.50, stdev=5738.17, samples=2 00:11:11.403 iops : min= 3588, max= 5616, avg=4602.00, stdev=1434.01, samples=2 00:11:11.403 lat (msec) : 4=0.01%, 10=33.52%, 20=53.25%, 50=13.22% 00:11:11.403 cpu : usr=3.18%, sys=3.08%, ctx=927, majf=0, minf=1 00:11:11.403 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:11:11.403 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:11.403 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:11.403 issued rwts: total=4608,4651,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:11.403 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:11.403 job2: (groupid=0, jobs=1): err= 0: pid=134405: Thu Jul 25 15:07:03 2024 00:11:11.403 read: IOPS=4585, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1005msec) 00:11:11.403 slat (nsec): min=925, max=16650k, avg=91069.63, stdev=692688.90 00:11:11.403 clat (usec): min=1710, max=39354, avg=13747.11, stdev=5751.54 00:11:11.403 lat (usec): min=1736, max=39364, avg=13838.18, stdev=5783.66 
00:11:11.403 clat percentiles (usec): 00:11:11.403 | 1.00th=[ 2040], 5.00th=[ 5014], 10.00th=[ 7635], 20.00th=[ 9372], 00:11:11.403 | 30.00th=[10421], 40.00th=[11338], 50.00th=[12518], 60.00th=[13829], 00:11:11.403 | 70.00th=[16057], 80.00th=[18744], 90.00th=[22414], 95.00th=[24773], 00:11:11.403 | 99.00th=[26870], 99.50th=[31589], 99.90th=[32113], 99.95th=[32113], 00:11:11.403 | 99.99th=[39584] 00:11:11.403 write: IOPS=5027, BW=19.6MiB/s (20.6MB/s)(19.7MiB/1005msec); 0 zone resets 00:11:11.403 slat (nsec): min=1524, max=12066k, avg=91456.20, stdev=572527.66 00:11:11.403 clat (usec): min=1184, max=37958, avg=12725.49, stdev=5206.19 00:11:11.403 lat (usec): min=1193, max=37967, avg=12816.94, stdev=5223.53 00:11:11.403 clat percentiles (usec): 00:11:11.403 | 1.00th=[ 3228], 5.00th=[ 5211], 10.00th=[ 6849], 20.00th=[ 8586], 00:11:11.403 | 30.00th=[ 9896], 40.00th=[10945], 50.00th=[11731], 60.00th=[13304], 00:11:11.403 | 70.00th=[15008], 80.00th=[16319], 90.00th=[19006], 95.00th=[22938], 00:11:11.403 | 99.00th=[28181], 99.50th=[30540], 99.90th=[36963], 99.95th=[36963], 00:11:11.403 | 99.99th=[38011] 00:11:11.403 bw ( KiB/s): min=19504, max=19904, per=24.39%, avg=19704.00, stdev=282.84, samples=2 00:11:11.403 iops : min= 4876, max= 4976, avg=4926.00, stdev=70.71, samples=2 00:11:11.403 lat (msec) : 2=0.59%, 4=1.88%, 10=26.80%, 20=58.45%, 50=12.28% 00:11:11.403 cpu : usr=3.39%, sys=4.68%, ctx=609, majf=0, minf=1 00:11:11.403 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:11:11.403 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:11.403 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:11.403 issued rwts: total=4608,5053,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:11.403 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:11.403 job3: (groupid=0, jobs=1): err= 0: pid=134406: Thu Jul 25 15:07:03 2024 00:11:11.403 read: IOPS=4268, BW=16.7MiB/s (17.5MB/s)(16.7MiB/1004msec) 
00:11:11.403 slat (nsec): min=956, max=15353k, avg=86228.91, stdev=708619.22 00:11:11.403 clat (usec): min=1600, max=45087, avg=13900.03, stdev=6370.23 00:11:11.403 lat (usec): min=1623, max=46122, avg=13986.26, stdev=6406.19 00:11:11.403 clat percentiles (usec): 00:11:11.403 | 1.00th=[ 2507], 5.00th=[ 4015], 10.00th=[ 4555], 20.00th=[ 8586], 00:11:11.403 | 30.00th=[10683], 40.00th=[11863], 50.00th=[14222], 60.00th=[16188], 00:11:11.403 | 70.00th=[17957], 80.00th=[18744], 90.00th=[21627], 95.00th=[23987], 00:11:11.403 | 99.00th=[28181], 99.50th=[31851], 99.90th=[42730], 99.95th=[42730], 00:11:11.403 | 99.99th=[44827] 00:11:11.403 write: IOPS=4589, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1004msec); 0 zone resets 00:11:11.403 slat (nsec): min=1599, max=10352k, avg=107421.15, stdev=654266.09 00:11:11.403 clat (usec): min=1086, max=72698, avg=14540.61, stdev=11079.28 00:11:11.403 lat (usec): min=1119, max=72707, avg=14648.03, stdev=11150.51 00:11:11.403 clat percentiles (usec): 00:11:11.403 | 1.00th=[ 2147], 5.00th=[ 4113], 10.00th=[ 6128], 20.00th=[ 8848], 00:11:11.403 | 30.00th=[10028], 40.00th=[11207], 50.00th=[11994], 60.00th=[13042], 00:11:11.403 | 70.00th=[14484], 80.00th=[16450], 90.00th=[20579], 95.00th=[42206], 00:11:11.403 | 99.00th=[64226], 99.50th=[66847], 99.90th=[72877], 99.95th=[72877], 00:11:11.403 | 99.99th=[72877] 00:11:11.403 bw ( KiB/s): min=16752, max=20112, per=22.82%, avg=18432.00, stdev=2375.88, samples=2 00:11:11.403 iops : min= 4188, max= 5028, avg=4608.00, stdev=593.97, samples=2 00:11:11.403 lat (msec) : 2=0.63%, 4=4.18%, 10=23.63%, 20=57.93%, 50=12.03% 00:11:11.403 lat (msec) : 100=1.60% 00:11:11.403 cpu : usr=3.19%, sys=5.48%, ctx=389, majf=0, minf=1 00:11:11.403 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:11:11.403 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:11.403 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:11.403 issued rwts: 
total=4286,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:11.403 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:11.403 00:11:11.403 Run status group 0 (all jobs): 00:11:11.403 READ: bw=74.3MiB/s (77.9MB/s), 16.7MiB/s-21.9MiB/s (17.5MB/s-23.0MB/s), io=74.7MiB (78.4MB), run=1004-1006msec 00:11:11.403 WRITE: bw=78.9MiB/s (82.7MB/s), 17.9MiB/s-23.3MiB/s (18.8MB/s-24.5MB/s), io=79.4MiB (83.2MB), run=1004-1006msec 00:11:11.403 00:11:11.403 Disk stats (read/write): 00:11:11.403 nvme0n1: ios=4005/4096, merge=0/0, ticks=55606/37966, in_queue=93572, util=87.07% 00:11:11.403 nvme0n2: ios=3613/3954, merge=0/0, ticks=12025/17179, in_queue=29204, util=90.95% 00:11:11.403 nvme0n3: ios=3640/3847, merge=0/0, ticks=41854/43178, in_queue=85032, util=92.39% 00:11:11.403 nvme0n4: ios=3131/3087, merge=0/0, ticks=44559/45586, in_queue=90145, util=98.62% 00:11:11.403 15:07:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:11:11.403 15:07:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=134743 00:11:11.403 15:07:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:11:11.403 15:07:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:11:11.403 [global] 00:11:11.403 thread=1 00:11:11.403 invalidate=1 00:11:11.403 rw=read 00:11:11.403 time_based=1 00:11:11.403 runtime=10 00:11:11.403 ioengine=libaio 00:11:11.403 direct=1 00:11:11.403 bs=4096 00:11:11.403 iodepth=1 00:11:11.403 norandommap=1 00:11:11.403 numjobs=1 00:11:11.403 00:11:11.403 [job0] 00:11:11.403 filename=/dev/nvme0n1 00:11:11.403 [job1] 00:11:11.403 filename=/dev/nvme0n2 00:11:11.403 [job2] 00:11:11.403 filename=/dev/nvme0n3 00:11:11.403 [job3] 00:11:11.403 filename=/dev/nvme0n4 00:11:11.403 Could not set queue depth (nvme0n1) 00:11:11.403 Could not set queue depth (nvme0n2) 00:11:11.403 Could not set queue 
depth (nvme0n3) 00:11:11.403 Could not set queue depth (nvme0n4) 00:11:11.664 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:11.664 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:11.664 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:11.664 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:11.664 fio-3.35 00:11:11.664 Starting 4 threads 00:11:14.211 15:07:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:11:14.472 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=1241088, buflen=4096 00:11:14.472 fio: pid=134930, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:11:14.472 15:07:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:11:14.733 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=8404992, buflen=4096 00:11:14.733 fio: pid=134929, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:11:14.733 15:07:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:14.733 15:07:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:11:14.733 15:07:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:14.733 15:07:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_delete Malloc1 00:11:14.733 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=376832, buflen=4096 00:11:14.733 fio: pid=134927, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:11:14.994 15:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:14.994 15:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:11:14.994 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=339968, buflen=4096 00:11:14.994 fio: pid=134928, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:11:14.994 00:11:14.994 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=134927: Thu Jul 25 15:07:07 2024 00:11:14.994 read: IOPS=31, BW=125KiB/s (128kB/s)(368KiB/2950msec) 00:11:14.994 slat (usec): min=7, max=13476, avg=169.56, stdev=1394.91 00:11:14.994 clat (usec): min=780, max=43057, avg=31654.64, stdev=18111.21 00:11:14.994 lat (usec): min=819, max=55972, avg=31825.78, stdev=18253.70 00:11:14.994 clat percentiles (usec): 00:11:14.994 | 1.00th=[ 783], 5.00th=[ 1287], 10.00th=[ 1352], 20.00th=[ 1450], 00:11:14.994 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:11:14.994 | 70.00th=[42206], 80.00th=[42730], 90.00th=[42730], 95.00th=[43254], 00:11:14.994 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:11:14.994 | 99.99th=[43254] 00:11:14.994 bw ( KiB/s): min= 96, max= 272, per=4.02%, avg=131.20, stdev=78.71, samples=5 00:11:14.994 iops : min= 24, max= 68, avg=32.80, stdev=19.68, samples=5 00:11:14.994 lat (usec) : 1000=1.08% 00:11:14.994 lat (msec) : 2=24.73%, 50=73.12% 00:11:14.994 cpu : usr=0.00%, sys=0.14%, ctx=98, majf=0, minf=1 00:11:14.994 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
>=64=0.0% 00:11:14.994 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:14.994 complete : 0=1.1%, 4=98.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:14.994 issued rwts: total=93,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:14.994 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:14.994 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=134928: Thu Jul 25 15:07:07 2024 00:11:14.994 read: IOPS=27, BW=107KiB/s (109kB/s)(332KiB/3105msec) 00:11:14.994 slat (usec): min=8, max=17450, avg=286.02, stdev=1955.63 00:11:14.994 clat (usec): min=1139, max=43239, avg=36858.26, stdev=13979.02 00:11:14.994 lat (usec): min=1165, max=59983, avg=37147.42, stdev=14224.31 00:11:14.994 clat percentiles (usec): 00:11:14.994 | 1.00th=[ 1139], 5.00th=[ 1319], 10.00th=[ 1385], 20.00th=[41681], 00:11:14.994 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:11:14.994 | 70.00th=[42206], 80.00th=[42730], 90.00th=[42730], 95.00th=[43254], 00:11:14.994 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:11:14.994 | 99.99th=[43254] 00:11:14.994 bw ( KiB/s): min= 88, max= 184, per=3.28%, avg=107.83, stdev=37.50, samples=6 00:11:14.994 iops : min= 22, max= 46, avg=26.83, stdev= 9.43, samples=6 00:11:14.994 lat (msec) : 2=13.10%, 50=85.71% 00:11:14.994 cpu : usr=0.00%, sys=0.16%, ctx=87, majf=0, minf=1 00:11:14.994 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:14.994 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:14.995 complete : 0=1.2%, 4=98.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:14.995 issued rwts: total=84,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:14.995 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:14.995 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=134929: Thu Jul 25 15:07:07 2024 00:11:14.995 read: 
IOPS=749, BW=2996KiB/s (3068kB/s)(8208KiB/2740msec) 00:11:14.995 slat (usec): min=26, max=19957, avg=44.49, stdev=546.13 00:11:14.995 clat (usec): min=722, max=1495, avg=1273.80, stdev=59.77 00:11:14.995 lat (usec): min=750, max=21286, avg=1318.30, stdev=551.12 00:11:14.995 clat percentiles (usec): 00:11:14.995 | 1.00th=[ 1090], 5.00th=[ 1172], 10.00th=[ 1205], 20.00th=[ 1237], 00:11:14.995 | 30.00th=[ 1254], 40.00th=[ 1270], 50.00th=[ 1287], 60.00th=[ 1287], 00:11:14.995 | 70.00th=[ 1303], 80.00th=[ 1319], 90.00th=[ 1336], 95.00th=[ 1352], 00:11:14.995 | 99.00th=[ 1401], 99.50th=[ 1418], 99.90th=[ 1467], 99.95th=[ 1467], 00:11:14.995 | 99.99th=[ 1500] 00:11:14.995 bw ( KiB/s): min= 3048, max= 3080, per=94.01%, avg=3064.00, stdev=12.65, samples=5 00:11:14.995 iops : min= 762, max= 770, avg=766.00, stdev= 3.16, samples=5 00:11:14.995 lat (usec) : 750=0.05%, 1000=0.24% 00:11:14.995 lat (msec) : 2=99.66% 00:11:14.995 cpu : usr=1.31%, sys=3.14%, ctx=2058, majf=0, minf=1 00:11:14.995 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:14.995 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:14.995 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:14.995 issued rwts: total=2053,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:14.995 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:14.995 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=134930: Thu Jul 25 15:07:07 2024 00:11:14.995 read: IOPS=118, BW=471KiB/s (482kB/s)(1212KiB/2573msec) 00:11:14.995 slat (nsec): min=7429, max=55164, avg=25122.50, stdev=4055.38 00:11:14.995 clat (usec): min=1056, max=43055, avg=8386.34, stdev=15492.26 00:11:14.995 lat (usec): min=1065, max=43081, avg=8411.46, stdev=15492.62 00:11:14.995 clat percentiles (usec): 00:11:14.995 | 1.00th=[ 1074], 5.00th=[ 1172], 10.00th=[ 1237], 20.00th=[ 1287], 00:11:14.995 | 30.00th=[ 1319], 40.00th=[ 1336], 
50.00th=[ 1369], 60.00th=[ 1418], 00:11:14.995 | 70.00th=[ 1450], 80.00th=[ 1532], 90.00th=[42206], 95.00th=[42730], 00:11:14.995 | 99.00th=[42730], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:11:14.995 | 99.99th=[43254] 00:11:14.995 bw ( KiB/s): min= 88, max= 1368, per=14.76%, avg=481.60, stdev=573.48, samples=5 00:11:14.995 iops : min= 22, max= 342, avg=120.40, stdev=143.37, samples=5 00:11:14.995 lat (msec) : 2=82.57%, 50=17.11% 00:11:14.995 cpu : usr=0.12%, sys=0.35%, ctx=305, majf=0, minf=2 00:11:14.995 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:14.995 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:14.995 complete : 0=0.3%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:14.995 issued rwts: total=304,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:14.995 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:14.995 00:11:14.995 Run status group 0 (all jobs): 00:11:14.995 READ: bw=3259KiB/s (3337kB/s), 107KiB/s-2996KiB/s (109kB/s-3068kB/s), io=9.88MiB (10.4MB), run=2573-3105msec 00:11:14.995 00:11:14.995 Disk stats (read/write): 00:11:14.995 nvme0n1: ios=125/0, merge=0/0, ticks=3747/0, in_queue=3747, util=98.63% 00:11:14.995 nvme0n2: ios=83/0, merge=0/0, ticks=3059/0, in_queue=3059, util=95.01% 00:11:14.995 nvme0n3: ios=2021/0, merge=0/0, ticks=3462/0, in_queue=3462, util=99.37% 00:11:14.995 nvme0n4: ios=332/0, merge=0/0, ticks=3043/0, in_queue=3043, util=100.00% 00:11:15.255 15:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:15.255 15:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:11:15.255 15:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 
00:11:15.255 15:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:11:15.515 15:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:15.515 15:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:11:15.775 15:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:15.775 15:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:11:15.775 15:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:11:15.775 15:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 134743 00:11:15.775 15:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:11:15.775 15:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:16.036 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:16.036 15:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:16.036 15:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:11:16.036 15:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:16.036 15:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:16.036 15:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l 
-o NAME,SERIAL 00:11:16.036 15:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:16.036 15:07:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:11:16.036 15:07:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:11:16.036 15:07:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:11:16.036 nvmf hotplug test: fio failed as expected 00:11:16.036 15:07:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:16.036 15:07:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:11:16.036 15:07:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:11:16.036 15:07:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:11:16.036 15:07:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:11:16.036 15:07:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:11:16.036 15:07:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:16.036 15:07:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:11:16.036 15:07:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:16.036 15:07:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:11:16.036 15:07:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:16.036 15:07:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:16.036 rmmod 
nvme_tcp 00:11:16.036 rmmod nvme_fabrics 00:11:16.296 rmmod nvme_keyring 00:11:16.296 15:07:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:16.296 15:07:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:11:16.296 15:07:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:11:16.296 15:07:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 131154 ']' 00:11:16.296 15:07:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 131154 00:11:16.296 15:07:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 131154 ']' 00:11:16.296 15:07:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 131154 00:11:16.296 15:07:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:11:16.296 15:07:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:16.296 15:07:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 131154 00:11:16.296 15:07:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:16.296 15:07:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:16.296 15:07:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 131154' 00:11:16.296 killing process with pid 131154 00:11:16.296 15:07:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 131154 00:11:16.296 15:07:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 131154 00:11:16.296 15:07:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:16.296 
15:07:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:16.296 15:07:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:16.296 15:07:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:16.296 15:07:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:16.296 15:07:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:16.296 15:07:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:16.296 15:07:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:18.841 15:07:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:18.841 00:11:18.841 real 0m28.258s 00:11:18.841 user 2m38.700s 00:11:18.841 sys 0m8.891s 00:11:18.841 15:07:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:18.841 15:07:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:18.841 ************************************ 00:11:18.841 END TEST nvmf_fio_target 00:11:18.841 ************************************ 00:11:18.841 15:07:10 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:18.841 15:07:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:18.841 15:07:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:18.841 15:07:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:18.841 ************************************ 00:11:18.841 START TEST nvmf_bdevio 
00:11:18.841 ************************************ 00:11:18.841 15:07:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:18.841 * Looking for test storage... 00:11:18.841 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:18.841 15:07:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:18.841 15:07:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:11:18.841 15:07:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:18.841 15:07:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:18.841 15:07:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:18.841 15:07:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:18.841 15:07:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:18.841 15:07:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:18.841 15:07:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:18.841 15:07:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:18.841 15:07:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:18.841 15:07:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:18.841 15:07:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:18.841 15:07:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # 
NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:18.841 15:07:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:18.841 15:07:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:18.841 15:07:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:18.841 15:07:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:18.841 15:07:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:18.841 15:07:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:18.841 15:07:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:18.841 15:07:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:18.841 15:07:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:18.841 15:07:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:18.842 15:07:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:18.842 15:07:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:11:18.842 15:07:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:18.842 15:07:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:11:18.842 15:07:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:18.842 15:07:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:18.842 15:07:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:18.842 15:07:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:18.842 15:07:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:18.842 15:07:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:18.842 15:07:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:18.842 15:07:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:18.842 15:07:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:18.842 15:07:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:18.842 15:07:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:11:18.842 15:07:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:18.842 15:07:10 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:18.842 15:07:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:18.842 15:07:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:18.842 15:07:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:18.842 15:07:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:18.842 15:07:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:18.842 15:07:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:18.842 15:07:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:18.842 15:07:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:18.842 15:07:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:11:18.842 15:07:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:25.475 15:07:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:25.475 15:07:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:11:25.475 15:07:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:25.475 15:07:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:25.475 15:07:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:25.475 15:07:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:25.475 15:07:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:25.475 15:07:17 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:11:25.475 15:07:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:25.475 15:07:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:11:25.475 15:07:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:11:25.475 15:07:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:11:25.475 15:07:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:11:25.475 15:07:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:11:25.475 15:07:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:11:25.475 15:07:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:25.475 15:07:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:25.475 15:07:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:25.475 15:07:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:25.475 15:07:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:25.475 15:07:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:25.475 15:07:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:25.475 15:07:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:25.475 15:07:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:25.475 15:07:17 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:25.475 15:07:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:25.475 15:07:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:25.475 15:07:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:25.475 15:07:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:25.475 15:07:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:25.475 15:07:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:25.475 15:07:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:25.475 15:07:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:25.475 15:07:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:25.475 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:25.475 15:07:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:25.475 15:07:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:25.475 15:07:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:25.475 15:07:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:25.475 15:07:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:25.475 15:07:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:25.475 15:07:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:25.475 Found 
0000:4b:00.1 (0x8086 - 0x159b) 00:11:25.475 15:07:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:25.475 15:07:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:25.475 15:07:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:25.475 15:07:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:25.475 15:07:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:25.475 15:07:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:25.475 15:07:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:25.475 15:07:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:25.475 15:07:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:25.475 15:07:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:25.475 15:07:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:25.475 15:07:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:25.475 15:07:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:25.475 15:07:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:25.475 15:07:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:25.475 15:07:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:25.475 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:25.475 15:07:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:11:25.475 15:07:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:25.475 15:07:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:25.475 15:07:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:25.475 15:07:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:25.475 15:07:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:25.475 15:07:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:25.475 15:07:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:25.475 15:07:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:25.475 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:25.476 15:07:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:25.476 15:07:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:25.476 15:07:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:11:25.476 15:07:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:25.476 15:07:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:25.476 15:07:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:25.476 15:07:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:25.476 15:07:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:25.476 15:07:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@231 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:25.476 15:07:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:25.476 15:07:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:25.476 15:07:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:25.476 15:07:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:25.476 15:07:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:25.476 15:07:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:25.476 15:07:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:25.476 15:07:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:25.476 15:07:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:25.476 15:07:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:25.738 15:07:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:25.738 15:07:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:25.738 15:07:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:25.738 15:07:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:25.738 15:07:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:25.738 15:07:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:25.738 15:07:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:25.738 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:25.738 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.733 ms 00:11:25.738 00:11:25.738 --- 10.0.0.2 ping statistics --- 00:11:25.738 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:25.738 rtt min/avg/max/mdev = 0.733/0.733/0.733/0.000 ms 00:11:25.738 15:07:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:25.738 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:25.738 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.408 ms 00:11:25.738 00:11:25.738 --- 10.0.0.1 ping statistics --- 00:11:25.738 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:25.738 rtt min/avg/max/mdev = 0.408/0.408/0.408/0.000 ms 00:11:25.738 15:07:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:25.738 15:07:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:11:25.738 15:07:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:25.738 15:07:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:25.738 15:07:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:25.738 15:07:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:25.738 15:07:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:25.738 15:07:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:25.738 15:07:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:26.000 15:07:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:11:26.000 15:07:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:26.000 15:07:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:26.000 15:07:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:26.000 15:07:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=139958 00:11:26.000 15:07:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 139958 00:11:26.000 15:07:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:11:26.000 15:07:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 139958 ']' 00:11:26.000 15:07:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:26.000 15:07:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:26.000 15:07:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:26.000 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:26.000 15:07:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:26.000 15:07:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:26.000 [2024-07-25 15:07:18.019850] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:11:26.000 [2024-07-25 15:07:18.019915] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:26.000 EAL: No free 2048 kB hugepages reported on node 1 00:11:26.000 [2024-07-25 15:07:18.109849] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:26.261 [2024-07-25 15:07:18.203857] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:26.261 [2024-07-25 15:07:18.203919] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:26.261 [2024-07-25 15:07:18.203927] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:26.261 [2024-07-25 15:07:18.203934] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:26.261 [2024-07-25 15:07:18.203940] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:26.262 [2024-07-25 15:07:18.204104] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:11:26.262 [2024-07-25 15:07:18.204266] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:11:26.262 [2024-07-25 15:07:18.204429] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:26.262 [2024-07-25 15:07:18.204429] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:11:26.835 15:07:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:26.835 15:07:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:11:26.835 15:07:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:26.835 15:07:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:26.835 15:07:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:26.835 15:07:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:26.835 15:07:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:26.835 15:07:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.835 15:07:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:26.835 [2024-07-25 15:07:18.878928] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:26.835 15:07:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.835 15:07:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:26.835 15:07:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.835 15:07:18 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:26.835 Malloc0 00:11:26.835 15:07:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.835 15:07:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:26.835 15:07:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.835 15:07:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:26.835 15:07:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.835 15:07:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:26.835 15:07:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.835 15:07:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:26.835 15:07:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.835 15:07:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:26.835 15:07:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.835 15:07:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:26.835 [2024-07-25 15:07:18.928238] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:26.835 15:07:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.835 15:07:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio 
--json /dev/fd/62 00:11:26.835 15:07:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:11:26.835 15:07:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:11:26.836 15:07:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:11:26.836 15:07:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:11:26.836 15:07:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:11:26.836 { 00:11:26.836 "params": { 00:11:26.836 "name": "Nvme$subsystem", 00:11:26.836 "trtype": "$TEST_TRANSPORT", 00:11:26.836 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:26.836 "adrfam": "ipv4", 00:11:26.836 "trsvcid": "$NVMF_PORT", 00:11:26.836 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:26.836 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:26.836 "hdgst": ${hdgst:-false}, 00:11:26.836 "ddgst": ${ddgst:-false} 00:11:26.836 }, 00:11:26.836 "method": "bdev_nvme_attach_controller" 00:11:26.836 } 00:11:26.836 EOF 00:11:26.836 )") 00:11:26.836 15:07:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:11:26.836 15:07:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 
00:11:26.836 15:07:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:11:26.836 15:07:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:11:26.836 "params": { 00:11:26.836 "name": "Nvme1", 00:11:26.836 "trtype": "tcp", 00:11:26.836 "traddr": "10.0.0.2", 00:11:26.836 "adrfam": "ipv4", 00:11:26.836 "trsvcid": "4420", 00:11:26.836 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:26.836 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:26.836 "hdgst": false, 00:11:26.836 "ddgst": false 00:11:26.836 }, 00:11:26.836 "method": "bdev_nvme_attach_controller" 00:11:26.836 }' 00:11:26.836 [2024-07-25 15:07:18.983563] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:11:26.836 [2024-07-25 15:07:18.983629] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid140237 ] 00:11:26.836 EAL: No free 2048 kB hugepages reported on node 1 00:11:27.097 [2024-07-25 15:07:19.049984] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:27.097 [2024-07-25 15:07:19.125819] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:27.097 [2024-07-25 15:07:19.125937] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:27.097 [2024-07-25 15:07:19.125940] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:27.358 I/O targets: 00:11:27.358 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:11:27.358 00:11:27.358 00:11:27.358 CUnit - A unit testing framework for C - Version 2.1-3 00:11:27.358 http://cunit.sourceforge.net/ 00:11:27.358 00:11:27.358 00:11:27.358 Suite: bdevio tests on: Nvme1n1 00:11:27.358 Test: blockdev write read block ...passed 00:11:27.358 Test: blockdev write zeroes read block ...passed 00:11:27.358 Test: blockdev write zeroes read no split 
...passed 00:11:27.358 Test: blockdev write zeroes read split ...passed 00:11:27.619 Test: blockdev write zeroes read split partial ...passed 00:11:27.619 Test: blockdev reset ...[2024-07-25 15:07:19.602375] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:11:27.619 [2024-07-25 15:07:19.602436] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1777ce0 (9): Bad file descriptor 00:11:27.619 [2024-07-25 15:07:19.620722] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:11:27.619 passed 00:11:27.619 Test: blockdev write read 8 blocks ...passed 00:11:27.619 Test: blockdev write read size > 128k ...passed 00:11:27.619 Test: blockdev write read invalid size ...passed 00:11:27.619 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:27.619 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:27.619 Test: blockdev write read max offset ...passed 00:11:27.619 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:27.619 Test: blockdev writev readv 8 blocks ...passed 00:11:27.619 Test: blockdev writev readv 30 x 1block ...passed 00:11:27.619 Test: blockdev writev readv block ...passed 00:11:27.880 Test: blockdev writev readv size > 128k ...passed 00:11:27.880 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:27.880 Test: blockdev comparev and writev ...[2024-07-25 15:07:19.854490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:27.880 [2024-07-25 15:07:19.854515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:11:27.880 [2024-07-25 15:07:19.854526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 
00:11:27.880 [2024-07-25 15:07:19.854532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:11:27.880 [2024-07-25 15:07:19.855162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:27.880 [2024-07-25 15:07:19.855171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:11:27.880 [2024-07-25 15:07:19.855180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:27.880 [2024-07-25 15:07:19.855185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:11:27.880 [2024-07-25 15:07:19.855834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:27.880 [2024-07-25 15:07:19.855843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:11:27.880 [2024-07-25 15:07:19.855856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:27.880 [2024-07-25 15:07:19.855861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:11:27.880 [2024-07-25 15:07:19.856512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:27.880 [2024-07-25 15:07:19.856521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:11:27.880 [2024-07-25 15:07:19.856531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:27.880 [2024-07-25 15:07:19.856536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:11:27.880 passed 00:11:27.880 Test: blockdev nvme passthru rw ...passed 00:11:27.880 Test: blockdev nvme passthru vendor specific ...[2024-07-25 15:07:19.942243] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:27.880 [2024-07-25 15:07:19.942254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:11:27.880 [2024-07-25 15:07:19.942829] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:27.880 [2024-07-25 15:07:19.942837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:11:27.880 [2024-07-25 15:07:19.943386] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:27.880 [2024-07-25 15:07:19.943395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:11:27.880 [2024-07-25 15:07:19.943982] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:27.880 [2024-07-25 15:07:19.943990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:11:27.880 passed 00:11:27.880 Test: blockdev nvme admin passthru ...passed 00:11:27.880 Test: blockdev copy ...passed 00:11:27.880 00:11:27.880 Run Summary: Type Total Ran Passed Failed Inactive 00:11:27.880 suites 1 1 n/a 0 0 00:11:27.880 tests 23 23 23 0 0 00:11:27.880 asserts 152 152 152 0 n/a 00:11:27.880 00:11:27.880 Elapsed time = 
1.230 seconds 00:11:28.142 15:07:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:28.142 15:07:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.142 15:07:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:28.142 15:07:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.142 15:07:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:11:28.142 15:07:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:11:28.142 15:07:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:28.142 15:07:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:11:28.142 15:07:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:28.142 15:07:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:11:28.142 15:07:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:28.142 15:07:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:28.142 rmmod nvme_tcp 00:11:28.142 rmmod nvme_fabrics 00:11:28.142 rmmod nvme_keyring 00:11:28.142 15:07:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:28.142 15:07:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:11:28.142 15:07:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:11:28.142 15:07:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 139958 ']' 00:11:28.142 15:07:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 139958 00:11:28.142 15:07:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
common/autotest_common.sh@950 -- # '[' -z 139958 ']' 00:11:28.142 15:07:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 139958 00:11:28.142 15:07:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:11:28.142 15:07:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:28.142 15:07:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 139958 00:11:28.142 15:07:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:11:28.142 15:07:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:11:28.142 15:07:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 139958' 00:11:28.142 killing process with pid 139958 00:11:28.142 15:07:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 139958 00:11:28.142 15:07:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 139958 00:11:28.403 15:07:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:28.404 15:07:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:28.404 15:07:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:28.404 15:07:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:28.404 15:07:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:28.404 15:07:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:28.404 15:07:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:28.404 15:07:20 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:30.319 15:07:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:30.319 00:11:30.319 real 0m11.884s 00:11:30.319 user 0m13.324s 00:11:30.319 sys 0m5.877s 00:11:30.319 15:07:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:30.319 15:07:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:30.319 ************************************ 00:11:30.319 END TEST nvmf_bdevio 00:11:30.319 ************************************ 00:11:30.580 15:07:22 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:11:30.580 00:11:30.580 real 4m54.880s 00:11:30.580 user 11m43.152s 00:11:30.581 sys 1m42.649s 00:11:30.581 15:07:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:30.581 15:07:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:30.581 ************************************ 00:11:30.581 END TEST nvmf_target_core 00:11:30.581 ************************************ 00:11:30.581 15:07:22 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:30.581 15:07:22 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:30.581 15:07:22 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:30.581 15:07:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:30.581 ************************************ 00:11:30.581 START TEST nvmf_target_extra 00:11:30.581 ************************************ 00:11:30.581 15:07:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:30.581 * Looking for test storage... 
00:11:30.581 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:11:30.581 15:07:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:30.581 15:07:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:11:30.581 15:07:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:30.581 15:07:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:30.581 15:07:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:30.581 15:07:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:30.581 15:07:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:30.581 15:07:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:30.581 15:07:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:30.581 15:07:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:30.581 15:07:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:30.581 15:07:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:30.581 15:07:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:30.581 15:07:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:30.581 15:07:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:30.581 15:07:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:30.581 15:07:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:30.581 15:07:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 
-- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:30.581 15:07:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:30.581 15:07:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:30.581 15:07:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:30.581 15:07:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:30.581 15:07:22 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:30.581 15:07:22 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:30.581 15:07:22 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:30.581 15:07:22 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:11:30.581 15:07:22 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:30.581 15:07:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@47 -- # : 0 00:11:30.581 15:07:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:30.581 15:07:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:30.581 15:07:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:30.581 15:07:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:30.581 15:07:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:30.581 15:07:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:30.581 15:07:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:30.581 15:07:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:30.581 15:07:22 
nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:11:30.581 15:07:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:11:30.581 15:07:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:11:30.581 15:07:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:30.581 15:07:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:30.581 15:07:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:30.581 15:07:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:30.844 ************************************ 00:11:30.844 START TEST nvmf_example 00:11:30.844 ************************************ 00:11:30.844 15:07:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:30.844 * Looking for test storage... 
00:11:30.844 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:30.844 15:07:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:30.844 15:07:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:11:30.844 15:07:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:30.844 15:07:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:30.844 15:07:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:30.844 15:07:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:30.844 15:07:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:30.844 15:07:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:30.844 15:07:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:30.844 15:07:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:30.844 15:07:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:30.844 15:07:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:30.844 15:07:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:30.844 15:07:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:30.844 15:07:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:30.844 15:07:22 nvmf_tcp.nvmf_target_extra.nvmf_example 
-- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:30.844 15:07:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:30.844 15:07:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:30.844 15:07:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:30.844 15:07:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:30.844 15:07:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:30.844 15:07:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:30.844 15:07:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:30.844 15:07:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:30.844 15:07:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:30.844 15:07:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:11:30.844 15:07:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:30.844 15:07:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:11:30.844 15:07:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:30.844 15:07:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:30.844 15:07:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:30.844 15:07:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:30.844 15:07:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:30.844 15:07:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:30.844 15:07:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:30.844 15:07:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:30.844 15:07:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:11:30.844 15:07:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:11:30.844 15:07:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:11:30.844 15:07:22 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:11:30.844 15:07:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:11:30.844 15:07:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:11:30.844 15:07:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:11:30.844 15:07:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:11:30.844 15:07:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:30.844 15:07:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:30.844 15:07:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:11:30.844 15:07:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:30.844 15:07:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:30.844 15:07:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:30.844 15:07:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:30.844 15:07:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:30.844 15:07:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:30.844 15:07:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:30.844 15:07:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:30.844 15:07:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:30.844 15:07:22 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:30.844 15:07:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:11:30.845 15:07:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:38.993 15:07:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:38.993 15:07:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:11:38.993 15:07:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:38.993 15:07:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:38.993 15:07:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:38.993 15:07:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:38.993 15:07:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:38.993 15:07:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:11:38.993 15:07:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:38.993 15:07:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:11:38.993 15:07:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:11:38.993 15:07:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:11:38.993 15:07:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:11:38.994 15:07:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:11:38.994 15:07:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:11:38.994 15:07:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@301 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:38.994 15:07:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:38.994 15:07:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:38.994 15:07:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:38.994 15:07:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:38.994 15:07:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:38.994 15:07:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:38.994 15:07:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:38.994 15:07:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:38.994 15:07:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:38.994 15:07:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:38.994 15:07:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:38.994 15:07:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:38.994 15:07:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:38.994 15:07:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:38.994 15:07:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:38.994 15:07:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:38.994 15:07:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:38.994 15:07:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:38.994 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:38.994 15:07:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:38.994 15:07:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:38.994 15:07:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:38.994 15:07:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:38.994 15:07:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:38.994 15:07:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:38.994 15:07:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:38.994 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:38.994 15:07:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:38.994 15:07:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:38.994 15:07:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:38.994 15:07:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:38.994 15:07:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:38.994 15:07:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:38.994 15:07:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:38.994 15:07:29 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:38.994 15:07:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:38.994 15:07:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:38.994 15:07:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:38.994 15:07:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:38.994 15:07:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:38.994 15:07:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:38.994 15:07:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:38.994 15:07:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:38.994 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:38.994 15:07:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:38.994 15:07:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:38.994 15:07:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:38.994 15:07:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:38.994 15:07:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:38.994 15:07:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:38.994 15:07:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:38.994 15:07:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:38.994 15:07:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:38.994 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:38.994 15:07:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:38.994 15:07:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:38.994 15:07:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:11:38.994 15:07:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:38.994 15:07:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:38.994 15:07:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:38.994 15:07:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:38.994 15:07:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:38.994 15:07:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:38.994 15:07:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:38.994 15:07:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:38.994 15:07:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:38.994 15:07:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:38.994 15:07:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:38.994 15:07:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:11:38.994 15:07:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:38.994 15:07:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:38.994 15:07:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:38.994 15:07:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:38.994 15:07:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:38.994 15:07:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:38.994 15:07:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:38.994 15:07:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:38.994 15:07:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:38.994 15:07:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:38.994 15:07:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:38.994 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:38.994 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.618 ms 00:11:38.994 00:11:38.994 --- 10.0.0.2 ping statistics --- 00:11:38.994 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:38.994 rtt min/avg/max/mdev = 0.618/0.618/0.618/0.000 ms 00:11:38.994 15:07:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:38.994 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:38.994 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.392 ms 00:11:38.994 00:11:38.994 --- 10.0.0.1 ping statistics --- 00:11:38.994 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:38.994 rtt min/avg/max/mdev = 0.392/0.392/0.392/0.000 ms 00:11:38.994 15:07:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:38.994 15:07:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:11:38.994 15:07:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:38.994 15:07:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:38.994 15:07:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:38.994 15:07:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:38.994 15:07:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:38.994 15:07:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:38.994 15:07:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:38.994 15:07:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:11:38.994 15:07:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:11:38.994 15:07:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:38.995 15:07:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:38.995 15:07:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:11:38.995 15:07:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # 
NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:11:38.995 15:07:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=144779 00:11:38.995 15:07:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:38.995 15:07:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 144779 00:11:38.995 15:07:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:11:38.995 15:07:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@831 -- # '[' -z 144779 ']' 00:11:38.995 15:07:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:38.995 15:07:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:38.995 15:07:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:38.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:38.995 15:07:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:38.995 15:07:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:38.995 EAL: No free 2048 kB hugepages reported on node 1 00:11:38.995 15:07:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:38.995 15:07:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # return 0 00:11:38.995 15:07:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:11:38.995 15:07:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:38.995 15:07:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:38.995 15:07:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:38.995 15:07:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.995 15:07:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:38.995 15:07:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.995 15:07:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:11:38.995 15:07:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.995 15:07:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:39.256 15:07:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.256 15:07:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:11:39.256 15:07:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:39.256 15:07:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.256 15:07:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:39.256 15:07:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.256 15:07:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:11:39.256 15:07:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:39.256 15:07:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.256 15:07:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:39.256 15:07:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.256 15:07:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:39.256 15:07:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.256 15:07:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:39.256 15:07:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.256 15:07:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:11:39.256 15:07:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp 
adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:39.256 EAL: No free 2048 kB hugepages reported on node 1 00:11:51.492 Initializing NVMe Controllers 00:11:51.492 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:51.492 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:51.492 Initialization complete. Launching workers. 00:11:51.492 ======================================================== 00:11:51.492 Latency(us) 00:11:51.492 Device Information : IOPS MiB/s Average min max 00:11:51.492 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14625.62 57.13 4375.37 847.16 15774.09 00:11:51.492 ======================================================== 00:11:51.492 Total : 14625.62 57.13 4375.37 847.16 15774.09 00:11:51.492 00:11:51.492 15:07:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:11:51.492 15:07:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:11:51.492 15:07:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:51.492 15:07:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # sync 00:11:51.492 15:07:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:51.492 15:07:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:11:51.492 15:07:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:51.492 15:07:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:51.492 rmmod nvme_tcp 00:11:51.492 rmmod nvme_fabrics 00:11:51.492 rmmod nvme_keyring 00:11:51.492 15:07:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:51.492 15:07:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@124 -- # set -e 00:11:51.492 15:07:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:11:51.492 15:07:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 144779 ']' 00:11:51.492 15:07:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@490 -- # killprocess 144779 00:11:51.492 15:07:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@950 -- # '[' -z 144779 ']' 00:11:51.492 15:07:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # kill -0 144779 00:11:51.492 15:07:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # uname 00:11:51.492 15:07:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:51.492 15:07:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 144779 00:11:51.492 15:07:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # process_name=nvmf 00:11:51.492 15:07:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # '[' nvmf = sudo ']' 00:11:51.492 15:07:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@968 -- # echo 'killing process with pid 144779' 00:11:51.492 killing process with pid 144779 00:11:51.492 15:07:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@969 -- # kill 144779 00:11:51.492 15:07:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@974 -- # wait 144779 00:11:51.492 nvmf threads initialize successfully 00:11:51.492 bdev subsystem init successfully 00:11:51.492 created a nvmf target service 00:11:51.492 create targets's poll groups done 00:11:51.492 all subsystems of target started 00:11:51.492 nvmf target is running 00:11:51.492 all subsystems of target stopped 00:11:51.492 destroy targets's poll groups done 00:11:51.492 destroyed the nvmf target service 
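The `spdk_nvme_perf` summary above reports 14625.62 IOPS, 57.13 MiB/s, and a 4375.37 µs average latency for the 10-second randrw run. When post-processing many such runs, the whitespace-separated `Total :` row can be pulled apart with a small parser; a sketch (field order inferred from this log, not from a documented output format):

```python
def parse_perf_total(line):
    """Parse an spdk_nvme_perf 'Total :' summary row into named fields.

    Expected shape (whitespace-separated, as seen in the log):
        Total : <IOPS> <MiB/s> <avg_us> <min_us> <max_us>
    """
    label, values = line.split(":", 1)
    if label.strip() != "Total":
        raise ValueError("not a Total summary line")
    iops, mibps, avg, lo, hi = (float(v) for v in values.split())
    return {"iops": iops, "mib_s": mibps,
            "avg_us": avg, "min_us": lo, "max_us": hi}

if __name__ == "__main__":
    row = "Total : 14625.62 57.13 4375.37 847.16 15774.09"
    print(parse_perf_total(row)["iops"])  # 14625.62
```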
00:11:51.492 bdev subsystem finish successfully 00:11:51.492 nvmf threads destroy successfully 00:11:51.492 15:07:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:51.492 15:07:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:51.492 15:07:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:51.492 15:07:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:51.492 15:07:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:51.492 15:07:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:51.492 15:07:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:51.492 15:07:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:51.752 15:07:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:51.752 15:07:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:11:51.752 15:07:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:51.752 15:07:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:52.016 00:11:52.016 real 0m21.151s 00:11:52.016 user 0m46.910s 00:11:52.016 sys 0m6.563s 00:11:52.016 15:07:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:52.016 15:07:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:52.016 ************************************ 00:11:52.016 END TEST nvmf_example 00:11:52.016 ************************************ 00:11:52.016 15:07:43 nvmf_tcp.nvmf_target_extra -- 
nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:52.016 15:07:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:52.016 15:07:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:52.016 15:07:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:52.016 ************************************ 00:11:52.016 START TEST nvmf_filesystem 00:11:52.016 ************************************ 00:11:52.016 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:52.016 * Looking for test storage... 00:11:52.016 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:52.016 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:11:52.016 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:11:52.016 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:11:52.016 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:11:52.016 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:11:52.016 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:11:52.016 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:11:52.016 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:11:52.016 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:11:52.016 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:11:52.016 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:11:52.016 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:11:52.016 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:11:52.016 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:11:52.016 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:11:52.016 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:11:52.016 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:11:52.016 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:11:52.016 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:11:52.016 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:11:52.016 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:11:52.016 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:11:52.016 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:11:52.016 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # 
CONFIG_RDMA_SEND_WITH_INVAL=y 00:11:52.016 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:11:52.016 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:11:52.016 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:11:52.016 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:52.016 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:11:52.016 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:11:52.016 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:11:52.016 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:11:52.016 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:11:52.016 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:11:52.016 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:11:52.016 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:11:52.016 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:11:52.016 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:11:52.016 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:11:52.016 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # 
CONFIG_OCF=n 00:11:52.016 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:11:52.016 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:11:52.016 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:11:52.016 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:11:52.016 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:11:52.016 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:11:52.016 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:11:52.016 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:11:52.016 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:11:52.016 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:11:52.016 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:11:52.016 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:11:52.016 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:11:52.016 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:11:52.016 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:11:52.016 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:11:52.016 15:07:44 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:11:52.016 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:11:52.016 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:11:52.016 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:11:52.016 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:11:52.016 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:11:52.016 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:11:52.016 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:11:52.016 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:11:52.016 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:11:52.016 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:11:52.016 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:11:52.016 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:11:52.016 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:11:52.016 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:11:52.016 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:11:52.016 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 
00:11:52.016 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:11:52.016 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:11:52.016 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:11:52.016 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:11:52.016 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:11:52.017 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:11:52.017 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:11:52.017 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:11:52.017 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:11:52.017 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:11:52.017 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:11:52.017 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:11:52.017 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:11:52.017 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:11:52.017 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:11:52.017 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:11:52.017 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 
00:11:52.017 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:11:52.017 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:11:52.017 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:52.017 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:52.017 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:52.017 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:52.017 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:52.017 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:52.017 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:11:52.017 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:52.017 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:11:52.017 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:11:52.017 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:11:52.017 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:11:52.017 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:11:52.017 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:11:52.017 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:11:52.017 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:11:52.017 #define SPDK_CONFIG_H 00:11:52.017 #define SPDK_CONFIG_APPS 1 00:11:52.017 #define SPDK_CONFIG_ARCH native 00:11:52.017 #undef SPDK_CONFIG_ASAN 00:11:52.017 #undef SPDK_CONFIG_AVAHI 00:11:52.017 #undef SPDK_CONFIG_CET 00:11:52.017 #define SPDK_CONFIG_COVERAGE 1 00:11:52.017 #define SPDK_CONFIG_CROSS_PREFIX 00:11:52.017 #undef SPDK_CONFIG_CRYPTO 00:11:52.017 #undef SPDK_CONFIG_CRYPTO_MLX5 00:11:52.017 #undef SPDK_CONFIG_CUSTOMOCF 00:11:52.017 #undef SPDK_CONFIG_DAOS 00:11:52.017 #define SPDK_CONFIG_DAOS_DIR 00:11:52.017 #define SPDK_CONFIG_DEBUG 1 00:11:52.017 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:11:52.017 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:11:52.017 #define SPDK_CONFIG_DPDK_INC_DIR 00:11:52.017 #define SPDK_CONFIG_DPDK_LIB_DIR 00:11:52.017 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:11:52.017 #undef SPDK_CONFIG_DPDK_UADK 00:11:52.017 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:52.017 #define SPDK_CONFIG_EXAMPLES 1 00:11:52.017 #undef SPDK_CONFIG_FC 00:11:52.017 #define SPDK_CONFIG_FC_PATH 00:11:52.017 #define SPDK_CONFIG_FIO_PLUGIN 1 00:11:52.017 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:11:52.017 
#undef SPDK_CONFIG_FUSE 00:11:52.017 #undef SPDK_CONFIG_FUZZER 00:11:52.017 #define SPDK_CONFIG_FUZZER_LIB 00:11:52.017 #undef SPDK_CONFIG_GOLANG 00:11:52.017 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:11:52.017 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:11:52.017 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:11:52.017 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:11:52.017 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:11:52.017 #undef SPDK_CONFIG_HAVE_LIBBSD 00:11:52.017 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:11:52.017 #define SPDK_CONFIG_IDXD 1 00:11:52.017 #define SPDK_CONFIG_IDXD_KERNEL 1 00:11:52.017 #undef SPDK_CONFIG_IPSEC_MB 00:11:52.017 #define SPDK_CONFIG_IPSEC_MB_DIR 00:11:52.017 #define SPDK_CONFIG_ISAL 1 00:11:52.017 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:11:52.017 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:11:52.017 #define SPDK_CONFIG_LIBDIR 00:11:52.017 #undef SPDK_CONFIG_LTO 00:11:52.017 #define SPDK_CONFIG_MAX_LCORES 128 00:11:52.017 #define SPDK_CONFIG_NVME_CUSE 1 00:11:52.017 #undef SPDK_CONFIG_OCF 00:11:52.017 #define SPDK_CONFIG_OCF_PATH 00:11:52.017 #define SPDK_CONFIG_OPENSSL_PATH 00:11:52.017 #undef SPDK_CONFIG_PGO_CAPTURE 00:11:52.017 #define SPDK_CONFIG_PGO_DIR 00:11:52.017 #undef SPDK_CONFIG_PGO_USE 00:11:52.017 #define SPDK_CONFIG_PREFIX /usr/local 00:11:52.017 #undef SPDK_CONFIG_RAID5F 00:11:52.017 #undef SPDK_CONFIG_RBD 00:11:52.017 #define SPDK_CONFIG_RDMA 1 00:11:52.017 #define SPDK_CONFIG_RDMA_PROV verbs 00:11:52.017 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:11:52.017 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:11:52.017 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:11:52.017 #define SPDK_CONFIG_SHARED 1 00:11:52.017 #undef SPDK_CONFIG_SMA 00:11:52.017 #define SPDK_CONFIG_TESTS 1 00:11:52.017 #undef SPDK_CONFIG_TSAN 00:11:52.017 #define SPDK_CONFIG_UBLK 1 00:11:52.017 #define SPDK_CONFIG_UBSAN 1 00:11:52.017 #undef SPDK_CONFIG_UNIT_TESTS 00:11:52.017 #undef SPDK_CONFIG_URING 00:11:52.017 #define SPDK_CONFIG_URING_PATH 00:11:52.017 #undef 
SPDK_CONFIG_URING_ZNS 00:11:52.017 #undef SPDK_CONFIG_USDT 00:11:52.017 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:11:52.017 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:11:52.017 #define SPDK_CONFIG_VFIO_USER 1 00:11:52.017 #define SPDK_CONFIG_VFIO_USER_DIR 00:11:52.017 #define SPDK_CONFIG_VHOST 1 00:11:52.017 #define SPDK_CONFIG_VIRTIO 1 00:11:52.017 #undef SPDK_CONFIG_VTUNE 00:11:52.017 #define SPDK_CONFIG_VTUNE_DIR 00:11:52.017 #define SPDK_CONFIG_WERROR 1 00:11:52.017 #define SPDK_CONFIG_WPDK_DIR 00:11:52.017 #undef SPDK_CONFIG_XNVME 00:11:52.017 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:11:52.017 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:11:52.017 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:52.017 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:52.017 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:52.017 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:52.017 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:52.017 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:52.017 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:52.017 15:07:44 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:52.017 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:52.017 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:52.018 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:52.018 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:52.018 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:52.018 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:11:52.018 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:52.018 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:11:52.018 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # 
TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:11:52.018 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:11:52.018 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:11:52.018 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:11:52.018 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:11:52.018 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:11:52.018 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:11:52.018 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:11:52.018 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:11:52.018 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:11:52.018 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:11:52.018 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:11:52.018 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:11:52.018 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:11:52.018 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:11:52.018 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:11:52.018 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! 
-e /.dockerenv ]] 00:11:52.018 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:11:52.018 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:11:52.018 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:11:52.018 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:11:52.018 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:11:52.018 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:11:52.018 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:11:52.018 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:11:52.018 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:11:52.018 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:11:52.018 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:11:52.018 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:11:52.018 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:11:52.018 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:11:52.018 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:11:52.018 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:11:52.018 15:07:44 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:11:52.018 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:11:52.018 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:11:52.018 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:11:52.018 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:11:52.018 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:11:52.018 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:11:52.018 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:11:52.018 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:11:52.018 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:11:52.018 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:11:52.018 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:11:52.018 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:11:52.018 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:11:52.018 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:11:52.018 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:11:52.018 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:11:52.018 
15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:11:52.018 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:11:52.018 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:11:52.018 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:11:52.018 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:11:52.018 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:11:52.018 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:11:52.018 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:11:52.018 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:11:52.018 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:11:52.018 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:11:52.018 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:11:52.018 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:11:52.018 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:11:52.018 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:11:52.018 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:11:52.018 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:11:52.018 15:07:44 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:11:52.018 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:11:52.018 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:11:52.018 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:11:52.018 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:11:52.018 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:11:52.018 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:11:52.018 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:11:52.018 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:11:52.018 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:11:52.018 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:11:52.018 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:11:52.018 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:11:52.018 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:11:52.018 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:11:52.018 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:11:52.018 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:11:52.018 
15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:11:52.018 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:11:52.018 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:11:52.018 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:11:52.019 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:11:52.019 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:11:52.019 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:11:52.019 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:11:52.019 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:11:52.019 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:11:52.019 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:11:52.019 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:11:52.019 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:11:52.019 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:11:52.019 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:11:52.019 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:11:52.019 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:11:52.019 15:07:44 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:11:52.019 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:11:52.019 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:11:52.019 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:11:52.019 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:11:52.019 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:11:52.019 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:11:52.019 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:11:52.019 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:11:52.019 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:11:52.019 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:11:52.019 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:11:52.019 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:11:52.019 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:11:52.019 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:11:52.019 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:11:52.019 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:11:52.019 
15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:11:52.019 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:11:52.019 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:11:52.019 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:11:52.019 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:11:52.019 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:11:52.019 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:11:52.019 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:11:52.019 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:11:52.019 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:11:52.019 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:11:52.019 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:11:52.019 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:11:52.019 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:11:52.019 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:11:52.019 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:11:52.019 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # export 
SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:52.019 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:52.019 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:11:52.019 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:11:52.019 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:52.019 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:52.282 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:52.282 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:52.282 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:11:52.282 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:11:52.282 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:52.282 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:52.282 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export PYTHONDONTWRITEBYTECODE=1 00:11:52.282 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONDONTWRITEBYTECODE=1 00:11:52.282 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:52.282 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:52.282 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@196 -- 
# export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:52.282 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@196 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:52.282 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:11:52.282 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@201 -- # rm -rf /var/tmp/asan_suppression_file 00:11:52.282 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@202 -- # cat 00:11:52.282 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@238 -- # echo leak:libfuse3.so 00:11:52.282 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:52.282 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:52.282 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:52.282 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:52.282 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # '[' -z /var/spdk/dependencies ']' 00:11:52.282 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@247 -- # export DEPENDENCY_DIR 00:11:52.282 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:52.282 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # 
SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:52.283 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@252 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:52.283 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@252 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:52.283 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:52.283 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:52.283 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:52.283 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:52.283 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:52.283 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:52.283 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@261 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:52.283 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@261 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:52.283 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@264 -- # '[' 0 -eq 0 ']' 00:11:52.283 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@265 -- # export valgrind= 00:11:52.283 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # valgrind= 00:11:52.283 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # uname -s 00:11:52.283 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # '[' Linux = Linux ']' 00:11:52.283 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # HUGEMEM=4096 00:11:52.283 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # export CLEAR_HUGE=yes 00:11:52.283 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # CLEAR_HUGE=yes 00:11:52.283 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@274 -- # [[ 0 -eq 1 ]] 00:11:52.283 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@274 -- # [[ 0 -eq 1 ]] 00:11:52.283 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@281 -- # MAKE=make 00:11:52.283 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@282 -- # MAKEFLAGS=-j144 00:11:52.283 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@298 -- # export HUGEMEM=4096 00:11:52.283 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@298 -- # HUGEMEM=4096 00:11:52.283 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@300 -- # NO_HUGE=() 00:11:52.283 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@301 -- # TEST_MODE= 00:11:52.283 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@302 -- # for i in "$@" 00:11:52.283 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@303 -- # case "$i" in 00:11:52.283 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@308 -- # TEST_TRANSPORT=tcp 00:11:52.283 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@320 -- # [[ -z 147566 ]] 00:11:52.283 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@320 -- # kill -0 147566 00:11:52.283 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:11:52.283 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@330 -- # [[ -v testdir ]] 00:11:52.283 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@332 -- # local requested_size=2147483648 00:11:52.283 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@333 -- # local mount target_dir 00:11:52.283 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@335 -- # local -A mounts fss sizes avails uses 00:11:52.283 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@336 -- # local source fs size avail mount use 00:11:52.283 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # local storage_fallback storage_candidates 00:11:52.283 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # mktemp -udt spdk.XXXXXX 00:11:52.283 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # storage_fallback=/tmp/spdk.uHMsPy 00:11:52.283 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:11:52.283 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # [[ -n '' ]] 00:11:52.283 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@352 -- # [[ -n '' ]] 00:11:52.283 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@357 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.uHMsPy/tests/target /tmp/spdk.uHMsPy 00:11:52.283 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # requested_size=2214592512 00:11:52.283 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:11:52.283 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # df -T 00:11:52.283 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # grep -v Filesystem 00:11:52.283 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=spdk_devtmpfs 00:11:52.283 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=devtmpfs 00:11:52.283 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=67108864 00:11:52.283 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=67108864 00:11:52.283 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=0 00:11:52.283 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:11:52.283 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=/dev/pmem0 00:11:52.283 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=ext2 00:11:52.283 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=954236928 00:11:52.283 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=5284429824 00:11:52.283 15:07:44 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=4330192896 00:11:52.283 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:11:52.283 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=spdk_root 00:11:52.283 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=overlay 00:11:52.283 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=118552576000 00:11:52.283 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=129370976256 00:11:52.283 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=10818400256 00:11:52.283 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:11:52.283 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:11:52.283 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:11:52.283 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=64623304704 00:11:52.283 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=64685486080 00:11:52.283 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=62181376 00:11:52.283 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:11:52.283 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:11:52.283 15:07:44 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:11:52.283 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=25850851328 00:11:52.283 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=25874198528 00:11:52.283 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=23347200 00:11:52.283 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:11:52.283 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=efivarfs 00:11:52.283 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=efivarfs 00:11:52.283 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=216064 00:11:52.283 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=507904 00:11:52.283 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=287744 00:11:52.283 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:11:52.283 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:11:52.283 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:11:52.283 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=64683704320 00:11:52.283 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=64685490176 00:11:52.283 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@365 -- # uses["$mount"]=1785856 00:11:52.283 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:11:52.283 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:11:52.283 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:11:52.284 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=12937093120 00:11:52.284 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=12937097216 00:11:52.284 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=4096 00:11:52.284 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:11:52.284 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # printf '* Looking for test storage...\n' 00:11:52.284 * Looking for test storage... 
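The `read -r source fs size use avail _ mount` loop traced above can be sketched as follows. This is a minimal stand-in for SPDK's storage probe, not its exact implementation: `df` rows are read into associative arrays keyed by mount point, then a candidate directory is mapped to its mount and the free space there compared against a requested size. Array names mirror the trace; `requested_size` and the `target_dir` default are illustrative values.

```shell
#!/usr/bin/env bash
# Sketch of the test-storage probe: fill per-mount arrays from df, then
# check whether the filesystem backing target_dir has enough free space.
declare -A mounts fss avails sizes uses

# df -P -T columns: source fstype 1K-blocks used available use% mountpoint
while read -r source fs size use avail _ mount; do
  mounts["$mount"]=$source
  fss["$mount"]=$fs
  avails["$mount"]=$((avail * 1024))   # 1K blocks -> bytes
  sizes["$mount"]=$((size * 1024))
  uses["$mount"]=$((use * 1024))
done < <(df -P -T | tail -n +2)

target_dir=${1:-.}
requested_size=1048576   # 1 MiB, illustrative only

# Map the directory to its mount point, as the traced awk filter does
mount=$(df -P "$target_dir" | awk '$1 !~ /Filesystem/{print $6}')
target_space=${avails[$mount]}
if (( target_space >= requested_size )); then
  printf '* Found test storage at %s\n' "$target_dir"
fi
```

The overlay-vs-tmpfs checks in the trace extend this with per-filesystem-type size adjustments before the comparison.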
00:11:52.284 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@370 -- # local target_space new_size 00:11:52.284 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # for target_dir in "${storage_candidates[@]}" 00:11:52.284 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:52.284 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # awk '$1 !~ /Filesystem/{print $6}' 00:11:52.284 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mount=/ 00:11:52.284 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # target_space=118552576000 00:11:52.284 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # (( target_space == 0 || target_space < requested_size )) 00:11:52.284 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # (( target_space >= requested_size )) 00:11:52.284 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # [[ overlay == tmpfs ]] 00:11:52.284 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # [[ overlay == ramfs ]] 00:11:52.284 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # [[ / == / ]] 00:11:52.284 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # new_size=13032992768 00:11:52.284 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@384 -- # (( new_size * 100 / sizes[/] > 95 )) 00:11:52.284 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:52.284 15:07:44 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:52.284 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@390 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:52.284 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:52.284 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # return 0 00:11:52.284 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:11:52.284 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:11:52.284 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:11:52.284 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:11:52.284 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:11:52.284 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:11:52.284 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:11:52.284 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:11:52.284 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:11:52.284 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:11:52.284 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:11:52.284 15:07:44 
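The `errtrace`/`extdebug`/`PS4` sequence just traced is what produces every `15:07:44 ... -- $` prefix in this log. A simplified sketch of that setup, assuming nothing beyond stock bash (`print_backtrace` here is a stand-in for SPDK's helper of the same name, not its real body):

```shell
#!/usr/bin/env bash
# errtrace propagates the ERR trap into functions and subshells;
# extdebug exposes the BASH_SOURCE/BASH_LINENO frames a backtrace needs.
set -o errtrace
shopt -s extdebug

print_backtrace() {
  local i
  echo '========== Backtrace start: ==========' >&2
  for ((i = 1; i < ${#FUNCNAME[@]}; i++)); do
    printf 'in %s:%s -> %s()\n' \
      "${BASH_SOURCE[i]}" "${BASH_LINENO[i - 1]}" "${FUNCNAME[i]}" >&2
  done
  echo '========== Backtrace end ==========' >&2
}

# On any command failure, disarm the trap and dump the call stack
trap 'trap - ERR; print_backtrace >&2' ERR

# Timestamped xtrace prefix, matching the traced PS4 assignment
PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ '
```

With `set -x` enabled afterwards, every executed command is echoed with the wall-clock time and `file@line` location, which is how the log above was generated.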
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:11:52.284 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:11:52.284 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:11:52.284 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:52.284 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:11:52.284 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:52.284 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:52.284 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:52.284 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:52.284 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:52.284 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:52.284 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:52.284 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:52.284 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:52.284 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:52.284 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:52.284 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:52.284 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:52.284 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:52.284 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:52.284 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:52.284 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:52.284 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:52.284 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:52.284 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:52.284 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:52.284 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:52.284 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:52.284 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:52.284 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:52.284 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:11:52.284 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:52.284 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:52.284 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:52.284 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:52.284 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:52.284 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:52.284 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:52.284 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:52.284 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:11:52.284 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:52.284 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- 
# nvmftestinit 00:11:52.284 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:52.284 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:52.284 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:52.284 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:52.284 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:52.285 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:52.285 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:52.285 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:52.285 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:52.285 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:52.285 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:11:52.285 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:00.462 15:07:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:00.462 15:07:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:12:00.462 15:07:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:00.462 15:07:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:00.462 15:07:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 
00:12:00.462 15:07:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:00.462 15:07:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:00.462 15:07:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:12:00.462 15:07:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:00.462 15:07:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:12:00.462 15:07:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:12:00.462 15:07:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:12:00.463 15:07:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:12:00.463 15:07:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:12:00.463 15:07:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:12:00.463 15:07:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:00.463 15:07:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:00.463 15:07:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:00.463 15:07:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:00.463 15:07:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:00.463 15:07:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:00.463 15:07:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 
00:12:00.463 15:07:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:00.463 15:07:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:00.463 15:07:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:00.463 15:07:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:00.463 15:07:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:00.463 15:07:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:00.463 15:07:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:00.463 15:07:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:00.463 15:07:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:00.463 15:07:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:00.463 15:07:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:00.463 15:07:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:00.463 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:00.463 15:07:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:00.463 15:07:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:00.463 15:07:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:00.463 15:07:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
00:12:00.463 15:07:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:00.463 15:07:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:00.463 15:07:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:00.463 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:00.463 15:07:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:00.463 15:07:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:00.463 15:07:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:00.463 15:07:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:00.463 15:07:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:00.463 15:07:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:00.463 15:07:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:00.463 15:07:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:00.463 15:07:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:00.463 15:07:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:00.463 15:07:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:00.463 15:07:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:00.463 15:07:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:00.463 15:07:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:00.463 15:07:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:00.463 15:07:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:00.463 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:00.463 15:07:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:00.463 15:07:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:00.463 15:07:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:00.463 15:07:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:00.463 15:07:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:00.463 15:07:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:00.463 15:07:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:00.463 15:07:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:00.463 15:07:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:00.463 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:00.463 15:07:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:00.463 15:07:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:00.463 15:07:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:12:00.463 15:07:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:00.463 
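The discovery loop above (the `Found net devices under 0000:4b:00.0: cvl_0_0` lines) maps each supported PCI address to its kernel net interface via sysfs. A sketch of that lookup, with the sysfs root parameterized so the helper can be exercised without real hardware — SPDK's version walks `/sys/bus/pci/devices` directly:

```shell
#!/usr/bin/env bash
# Resolve the net interface name(s) bound to a PCI device via sysfs.
sysfs_root=${sysfs_root:-/sys/bus/pci/devices}

pci_net_devs_for() {
  local pci=$1 d
  local -a pci_net_devs=()
  for d in "$sysfs_root/$pci/net/"*; do
    [ -e "$d" ] && pci_net_devs+=("${d##*/}")   # strip path, keep iface name
  done
  if (( ${#pci_net_devs[@]} == 0 )); then
    echo "No net devices associated with $pci" >&2
    return 1
  fi
  echo "Found net devices under $pci: ${pci_net_devs[*]}"
}
```

The trace additionally filters by driver (`ice` here for the Intel E810 0x159b devices) and by link state (`[[ up == up ]]`) before appending to `net_devs`.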
15:07:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:00.463 15:07:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:00.463 15:07:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:00.463 15:07:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:00.463 15:07:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:00.463 15:07:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:00.463 15:07:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:00.463 15:07:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:00.463 15:07:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:00.463 15:07:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:00.463 15:07:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:00.463 15:07:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:00.463 15:07:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:00.463 15:07:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:00.463 15:07:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:00.463 15:07:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:00.463 15:07:51 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:00.463 15:07:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:00.463 15:07:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:00.463 15:07:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:00.463 15:07:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:00.463 15:07:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:00.463 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:00.463 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.441 ms 00:12:00.463 00:12:00.463 --- 10.0.0.2 ping statistics --- 00:12:00.463 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:00.463 rtt min/avg/max/mdev = 0.441/0.441/0.441/0.000 ms 00:12:00.463 15:07:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:00.463 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:00.463 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.276 ms 00:12:00.463 00:12:00.463 --- 10.0.0.1 ping statistics --- 00:12:00.463 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:00.463 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:12:00.463 15:07:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:00.463 15:07:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:12:00.463 15:07:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:00.463 15:07:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:00.464 15:07:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:00.464 15:07:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:00.464 15:07:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:00.464 15:07:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:00.464 15:07:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:00.464 15:07:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:12:00.464 15:07:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:00.464 15:07:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:00.464 15:07:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:00.464 ************************************ 00:12:00.464 START TEST nvmf_filesystem_no_in_capsule 00:12:00.464 ************************************ 00:12:00.464 15:07:51 
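The `nvmf_tcp_init` sequence the trace just completed can be sketched as below: one NIC is moved into a private network namespace to act as the NVMe-oF target, its peer stays in the root namespace as the initiator, both get addresses on 10.0.0.0/24, the NVMe-oF port is opened, and connectivity is verified with a ping in each direction. Interface names mirror the trace; the commands print in dry-run mode by default (set `NVMF_APPLY=1` and run as root on a host that actually has these NICs to execute them).

```shell
#!/usr/bin/env bash
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
NVMF_INITIATOR_IP=10.0.0.1
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

# Dry-run wrapper: echo each command unless NVMF_APPLY=1
run() {
  if [ "${NVMF_APPLY:-0}" = 1 ]; then "$@"; else printf '+ %s\n' "$*"; fi
}

run ip netns add "$NVMF_TARGET_NAMESPACE"
run ip link set cvl_0_0 netns "$NVMF_TARGET_NAMESPACE"
run ip addr add "$NVMF_INITIATOR_IP/24" dev cvl_0_1
run ip netns exec "$NVMF_TARGET_NAMESPACE" \
    ip addr add "$NVMF_FIRST_TARGET_IP/24" dev cvl_0_0
run ip link set cvl_0_1 up
run ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set cvl_0_0 up
run ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set lo up
# Allow the initiator-side interface to reach the NVMe-oF listener
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport "$NVMF_PORT" -j ACCEPT
# Verify both directions, as the traced pings do
run ping -c 1 "$NVMF_FIRST_TARGET_IP"
run ip netns exec "$NVMF_TARGET_NAMESPACE" ping -c 1 "$NVMF_INITIATOR_IP"
```

Because the target lives in the namespace, every target-side command in the rest of the log is prefixed with `ip netns exec cvl_0_0_ns_spdk` (the `NVMF_TARGET_NS_CMD` array).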
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 0 00:12:00.464 15:07:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:12:00.464 15:07:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:12:00.464 15:07:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:00.464 15:07:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:00.464 15:07:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:00.464 15:07:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=151190 00:12:00.464 15:07:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 151190 00:12:00.464 15:07:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:00.464 15:07:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 151190 ']' 00:12:00.464 15:07:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:00.464 15:07:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:00.464 15:07:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:00.464 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:00.464 15:07:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:00.464 15:07:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:00.464 [2024-07-25 15:07:51.624335] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:12:00.464 [2024-07-25 15:07:51.624395] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:00.464 EAL: No free 2048 kB hugepages reported on node 1 00:12:00.464 [2024-07-25 15:07:51.696167] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:00.464 [2024-07-25 15:07:51.772381] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:00.464 [2024-07-25 15:07:51.772421] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:00.464 [2024-07-25 15:07:51.772433] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:00.464 [2024-07-25 15:07:51.772440] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:00.464 [2024-07-25 15:07:51.772445] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
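The `waitforlisten 151190` call behind the "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message follows a common poll-until-ready pattern: loop until the target pid is alive and its RPC socket exists, or give up. This is a simplified stand-in for SPDK's helper, not its real body; the retry count and interval are illustrative.

```shell
#!/usr/bin/env bash
# Poll for a daemon's RPC socket; return 0 when listening, 1 on death/timeout.
waitforlisten() {
  local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=${3:-100} i
  echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
  for ((i = 0; i < max_retries; i++)); do
    kill -0 "$pid" 2>/dev/null || return 1   # process died
    [ -S "$rpc_addr" ] && return 0           # socket is up
    sleep 0.1
  done
  return 1   # timed out
}
```

In the trace, the socket comes up quickly and the runner proceeds to issue `rpc_cmd` calls (`nvmf_create_transport`, `bdev_malloc_create`, ...) against it.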
00:12:00.464 [2024-07-25 15:07:51.772583] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:00.464 [2024-07-25 15:07:51.772722] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:00.464 [2024-07-25 15:07:51.772878] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:00.464 [2024-07-25 15:07:51.772879] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:00.464 15:07:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:00.464 15:07:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:12:00.464 15:07:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:00.464 15:07:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:00.464 15:07:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:00.464 15:07:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:00.464 15:07:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:12:00.464 15:07:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:00.464 15:07:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.464 15:07:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:00.464 [2024-07-25 
15:07:52.452166] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:00.464 15:07:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.464 15:07:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:12:00.464 15:07:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.464 15:07:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:00.464 Malloc1 00:12:00.464 15:07:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.464 15:07:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:00.464 15:07:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.464 15:07:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:00.464 15:07:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.464 15:07:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:00.464 15:07:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.464 15:07:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:00.464 15:07:52 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.464 15:07:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:00.464 15:07:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.464 15:07:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:00.464 [2024-07-25 15:07:52.578293] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:00.464 15:07:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.464 15:07:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:12:00.464 15:07:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:12:00.464 15:07:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:12:00.464 15:07:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:12:00.464 15:07:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:12:00.464 15:07:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:12:00.464 15:07:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.464 15:07:52 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:00.464 15:07:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.464 15:07:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:12:00.464 { 00:12:00.464 "name": "Malloc1", 00:12:00.465 "aliases": [ 00:12:00.465 "fbdcf612-61d3-419a-a84f-b32d073bfac8" 00:12:00.465 ], 00:12:00.465 "product_name": "Malloc disk", 00:12:00.465 "block_size": 512, 00:12:00.465 "num_blocks": 1048576, 00:12:00.465 "uuid": "fbdcf612-61d3-419a-a84f-b32d073bfac8", 00:12:00.465 "assigned_rate_limits": { 00:12:00.465 "rw_ios_per_sec": 0, 00:12:00.465 "rw_mbytes_per_sec": 0, 00:12:00.465 "r_mbytes_per_sec": 0, 00:12:00.465 "w_mbytes_per_sec": 0 00:12:00.465 }, 00:12:00.465 "claimed": true, 00:12:00.465 "claim_type": "exclusive_write", 00:12:00.465 "zoned": false, 00:12:00.465 "supported_io_types": { 00:12:00.465 "read": true, 00:12:00.465 "write": true, 00:12:00.465 "unmap": true, 00:12:00.465 "flush": true, 00:12:00.465 "reset": true, 00:12:00.465 "nvme_admin": false, 00:12:00.465 "nvme_io": false, 00:12:00.465 "nvme_io_md": false, 00:12:00.465 "write_zeroes": true, 00:12:00.465 "zcopy": true, 00:12:00.465 "get_zone_info": false, 00:12:00.465 "zone_management": false, 00:12:00.465 "zone_append": false, 00:12:00.465 "compare": false, 00:12:00.465 "compare_and_write": false, 00:12:00.465 "abort": true, 00:12:00.465 "seek_hole": false, 00:12:00.465 "seek_data": false, 00:12:00.465 "copy": true, 00:12:00.465 "nvme_iov_md": false 00:12:00.465 }, 00:12:00.465 "memory_domains": [ 00:12:00.465 { 00:12:00.465 "dma_device_id": "system", 00:12:00.465 "dma_device_type": 1 00:12:00.465 }, 00:12:00.465 { 00:12:00.465 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:00.465 "dma_device_type": 2 00:12:00.465 } 00:12:00.465 ], 
00:12:00.465 "driver_specific": {} 00:12:00.465 } 00:12:00.465 ]' 00:12:00.465 15:07:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:12:00.725 15:07:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:12:00.725 15:07:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:12:00.725 15:07:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:12:00.725 15:07:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:12:00.725 15:07:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:12:00.725 15:07:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:12:00.725 15:07:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:02.111 15:07:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:12:02.111 15:07:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:12:02.111 15:07:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:02.111 15:07:54 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:02.111 15:07:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:12:04.027 15:07:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:04.027 15:07:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:04.027 15:07:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:04.027 15:07:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:04.027 15:07:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:04.027 15:07:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:12:04.289 15:07:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:12:04.289 15:07:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:12:04.289 15:07:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:12:04.289 15:07:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:12:04.289 15:07:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:12:04.289 15:07:56 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:12:04.289 15:07:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:12:04.290 15:07:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:12:04.290 15:07:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:12:04.290 15:07:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:12:04.290 15:07:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:12:04.551 15:07:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:12:05.124 15:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:12:06.067 15:07:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:12:06.067 15:07:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:12:06.067 15:07:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:06.067 15:07:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:06.067 15:07:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:06.067 ************************************ 
00:12:06.067 START TEST filesystem_ext4 00:12:06.067 ************************************ 00:12:06.067 15:07:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 00:12:06.067 15:07:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:12:06.067 15:07:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:06.067 15:07:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:12:06.067 15:07:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:12:06.067 15:07:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:12:06.067 15:07:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:12:06.067 15:07:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # local force 00:12:06.067 15:07:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:12:06.067 15:07:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:12:06.067 15:07:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:12:06.067 mke2fs 1.46.5 (30-Dec-2021) 00:12:06.067 
Discarding device blocks: 0/522240 done 00:12:06.067 Creating filesystem with 522240 1k blocks and 130560 inodes 00:12:06.067 Filesystem UUID: 117b7608-3652-451d-8d77-28a5300f5143 00:12:06.067 Superblock backups stored on blocks: 00:12:06.067 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:12:06.067 00:12:06.067 Allocating group tables: 0/64 done 00:12:06.067 Writing inode tables: 0/64 done 00:12:06.067 Creating journal (8192 blocks): done 00:12:06.067 Writing superblocks and filesystem accounting information: 0/64 done 00:12:06.067 00:12:06.067 15:07:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@945 -- # return 0 00:12:06.067 15:07:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:06.640 15:07:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:06.640 15:07:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:12:06.640 15:07:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:06.640 15:07:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:12:06.640 15:07:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:12:06.640 15:07:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:06.640 15:07:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 151190 00:12:06.640 15:07:58 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:06.640 15:07:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:06.640 15:07:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:06.640 15:07:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:06.640 00:12:06.640 real 0m0.599s 00:12:06.640 user 0m0.025s 00:12:06.640 sys 0m0.073s 00:12:06.640 15:07:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:06.640 15:07:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:12:06.640 ************************************ 00:12:06.640 END TEST filesystem_ext4 00:12:06.640 ************************************ 00:12:06.640 15:07:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:12:06.640 15:07:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:06.640 15:07:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:06.640 15:07:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:06.640 ************************************ 00:12:06.640 START TEST filesystem_btrfs 00:12:06.640 ************************************ 00:12:06.640 15:07:58 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:12:06.640 15:07:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:12:06.640 15:07:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:06.640 15:07:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:12:06.640 15:07:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:12:06.640 15:07:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:12:06.640 15:07:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:12:06.640 15:07:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # local force 00:12:06.640 15:07:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:12:06.640 15:07:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:12:06.640 15:07:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:12:06.901 btrfs-progs v6.6.2 00:12:06.901 See https://btrfs.readthedocs.io for more information. 
00:12:06.901 00:12:06.901 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:12:06.901 NOTE: several default settings have changed in version 5.15, please make sure 00:12:06.901 this does not affect your deployments: 00:12:06.901 - DUP for metadata (-m dup) 00:12:06.901 - enabled no-holes (-O no-holes) 00:12:06.901 - enabled free-space-tree (-R free-space-tree) 00:12:06.901 00:12:06.901 Label: (null) 00:12:06.901 UUID: 5c1886bc-1226-44c7-9e79-9f2724f32226 00:12:06.901 Node size: 16384 00:12:06.901 Sector size: 4096 00:12:06.901 Filesystem size: 510.00MiB 00:12:06.901 Block group profiles: 00:12:06.901 Data: single 8.00MiB 00:12:06.901 Metadata: DUP 32.00MiB 00:12:06.901 System: DUP 8.00MiB 00:12:06.901 SSD detected: yes 00:12:06.901 Zoned device: no 00:12:06.901 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:12:06.901 Runtime features: free-space-tree 00:12:06.901 Checksum: crc32c 00:12:06.901 Number of devices: 1 00:12:06.901 Devices: 00:12:06.901 ID SIZE PATH 00:12:06.901 1 510.00MiB /dev/nvme0n1p1 00:12:06.901 00:12:06.901 15:07:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@945 -- # return 0 00:12:06.901 15:07:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:07.162 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:07.162 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:12:07.162 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:07.162 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- 
target/filesystem.sh@27 -- # sync 00:12:07.423 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:12:07.423 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:07.423 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 151190 00:12:07.423 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:07.423 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:07.423 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:07.423 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:07.423 00:12:07.423 real 0m0.663s 00:12:07.423 user 0m0.021s 00:12:07.423 sys 0m0.139s 00:12:07.423 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:07.423 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:12:07.423 ************************************ 00:12:07.423 END TEST filesystem_btrfs 00:12:07.423 ************************************ 00:12:07.423 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:12:07.423 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:07.423 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:07.423 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:07.423 ************************************ 00:12:07.423 START TEST filesystem_xfs 00:12:07.423 ************************************ 00:12:07.423 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:12:07.423 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:12:07.423 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:07.423 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:12:07.423 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:12:07.423 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:12:07.423 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local i=0 00:12:07.423 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # local force 00:12:07.423 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:12:07.423 
15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@934 -- # force=-f 00:12:07.423 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:12:07.423 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:12:07.423 = sectsz=512 attr=2, projid32bit=1 00:12:07.423 = crc=1 finobt=1, sparse=1, rmapbt=0 00:12:07.423 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:12:07.423 data = bsize=4096 blocks=130560, imaxpct=25 00:12:07.423 = sunit=0 swidth=0 blks 00:12:07.423 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:12:07.423 log =internal log bsize=4096 blocks=16384, version=2 00:12:07.423 = sectsz=512 sunit=0 blks, lazy-count=1 00:12:07.424 realtime =none extsz=4096 blocks=0, rtextents=0 00:12:08.809 Discarding blocks...Done. 00:12:08.809 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@945 -- # return 0 00:12:08.809 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:11.357 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:11.357 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:12:11.357 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:11.357 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:12:11.357 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- 
target/filesystem.sh@29 -- # i=0 00:12:11.357 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:11.357 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 151190 00:12:11.357 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:11.357 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:11.357 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:11.357 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:11.357 00:12:11.357 real 0m3.871s 00:12:11.357 user 0m0.026s 00:12:11.357 sys 0m0.080s 00:12:11.357 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:11.357 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:11.357 ************************************ 00:12:11.357 END TEST filesystem_xfs 00:12:11.357 ************************************ 00:12:11.357 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:11.617 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:11.617 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n 
nqn.2016-06.io.spdk:cnode1 00:12:11.617 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:11.617 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:11.617 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:12:11.617 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:11.617 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:11.617 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:11.617 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:11.879 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:12:11.879 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:11.879 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.879 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:11.879 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.879 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:11.879 15:08:03 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 151190 00:12:11.879 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 151190 ']' 00:12:11.879 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # kill -0 151190 00:12:11.879 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # uname 00:12:11.879 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:11.879 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 151190 00:12:11.879 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:11.879 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:11.879 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 151190' 00:12:11.879 killing process with pid 151190 00:12:11.879 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@969 -- # kill 151190 00:12:11.879 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@974 -- # wait 151190 00:12:12.141 15:08:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:12.141 00:12:12.141 real 0m12.555s 00:12:12.141 user 0m49.335s 00:12:12.141 sys 0m1.287s 00:12:12.141 15:08:04 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:12.141 15:08:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:12.141 ************************************ 00:12:12.141 END TEST nvmf_filesystem_no_in_capsule 00:12:12.141 ************************************ 00:12:12.141 15:08:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:12:12.141 15:08:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:12.141 15:08:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:12.141 15:08:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:12.141 ************************************ 00:12:12.141 START TEST nvmf_filesystem_in_capsule 00:12:12.141 ************************************ 00:12:12.141 15:08:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 4096 00:12:12.141 15:08:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:12:12.141 15:08:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:12:12.141 15:08:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:12.141 15:08:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:12.141 15:08:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:12.141 15:08:04 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=153940 00:12:12.141 15:08:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 153940 00:12:12.141 15:08:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 153940 ']' 00:12:12.141 15:08:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:12.141 15:08:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:12.141 15:08:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:12.141 15:08:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:12.141 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:12.141 15:08:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:12.141 15:08:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:12.141 [2024-07-25 15:08:04.257197] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:12:12.141 [2024-07-25 15:08:04.257270] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:12.141 EAL: No free 2048 kB hugepages reported on node 1 00:12:12.141 [2024-07-25 15:08:04.325812] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:12.403 [2024-07-25 15:08:04.397477] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:12.403 [2024-07-25 15:08:04.397515] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:12.403 [2024-07-25 15:08:04.397524] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:12.403 [2024-07-25 15:08:04.397531] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:12.403 [2024-07-25 15:08:04.397537] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:12.403 [2024-07-25 15:08:04.397676] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:12.403 [2024-07-25 15:08:04.397789] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:12.403 [2024-07-25 15:08:04.397945] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:12.403 [2024-07-25 15:08:04.397946] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:12.975 15:08:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:12.975 15:08:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:12:12.975 15:08:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:12.975 15:08:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:12.975 15:08:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:12.975 15:08:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:12.975 15:08:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:12:12.975 15:08:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:12:12.975 15:08:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.975 15:08:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:12.975 [2024-07-25 15:08:05.086188] tcp.c: 
677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:12.975 15:08:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.975 15:08:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:12:12.975 15:08:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.975 15:08:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:13.236 Malloc1 00:12:13.236 15:08:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.236 15:08:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:13.236 15:08:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.236 15:08:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:13.236 15:08:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.236 15:08:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:13.236 15:08:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.236 15:08:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:13.236 15:08:05 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.236 15:08:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:13.236 15:08:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.236 15:08:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:13.236 [2024-07-25 15:08:05.219326] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:13.236 15:08:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.236 15:08:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:12:13.236 15:08:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:12:13.236 15:08:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:12:13.236 15:08:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:12:13.236 15:08:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:12:13.236 15:08:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:12:13.236 15:08:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.236 15:08:05 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:13.236 15:08:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.236 15:08:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:12:13.236 { 00:12:13.236 "name": "Malloc1", 00:12:13.236 "aliases": [ 00:12:13.236 "1406f126-f93d-4b32-bb2e-40dc97830390" 00:12:13.236 ], 00:12:13.236 "product_name": "Malloc disk", 00:12:13.236 "block_size": 512, 00:12:13.236 "num_blocks": 1048576, 00:12:13.236 "uuid": "1406f126-f93d-4b32-bb2e-40dc97830390", 00:12:13.236 "assigned_rate_limits": { 00:12:13.236 "rw_ios_per_sec": 0, 00:12:13.236 "rw_mbytes_per_sec": 0, 00:12:13.236 "r_mbytes_per_sec": 0, 00:12:13.236 "w_mbytes_per_sec": 0 00:12:13.236 }, 00:12:13.236 "claimed": true, 00:12:13.236 "claim_type": "exclusive_write", 00:12:13.236 "zoned": false, 00:12:13.236 "supported_io_types": { 00:12:13.236 "read": true, 00:12:13.236 "write": true, 00:12:13.236 "unmap": true, 00:12:13.236 "flush": true, 00:12:13.236 "reset": true, 00:12:13.236 "nvme_admin": false, 00:12:13.236 "nvme_io": false, 00:12:13.236 "nvme_io_md": false, 00:12:13.236 "write_zeroes": true, 00:12:13.236 "zcopy": true, 00:12:13.236 "get_zone_info": false, 00:12:13.236 "zone_management": false, 00:12:13.236 "zone_append": false, 00:12:13.236 "compare": false, 00:12:13.236 "compare_and_write": false, 00:12:13.236 "abort": true, 00:12:13.236 "seek_hole": false, 00:12:13.236 "seek_data": false, 00:12:13.236 "copy": true, 00:12:13.236 "nvme_iov_md": false 00:12:13.236 }, 00:12:13.237 "memory_domains": [ 00:12:13.237 { 00:12:13.237 "dma_device_id": "system", 00:12:13.237 "dma_device_type": 1 00:12:13.237 }, 00:12:13.237 { 00:12:13.237 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:13.237 "dma_device_type": 2 00:12:13.237 } 00:12:13.237 ], 00:12:13.237 
"driver_specific": {} 00:12:13.237 } 00:12:13.237 ]' 00:12:13.237 15:08:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:12:13.237 15:08:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:12:13.237 15:08:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:12:13.237 15:08:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:12:13.237 15:08:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:12:13.237 15:08:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:12:13.237 15:08:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:12:13.237 15:08:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:15.151 15:08:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:12:15.151 15:08:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:12:15.151 15:08:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:15.151 15:08:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n 
'' ]] 00:12:15.151 15:08:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:12:17.093 15:08:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:17.093 15:08:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:17.093 15:08:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:17.093 15:08:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:17.093 15:08:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:17.093 15:08:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:12:17.093 15:08:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:12:17.093 15:08:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:12:17.093 15:08:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:12:17.093 15:08:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:12:17.093 15:08:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:12:17.093 15:08:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:12:17.093 15:08:08 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:12:17.093 15:08:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:12:17.093 15:08:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:12:17.093 15:08:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:12:17.093 15:08:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:12:17.093 15:08:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:12:17.666 15:08:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:12:18.608 15:08:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:12:18.608 15:08:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:12:18.609 15:08:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:18.609 15:08:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:18.609 15:08:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:18.609 ************************************ 00:12:18.609 START TEST filesystem_in_capsule_ext4 00:12:18.609 ************************************ 00:12:18.609 15:08:10 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 00:12:18.609 15:08:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:12:18.609 15:08:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:18.609 15:08:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:12:18.609 15:08:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:12:18.609 15:08:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:12:18.609 15:08:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:12:18.609 15:08:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # local force 00:12:18.609 15:08:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:12:18.609 15:08:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:12:18.609 15:08:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:12:18.869 mke2fs 1.46.5 (30-Dec-2021) 00:12:18.869 Discarding device blocks: 
0/522240 done 00:12:18.869 Creating filesystem with 522240 1k blocks and 130560 inodes 00:12:18.869 Filesystem UUID: d5fac097-2fec-4eb6-a1e3-c0b8a5fb4ec9 00:12:18.869 Superblock backups stored on blocks: 00:12:18.869 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:12:18.869 00:12:18.869 Allocating group tables: 0/64 done 00:12:18.869 Writing inode tables: 0/64 done 00:12:19.130 Creating journal (8192 blocks): done 00:12:20.073 Writing superblocks and filesystem accounting information: 0/64 done 00:12:20.073 00:12:20.073 15:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@945 -- # return 0 00:12:20.073 15:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:20.657 15:08:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:20.657 15:08:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:12:20.657 15:08:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:20.657 15:08:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:12:20.657 15:08:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:12:20.657 15:08:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:20.657 15:08:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
target/filesystem.sh@37 -- # kill -0 153940 00:12:20.657 15:08:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:20.657 15:08:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:20.657 15:08:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:20.657 15:08:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:20.657 00:12:20.657 real 0m2.037s 00:12:20.657 user 0m0.035s 00:12:20.657 sys 0m0.065s 00:12:20.657 15:08:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:20.657 15:08:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:12:20.657 ************************************ 00:12:20.657 END TEST filesystem_in_capsule_ext4 00:12:20.657 ************************************ 00:12:20.917 15:08:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:12:20.917 15:08:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:20.917 15:08:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:20.917 15:08:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:20.917 ************************************ 00:12:20.917 START 
TEST filesystem_in_capsule_btrfs 00:12:20.917 ************************************ 00:12:20.917 15:08:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:12:20.917 15:08:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:12:20.917 15:08:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:20.917 15:08:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:12:20.917 15:08:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:12:20.917 15:08:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:12:20.917 15:08:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:12:20.917 15:08:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # local force 00:12:20.917 15:08:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:12:20.917 15:08:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:12:20.918 15:08:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:12:21.178 btrfs-progs v6.6.2 00:12:21.178 See https://btrfs.readthedocs.io for more information. 00:12:21.178 00:12:21.178 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:12:21.178 NOTE: several default settings have changed in version 5.15, please make sure 00:12:21.178 this does not affect your deployments: 00:12:21.178 - DUP for metadata (-m dup) 00:12:21.178 - enabled no-holes (-O no-holes) 00:12:21.178 - enabled free-space-tree (-R free-space-tree) 00:12:21.178 00:12:21.178 Label: (null) 00:12:21.178 UUID: bc084815-6df5-4017-a727-408900d5be17 00:12:21.178 Node size: 16384 00:12:21.178 Sector size: 4096 00:12:21.178 Filesystem size: 510.00MiB 00:12:21.178 Block group profiles: 00:12:21.178 Data: single 8.00MiB 00:12:21.178 Metadata: DUP 32.00MiB 00:12:21.178 System: DUP 8.00MiB 00:12:21.178 SSD detected: yes 00:12:21.178 Zoned device: no 00:12:21.178 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:12:21.178 Runtime features: free-space-tree 00:12:21.178 Checksum: crc32c 00:12:21.178 Number of devices: 1 00:12:21.178 Devices: 00:12:21.178 ID SIZE PATH 00:12:21.178 1 510.00MiB /dev/nvme0n1p1 00:12:21.178 00:12:21.178 15:08:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@945 -- # return 0 00:12:21.178 15:08:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:21.442 15:08:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:21.442 15:08:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:12:21.442 15:08:13 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:21.442 15:08:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:12:21.442 15:08:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:12:21.442 15:08:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:21.442 15:08:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 153940 00:12:21.442 15:08:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:21.442 15:08:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:21.702 15:08:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:21.702 15:08:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:21.702 00:12:21.702 real 0m0.737s 00:12:21.702 user 0m0.023s 00:12:21.702 sys 0m0.137s 00:12:21.703 15:08:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:21.703 15:08:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:12:21.703 ************************************ 00:12:21.703 END TEST 
filesystem_in_capsule_btrfs 00:12:21.703 ************************************ 00:12:21.703 15:08:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:12:21.703 15:08:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:21.703 15:08:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:21.703 15:08:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:21.703 ************************************ 00:12:21.703 START TEST filesystem_in_capsule_xfs 00:12:21.703 ************************************ 00:12:21.703 15:08:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:12:21.703 15:08:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:12:21.703 15:08:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:21.703 15:08:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:12:21.703 15:08:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:12:21.703 15:08:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:12:21.703 15:08:13 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local i=0 00:12:21.703 15:08:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # local force 00:12:21.703 15:08:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:12:21.703 15:08:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@934 -- # force=-f 00:12:21.703 15:08:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:12:21.703 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:12:21.703 = sectsz=512 attr=2, projid32bit=1 00:12:21.703 = crc=1 finobt=1, sparse=1, rmapbt=0 00:12:21.703 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:12:21.703 data = bsize=4096 blocks=130560, imaxpct=25 00:12:21.703 = sunit=0 swidth=0 blks 00:12:21.703 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:12:21.703 log =internal log bsize=4096 blocks=16384, version=2 00:12:21.703 = sectsz=512 sunit=0 blks, lazy-count=1 00:12:21.703 realtime =none extsz=4096 blocks=0, rtextents=0 00:12:22.646 Discarding blocks...Done. 
00:12:22.646 15:08:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@945 -- # return 0 00:12:22.646 15:08:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:25.194 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:25.194 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:12:25.194 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:25.194 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:12:25.194 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:12:25.194 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:25.194 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 153940 00:12:25.194 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:25.194 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:25.194 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 
00:12:25.194 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:25.194 00:12:25.194 real 0m3.219s 00:12:25.194 user 0m0.027s 00:12:25.194 sys 0m0.079s 00:12:25.194 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:25.194 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:25.194 ************************************ 00:12:25.194 END TEST filesystem_in_capsule_xfs 00:12:25.194 ************************************ 00:12:25.194 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:25.194 15:08:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:25.194 15:08:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:25.194 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:25.194 15:08:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:25.194 15:08:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:12:25.194 15:08:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:25.194 15:08:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:25.194 15:08:17 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:25.194 15:08:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:25.194 15:08:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:12:25.455 15:08:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:25.455 15:08:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.455 15:08:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:25.455 15:08:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.455 15:08:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:25.455 15:08:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 153940 00:12:25.455 15:08:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 153940 ']' 00:12:25.455 15:08:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # kill -0 153940 00:12:25.455 15:08:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # uname 00:12:25.455 15:08:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:25.455 15:08:17 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 153940 00:12:25.455 15:08:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:25.455 15:08:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:25.455 15:08:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 153940' 00:12:25.455 killing process with pid 153940 00:12:25.455 15:08:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@969 -- # kill 153940 00:12:25.455 15:08:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@974 -- # wait 153940 00:12:25.716 15:08:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:25.716 00:12:25.716 real 0m13.490s 00:12:25.716 user 0m53.209s 00:12:25.716 sys 0m1.212s 00:12:25.716 15:08:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:25.716 15:08:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:25.716 ************************************ 00:12:25.716 END TEST nvmf_filesystem_in_capsule 00:12:25.716 ************************************ 00:12:25.717 15:08:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:12:25.717 15:08:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:25.717 15:08:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:12:25.717 15:08:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:25.717 15:08:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:12:25.717 15:08:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:25.717 15:08:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:25.717 rmmod nvme_tcp 00:12:25.717 rmmod nvme_fabrics 00:12:25.717 rmmod nvme_keyring 00:12:25.717 15:08:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:25.717 15:08:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:12:25.717 15:08:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:12:25.717 15:08:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:12:25.717 15:08:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:25.717 15:08:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:25.717 15:08:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:25.717 15:08:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:25.717 15:08:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:25.717 15:08:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:25.717 15:08:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:25.717 15:08:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:28.265 15:08:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:28.265 00:12:28.265 real 
0m35.856s 00:12:28.265 user 1m44.732s 00:12:28.265 sys 0m8.042s 00:12:28.265 15:08:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:28.265 15:08:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:28.265 ************************************ 00:12:28.265 END TEST nvmf_filesystem 00:12:28.265 ************************************ 00:12:28.265 15:08:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:28.265 15:08:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:28.265 15:08:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:28.265 15:08:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:28.265 ************************************ 00:12:28.265 START TEST nvmf_target_discovery 00:12:28.265 ************************************ 00:12:28.265 15:08:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:28.265 * Looking for test storage... 
00:12:28.265 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:28.265 15:08:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:28.265 15:08:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:12:28.265 15:08:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:28.265 15:08:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:28.265 15:08:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:28.265 15:08:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:28.265 15:08:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:28.265 15:08:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:28.265 15:08:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:28.265 15:08:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:28.265 15:08:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:28.265 15:08:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:28.265 15:08:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:28.265 15:08:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:28.265 15:08:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:28.265 15:08:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:28.265 15:08:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:28.265 15:08:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:28.265 15:08:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:28.265 15:08:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:28.265 15:08:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:28.265 15:08:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:28.265 15:08:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:28.265 15:08:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:28.265 15:08:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:28.265 15:08:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:12:28.265 15:08:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:28.265 15:08:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:12:28.265 15:08:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:28.265 15:08:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:28.265 15:08:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:28.265 15:08:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:28.265 15:08:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:28.265 15:08:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:28.265 15:08:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:28.265 15:08:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:28.265 15:08:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:12:28.265 15:08:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:12:28.265 15:08:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 
-- # NVMF_PORT_REFERRAL=4430 00:12:28.265 15:08:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:12:28.265 15:08:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:12:28.265 15:08:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:28.265 15:08:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:28.265 15:08:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:28.265 15:08:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:28.265 15:08:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:28.265 15:08:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:28.265 15:08:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:28.265 15:08:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:28.265 15:08:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:28.265 15:08:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:28.265 15:08:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:12:28.265 15:08:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:36.415 15:08:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:36.415 15:08:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:12:36.415 
15:08:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:36.415 15:08:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:36.415 15:08:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:36.415 15:08:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:36.415 15:08:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:36.415 15:08:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:12:36.415 15:08:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:36.415 15:08:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:12:36.415 15:08:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:12:36.415 15:08:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:12:36.415 15:08:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:12:36.415 15:08:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:12:36.415 15:08:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:12:36.415 15:08:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:36.415 15:08:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:36.415 15:08:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:36.415 15:08:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:36.415 15:08:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:36.415 15:08:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:36.415 15:08:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:36.415 15:08:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:36.415 15:08:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:36.415 15:08:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:36.415 15:08:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:36.415 15:08:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:36.415 15:08:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:36.415 15:08:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:36.415 15:08:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:36.415 15:08:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:36.415 15:08:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:36.415 15:08:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:36.415 15:08:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 
(0x8086 - 0x159b)' 00:12:36.415 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:36.415 15:08:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:36.415 15:08:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:36.415 15:08:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:36.415 15:08:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:36.415 15:08:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:36.415 15:08:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:36.415 15:08:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:36.415 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:36.415 15:08:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:36.415 15:08:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:36.415 15:08:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:36.415 15:08:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:36.415 15:08:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:36.415 15:08:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:36.415 15:08:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:36.415 15:08:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:36.415 15:08:27 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:36.415 15:08:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:36.415 15:08:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:36.415 15:08:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:36.415 15:08:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:36.415 15:08:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:36.415 15:08:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:36.415 15:08:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:36.415 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:36.415 15:08:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:36.415 15:08:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:36.415 15:08:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:36.415 15:08:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:36.415 15:08:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:36.415 15:08:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:36.415 15:08:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:36.415 15:08:27 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:36.415 15:08:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:36.415 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:36.415 15:08:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:36.415 15:08:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:36.415 15:08:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:12:36.415 15:08:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:36.415 15:08:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:36.415 15:08:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:36.415 15:08:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:36.416 15:08:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:36.416 15:08:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:36.416 15:08:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:36.416 15:08:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:36.416 15:08:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:36.416 15:08:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:36.416 15:08:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@242 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:36.416 15:08:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:36.416 15:08:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:36.416 15:08:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:36.416 15:08:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:36.416 15:08:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:36.416 15:08:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:36.416 15:08:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:36.416 15:08:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:36.416 15:08:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:36.416 15:08:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:36.416 15:08:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:36.416 15:08:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:36.416 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:36.416 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.668 ms 00:12:36.416 00:12:36.416 --- 10.0.0.2 ping statistics --- 00:12:36.416 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:36.416 rtt min/avg/max/mdev = 0.668/0.668/0.668/0.000 ms 00:12:36.416 15:08:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:36.416 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:36.416 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.294 ms 00:12:36.416 00:12:36.416 --- 10.0.0.1 ping statistics --- 00:12:36.416 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:36.416 rtt min/avg/max/mdev = 0.294/0.294/0.294/0.000 ms 00:12:36.416 15:08:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:36.416 15:08:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:12:36.416 15:08:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:36.416 15:08:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:36.416 15:08:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:36.416 15:08:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:36.416 15:08:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:36.416 15:08:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:36.416 15:08:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:36.416 15:08:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:12:36.416 15:08:27 
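The ip/iptables calls scattered through the xtrace above wire up the test topology: the target-side interface is moved into a fresh network namespace, each side gets an address on 10.0.0.0/24, and port 4420 is opened before the cross-namespace pings verify connectivity. The following is a dry-run sketch of that sequence, reconstructed from the log; interface names, namespace name, and addresses are taken from the log, and `run()` only echoes each command, since executing them for real requires root.

```shell
# Dry-run sketch of the namespace wiring performed in the log above.
# run() prints instead of executing; the real commands need root.
run() { echo "+ $*"; }

NS=cvl_0_0_ns_spdk
TGT_IF=cvl_0_0      # target-side interface, moved into the namespace
INI_IF=cvl_0_1      # initiator-side interface, stays in the root ns

run ip -4 addr flush "$TGT_IF"
run ip -4 addr flush "$INI_IF"
run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INI_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                      # initiator -> target
run ip netns exec "$NS" ping -c 1 10.0.0.1  # target -> initiator
```

With this wiring in place, `nvmf_tgt` is launched under `ip netns exec cvl_0_0_ns_spdk` so that it listens on 10.0.0.2 inside the namespace while the initiator connects from the root namespace.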
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:36.416 15:08:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:36.416 15:08:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:36.416 15:08:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=160973 00:12:36.416 15:08:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 160973 00:12:36.416 15:08:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:36.416 15:08:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@831 -- # '[' -z 160973 ']' 00:12:36.416 15:08:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:36.416 15:08:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:36.416 15:08:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:36.416 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:36.416 15:08:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:36.416 15:08:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:36.416 [2024-07-25 15:08:27.516640] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:12:36.416 [2024-07-25 15:08:27.516709] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:36.416 EAL: No free 2048 kB hugepages reported on node 1 00:12:36.416 [2024-07-25 15:08:27.587147] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:36.416 [2024-07-25 15:08:27.662033] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:36.416 [2024-07-25 15:08:27.662072] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:36.416 [2024-07-25 15:08:27.662080] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:36.416 [2024-07-25 15:08:27.662086] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:36.416 [2024-07-25 15:08:27.662092] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:36.416 [2024-07-25 15:08:27.662233] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:36.416 [2024-07-25 15:08:27.662462] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:36.416 [2024-07-25 15:08:27.662463] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:36.416 [2024-07-25 15:08:27.662311] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:36.416 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:36.416 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # return 0 00:12:36.416 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:36.416 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:36.416 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:36.416 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:36.416 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:36.416 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.416 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:36.416 [2024-07-25 15:08:28.344206] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:36.416 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.416 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:12:36.416 15:08:28 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:36.416 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:12:36.416 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.416 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:36.416 Null1 00:12:36.416 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.416 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:36.416 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.416 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:36.416 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.416 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:12:36.416 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.416 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:36.416 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.416 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:36.417 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:12:36.417 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:36.417 [2024-07-25 15:08:28.404520] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:36.417 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.417 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:36.417 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:12:36.417 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.417 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:36.417 Null2 00:12:36.417 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.417 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:12:36.417 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.417 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:36.417 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.417 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:12:36.417 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.417 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:36.417 
15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.417 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:12:36.417 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.417 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:36.417 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.417 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:36.417 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:12:36.417 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.417 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:36.417 Null3 00:12:36.417 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.417 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:12:36.417 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.417 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:36.417 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.417 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode3 Null3 00:12:36.417 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.417 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:36.417 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.417 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:12:36.417 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.417 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:36.417 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.417 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:36.417 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:12:36.417 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.417 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:36.417 Null4 00:12:36.417 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.417 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:12:36.417 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.417 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:12:36.417 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.417 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:12:36.417 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.417 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:36.417 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.417 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:12:36.417 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.417 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:36.417 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.417 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:36.417 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.417 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:36.417 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.417 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:12:36.417 15:08:28 
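The `seq 1 4` loop in discovery.sh above builds four identical subsystems: a null bdev, a subsystem with a fixed serial number, the bdev attached as a namespace, and a TCP listener, followed by a discovery listener and a referral on port 4430. A dry-run sketch of that RPC sequence, reconstructed from the log, is below; `rpc.py` is SPDK's RPC client (the log's `rpc_cmd` wrapper around it), and the echo-only `rpc()` lets the sketch run without a live `nvmf_tgt`.

```shell
# Sketch of the RPC sequence issued by the loop above (dry run:
# rpc() echoes instead of calling a live SPDK target).
rpc() { echo "rpc.py $*"; }

for i in 1 2 3 4; do
  rpc bdev_null_create "Null$i" 102400 512
  rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
      -a -s "SPDK0000000000000$i"
  rpc nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
  rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
      -t tcp -a 10.0.0.2 -s 4420
done
rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
rpc nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430
```

This accounts for the six discovery log records reported next: the current discovery subsystem, the four NVMe subsystems on 4420, and the referral on 4430.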
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.417 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:36.417 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.417 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 4420 00:12:36.679 00:12:36.679 Discovery Log Number of Records 6, Generation counter 6 00:12:36.679 =====Discovery Log Entry 0====== 00:12:36.679 trtype: tcp 00:12:36.679 adrfam: ipv4 00:12:36.679 subtype: current discovery subsystem 00:12:36.679 treq: not required 00:12:36.679 portid: 0 00:12:36.679 trsvcid: 4420 00:12:36.679 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:36.679 traddr: 10.0.0.2 00:12:36.679 eflags: explicit discovery connections, duplicate discovery information 00:12:36.679 sectype: none 00:12:36.679 =====Discovery Log Entry 1====== 00:12:36.679 trtype: tcp 00:12:36.679 adrfam: ipv4 00:12:36.679 subtype: nvme subsystem 00:12:36.679 treq: not required 00:12:36.679 portid: 0 00:12:36.679 trsvcid: 4420 00:12:36.679 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:36.679 traddr: 10.0.0.2 00:12:36.679 eflags: none 00:12:36.679 sectype: none 00:12:36.679 =====Discovery Log Entry 2====== 00:12:36.679 trtype: tcp 00:12:36.679 adrfam: ipv4 00:12:36.679 subtype: nvme subsystem 00:12:36.679 treq: not required 00:12:36.679 portid: 0 00:12:36.679 trsvcid: 4420 00:12:36.679 subnqn: nqn.2016-06.io.spdk:cnode2 00:12:36.679 traddr: 10.0.0.2 00:12:36.679 eflags: none 00:12:36.679 sectype: none 00:12:36.679 =====Discovery Log Entry 3====== 00:12:36.679 trtype: tcp 00:12:36.679 adrfam: ipv4 00:12:36.679 subtype: nvme subsystem 00:12:36.679 treq: not required 00:12:36.679 portid: 
0 00:12:36.679 trsvcid: 4420 00:12:36.679 subnqn: nqn.2016-06.io.spdk:cnode3 00:12:36.679 traddr: 10.0.0.2 00:12:36.679 eflags: none 00:12:36.679 sectype: none 00:12:36.679 =====Discovery Log Entry 4====== 00:12:36.679 trtype: tcp 00:12:36.679 adrfam: ipv4 00:12:36.679 subtype: nvme subsystem 00:12:36.679 treq: not required 00:12:36.679 portid: 0 00:12:36.679 trsvcid: 4420 00:12:36.679 subnqn: nqn.2016-06.io.spdk:cnode4 00:12:36.679 traddr: 10.0.0.2 00:12:36.679 eflags: none 00:12:36.679 sectype: none 00:12:36.680 =====Discovery Log Entry 5====== 00:12:36.680 trtype: tcp 00:12:36.680 adrfam: ipv4 00:12:36.680 subtype: discovery subsystem referral 00:12:36.680 treq: not required 00:12:36.680 portid: 0 00:12:36.680 trsvcid: 4430 00:12:36.680 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:36.680 traddr: 10.0.0.2 00:12:36.680 eflags: none 00:12:36.680 sectype: none 00:12:36.680 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:12:36.680 Perform nvmf subsystem discovery via RPC 00:12:36.680 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:12:36.680 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.680 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:36.680 [ 00:12:36.680 { 00:12:36.680 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:36.680 "subtype": "Discovery", 00:12:36.680 "listen_addresses": [ 00:12:36.680 { 00:12:36.680 "trtype": "TCP", 00:12:36.680 "adrfam": "IPv4", 00:12:36.680 "traddr": "10.0.0.2", 00:12:36.680 "trsvcid": "4420" 00:12:36.680 } 00:12:36.680 ], 00:12:36.680 "allow_any_host": true, 00:12:36.680 "hosts": [] 00:12:36.680 }, 00:12:36.680 { 00:12:36.680 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:36.680 "subtype": "NVMe", 00:12:36.680 "listen_addresses": [ 
00:12:36.680 { 00:12:36.680 "trtype": "TCP", 00:12:36.680 "adrfam": "IPv4", 00:12:36.680 "traddr": "10.0.0.2", 00:12:36.680 "trsvcid": "4420" 00:12:36.680 } 00:12:36.680 ], 00:12:36.680 "allow_any_host": true, 00:12:36.680 "hosts": [], 00:12:36.680 "serial_number": "SPDK00000000000001", 00:12:36.680 "model_number": "SPDK bdev Controller", 00:12:36.680 "max_namespaces": 32, 00:12:36.680 "min_cntlid": 1, 00:12:36.680 "max_cntlid": 65519, 00:12:36.680 "namespaces": [ 00:12:36.680 { 00:12:36.680 "nsid": 1, 00:12:36.680 "bdev_name": "Null1", 00:12:36.680 "name": "Null1", 00:12:36.680 "nguid": "E26C9449C4214E6E8AD377E08204E540", 00:12:36.680 "uuid": "e26c9449-c421-4e6e-8ad3-77e08204e540" 00:12:36.680 } 00:12:36.680 ] 00:12:36.680 }, 00:12:36.680 { 00:12:36.680 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:12:36.680 "subtype": "NVMe", 00:12:36.680 "listen_addresses": [ 00:12:36.680 { 00:12:36.680 "trtype": "TCP", 00:12:36.680 "adrfam": "IPv4", 00:12:36.680 "traddr": "10.0.0.2", 00:12:36.680 "trsvcid": "4420" 00:12:36.680 } 00:12:36.680 ], 00:12:36.680 "allow_any_host": true, 00:12:36.680 "hosts": [], 00:12:36.680 "serial_number": "SPDK00000000000002", 00:12:36.680 "model_number": "SPDK bdev Controller", 00:12:36.680 "max_namespaces": 32, 00:12:36.680 "min_cntlid": 1, 00:12:36.680 "max_cntlid": 65519, 00:12:36.680 "namespaces": [ 00:12:36.680 { 00:12:36.680 "nsid": 1, 00:12:36.680 "bdev_name": "Null2", 00:12:36.680 "name": "Null2", 00:12:36.680 "nguid": "B9B5A37CC0A74FD8A9F5EA3FD0CEE2CC", 00:12:36.680 "uuid": "b9b5a37c-c0a7-4fd8-a9f5-ea3fd0cee2cc" 00:12:36.680 } 00:12:36.680 ] 00:12:36.680 }, 00:12:36.680 { 00:12:36.680 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:12:36.680 "subtype": "NVMe", 00:12:36.680 "listen_addresses": [ 00:12:36.680 { 00:12:36.680 "trtype": "TCP", 00:12:36.680 "adrfam": "IPv4", 00:12:36.680 "traddr": "10.0.0.2", 00:12:36.680 "trsvcid": "4420" 00:12:36.680 } 00:12:36.680 ], 00:12:36.680 "allow_any_host": true, 00:12:36.680 "hosts": [], 00:12:36.680 
"serial_number": "SPDK00000000000003", 00:12:36.680 "model_number": "SPDK bdev Controller", 00:12:36.680 "max_namespaces": 32, 00:12:36.680 "min_cntlid": 1, 00:12:36.680 "max_cntlid": 65519, 00:12:36.680 "namespaces": [ 00:12:36.680 { 00:12:36.680 "nsid": 1, 00:12:36.680 "bdev_name": "Null3", 00:12:36.680 "name": "Null3", 00:12:36.680 "nguid": "2F448F7616D34D6195BF0D427149E679", 00:12:36.680 "uuid": "2f448f76-16d3-4d61-95bf-0d427149e679" 00:12:36.680 } 00:12:36.680 ] 00:12:36.680 }, 00:12:36.680 { 00:12:36.680 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:12:36.680 "subtype": "NVMe", 00:12:36.680 "listen_addresses": [ 00:12:36.680 { 00:12:36.680 "trtype": "TCP", 00:12:36.680 "adrfam": "IPv4", 00:12:36.680 "traddr": "10.0.0.2", 00:12:36.680 "trsvcid": "4420" 00:12:36.680 } 00:12:36.680 ], 00:12:36.680 "allow_any_host": true, 00:12:36.680 "hosts": [], 00:12:36.680 "serial_number": "SPDK00000000000004", 00:12:36.680 "model_number": "SPDK bdev Controller", 00:12:36.680 "max_namespaces": 32, 00:12:36.680 "min_cntlid": 1, 00:12:36.680 "max_cntlid": 65519, 00:12:36.680 "namespaces": [ 00:12:36.680 { 00:12:36.680 "nsid": 1, 00:12:36.680 "bdev_name": "Null4", 00:12:36.680 "name": "Null4", 00:12:36.680 "nguid": "552679404FCB429D845DAE98F6AE26B8", 00:12:36.680 "uuid": "55267940-4fcb-429d-845d-ae98f6ae26b8" 00:12:36.680 } 00:12:36.680 ] 00:12:36.680 } 00:12:36.680 ] 00:12:36.680 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.680 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:12:36.680 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:36.680 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:36.680 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # 
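The `nvmf_get_subsystems` output above is a JSON array, one object per subsystem, which makes it easy to post-process when verifying a test run. A minimal sketch, using a trimmed sample of the structure shown in the log (field names match what the log prints; `namespace_map` is a hypothetical helper, not part of SPDK):

```python
# Filter nvmf_get_subsystems output down to NQN -> namespace bdevs.
# `sample` is trimmed from the log above; field names follow it.
import json

sample = json.loads("""
[
  {"nqn": "nqn.2014-08.org.nvmexpress.discovery", "subtype": "Discovery",
   "listen_addresses": [], "allow_any_host": true, "hosts": []},
  {"nqn": "nqn.2016-06.io.spdk:cnode1", "subtype": "NVMe",
   "listen_addresses": [{"trtype": "TCP", "adrfam": "IPv4",
                         "traddr": "10.0.0.2", "trsvcid": "4420"}],
   "namespaces": [{"nsid": 1, "bdev_name": "Null1"}]}
]
""")

def namespace_map(subsystems):
    """Map each NVMe subsystem NQN to the bdev names it exposes,
    skipping the discovery subsystem (subtype "Discovery")."""
    return {
        s["nqn"]: [ns["bdev_name"] for ns in s.get("namespaces", [])]
        for s in subsystems
        if s.get("subtype") == "NVMe"
    }

print(namespace_map(sample))
# -> {'nqn.2016-06.io.spdk:cnode1': ['Null1']}
```

On the full output above this would yield cnode1 through cnode4 mapped to Null1 through Null4, mirroring what the teardown loop that follows deletes.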
xtrace_disable 00:12:36.680 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:36.680 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.680 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:12:36.680 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.680 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:36.680 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.680 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:36.680 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:12:36.680 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.680 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:36.680 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.680 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:12:36.680 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.680 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:36.680 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.680 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 
1 4) 00:12:36.680 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:12:36.680 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.680 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:36.680 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.680 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:12:36.680 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.680 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:36.680 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.680 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:36.680 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:12:36.680 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.680 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:36.680 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.680 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:12:36.681 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.681 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:12:36.681 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.681 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:12:36.681 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.681 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:36.681 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.681 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:12:36.681 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:12:36.681 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.681 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:36.681 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.942 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:12:36.942 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:12:36.942 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:12:36.942 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:12:36.942 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:36.942 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:12:36.942 
15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:36.942 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:12:36.942 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:36.942 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:36.942 rmmod nvme_tcp 00:12:36.942 rmmod nvme_fabrics 00:12:36.942 rmmod nvme_keyring 00:12:36.942 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:36.943 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:12:36.943 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:12:36.943 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 160973 ']' 00:12:36.943 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 160973 00:12:36.943 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@950 -- # '[' -z 160973 ']' 00:12:36.943 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # kill -0 160973 00:12:36.943 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # uname 00:12:36.943 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:36.943 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 160973 00:12:36.943 15:08:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:36.943 15:08:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # '[' 
reactor_0 = sudo ']' 00:12:36.943 15:08:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 160973' 00:12:36.943 killing process with pid 160973 00:12:36.943 15:08:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@969 -- # kill 160973 00:12:36.943 15:08:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@974 -- # wait 160973 00:12:37.204 15:08:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:37.204 15:08:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:37.204 15:08:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:37.204 15:08:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:37.204 15:08:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:37.204 15:08:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:37.204 15:08:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:37.204 15:08:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:39.175 15:08:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:39.175 00:12:39.175 real 0m11.253s 00:12:39.175 user 0m8.156s 00:12:39.175 sys 0m5.872s 00:12:39.176 15:08:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:39.176 15:08:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:39.176 ************************************ 00:12:39.176 END TEST 
nvmf_target_discovery 00:12:39.176 ************************************ 00:12:39.176 15:08:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:39.176 15:08:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:39.176 15:08:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:39.176 15:08:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:39.176 ************************************ 00:12:39.176 START TEST nvmf_referrals 00:12:39.176 ************************************ 00:12:39.176 15:08:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:39.437 * Looking for test storage... 00:12:39.437 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:39.437 15:08:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:39.437 15:08:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:12:39.437 15:08:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:39.437 15:08:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:39.437 15:08:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:39.437 15:08:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:39.437 15:08:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:39.437 15:08:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 
00:12:39.437 15:08:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:39.437 15:08:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:39.437 15:08:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:39.437 15:08:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:39.437 15:08:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:39.437 15:08:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:39.437 15:08:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:39.437 15:08:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:39.437 15:08:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:39.437 15:08:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:39.437 15:08:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:39.437 15:08:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:39.437 15:08:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:39.437 15:08:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:39.437 15:08:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:39.437 15:08:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:39.437 15:08:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:39.437 15:08:31 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:12:39.438 15:08:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:39.438 15:08:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:12:39.438 15:08:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:39.438 15:08:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:39.438 15:08:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:39.438 15:08:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:39.438 15:08:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:39.438 15:08:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:39.438 15:08:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:39.438 15:08:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:39.438 15:08:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:12:39.438 15:08:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # 
NVMF_REFERRAL_IP_2=127.0.0.3 00:12:39.438 15:08:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:12:39.438 15:08:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:12:39.438 15:08:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:12:39.438 15:08:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:12:39.438 15:08:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:12:39.438 15:08:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:39.438 15:08:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:39.438 15:08:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:39.438 15:08:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:39.438 15:08:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:39.438 15:08:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:39.438 15:08:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:39.438 15:08:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:39.438 15:08:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:39.438 15:08:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:39.438 15:08:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:12:39.438 15:08:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@10 -- # set +x 00:12:46.031 15:08:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:46.031 15:08:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:12:46.031 15:08:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:46.031 15:08:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:46.031 15:08:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:46.031 15:08:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:46.031 15:08:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:46.031 15:08:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:12:46.031 15:08:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:46.031 15:08:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:12:46.031 15:08:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:12:46.031 15:08:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:12:46.031 15:08:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:12:46.031 15:08:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:12:46.031 15:08:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:12:46.031 15:08:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:46.031 15:08:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:46.031 15:08:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:46.031 15:08:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:46.031 15:08:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:46.031 15:08:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:46.031 15:08:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:46.031 15:08:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:46.031 15:08:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:46.031 15:08:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:46.031 15:08:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:46.031 15:08:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:46.031 15:08:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:46.031 15:08:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:46.031 15:08:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:46.031 15:08:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:46.031 15:08:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:46.031 15:08:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:46.031 15:08:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:46.031 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:46.031 15:08:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:46.031 15:08:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:46.031 15:08:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:46.031 15:08:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:46.031 15:08:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:46.031 15:08:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:46.031 15:08:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:46.031 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:46.031 15:08:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:46.031 15:08:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:46.031 15:08:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:46.031 15:08:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:46.031 15:08:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:46.031 15:08:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:46.031 15:08:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:46.031 15:08:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:46.031 15:08:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in 
"${pci_devs[@]}" 00:12:46.031 15:08:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:46.031 15:08:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:46.031 15:08:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:46.031 15:08:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:46.031 15:08:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:46.031 15:08:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:46.031 15:08:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:46.031 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:46.032 15:08:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:46.032 15:08:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:46.032 15:08:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:46.032 15:08:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:46.032 15:08:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:46.032 15:08:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:46.032 15:08:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:46.032 15:08:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:46.032 15:08:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found 
net devices under 0000:4b:00.1: cvl_0_1' 00:12:46.032 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:46.032 15:08:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:46.032 15:08:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:46.032 15:08:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:12:46.032 15:08:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:46.032 15:08:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:46.032 15:08:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:46.032 15:08:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:46.032 15:08:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:46.032 15:08:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:46.032 15:08:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:46.032 15:08:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:46.032 15:08:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:46.032 15:08:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:46.032 15:08:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:46.032 15:08:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:46.032 15:08:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:46.032 15:08:38 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:46.032 15:08:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:46.032 15:08:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:46.294 15:08:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:46.294 15:08:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:46.294 15:08:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:46.294 15:08:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:46.294 15:08:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:46.294 15:08:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:46.294 15:08:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:46.294 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:46.294 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.632 ms 00:12:46.294 00:12:46.294 --- 10.0.0.2 ping statistics --- 00:12:46.294 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:46.294 rtt min/avg/max/mdev = 0.632/0.632/0.632/0.000 ms 00:12:46.294 15:08:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:46.294 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:46.294 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.585 ms 00:12:46.294 00:12:46.294 --- 10.0.0.1 ping statistics --- 00:12:46.294 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:46.294 rtt min/avg/max/mdev = 0.585/0.585/0.585/0.000 ms 00:12:46.555 15:08:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:46.555 15:08:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:12:46.555 15:08:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:46.555 15:08:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:46.555 15:08:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:46.555 15:08:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:46.555 15:08:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:46.555 15:08:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:46.555 15:08:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:46.555 15:08:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:12:46.555 15:08:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:46.555 15:08:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:46.555 15:08:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:46.555 15:08:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:46.555 15:08:38 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=165335 00:12:46.555 15:08:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 165335 00:12:46.555 15:08:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@831 -- # '[' -z 165335 ']' 00:12:46.555 15:08:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:46.555 15:08:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:46.555 15:08:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:46.555 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:46.555 15:08:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:46.555 15:08:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:46.555 [2024-07-25 15:08:38.559862] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:12:46.555 [2024-07-25 15:08:38.559911] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:46.555 EAL: No free 2048 kB hugepages reported on node 1 00:12:46.555 [2024-07-25 15:08:38.618958] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:46.555 [2024-07-25 15:08:38.685050] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:46.555 [2024-07-25 15:08:38.685085] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:46.555 [2024-07-25 15:08:38.685092] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:46.555 [2024-07-25 15:08:38.685099] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:46.555 [2024-07-25 15:08:38.685104] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:46.555 [2024-07-25 15:08:38.685256] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:46.555 [2024-07-25 15:08:38.685459] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:46.555 [2024-07-25 15:08:38.685462] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:46.555 [2024-07-25 15:08:38.685329] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:47.499 15:08:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:47.499 15:08:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # return 0 00:12:47.499 15:08:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:47.499 15:08:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:47.499 15:08:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:47.499 15:08:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:47.499 15:08:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:47.499 15:08:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.499 15:08:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:47.499 [2024-07-25 15:08:39.406346] tcp.c: 
677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:47.499 15:08:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.499 15:08:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:12:47.499 15:08:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.499 15:08:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:47.499 [2024-07-25 15:08:39.419742] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:12:47.499 15:08:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.499 15:08:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:12:47.499 15:08:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.499 15:08:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:47.499 15:08:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.499 15:08:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:12:47.499 15:08:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.499 15:08:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:47.499 15:08:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.499 15:08:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:12:47.499 15:08:39 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.499 15:08:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:47.499 15:08:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.499 15:08:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:47.499 15:08:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:12:47.499 15:08:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.499 15:08:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:47.499 15:08:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.499 15:08:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:12:47.499 15:08:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:12:47.499 15:08:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:47.499 15:08:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:47.499 15:08:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:47.499 15:08:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.499 15:08:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:47.499 15:08:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:47.499 15:08:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.499 15:08:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:47.499 15:08:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:47.499 15:08:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:12:47.499 15:08:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:47.499 15:08:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:47.499 15:08:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:47.499 15:08:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:47.499 15:08:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:47.759 15:08:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:47.760 15:08:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:47.760 15:08:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:12:47.760 15:08:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.760 15:08:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:47.760 15:08:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.760 15:08:39 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:12:47.760 15:08:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.760 15:08:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:47.760 15:08:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.760 15:08:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:12:47.760 15:08:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.760 15:08:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:47.760 15:08:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.760 15:08:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:47.760 15:08:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:12:47.760 15:08:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.760 15:08:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:47.760 15:08:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.760 15:08:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:12:47.760 15:08:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:12:47.760 15:08:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:47.760 15:08:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ 
nvme == \n\v\m\e ]] 00:12:47.760 15:08:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:47.760 15:08:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:47.760 15:08:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:48.021 15:08:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:48.021 15:08:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:12:48.021 15:08:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:12:48.021 15:08:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.021 15:08:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:48.021 15:08:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.021 15:08:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:48.021 15:08:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.021 15:08:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:48.021 15:08:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.021 15:08:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:12:48.021 15:08:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:48.021 15:08:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:48.021 15:08:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:48.021 15:08:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.021 15:08:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:48.021 15:08:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:48.021 15:08:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.021 15:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:12:48.021 15:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:48.021 15:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:12:48.021 15:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:48.021 15:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:48.021 15:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:48.021 15:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:48.021 15:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:48.021 15:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:12:48.021 15:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:48.021 15:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:12:48.282 15:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:12:48.282 15:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:48.282 15:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:48.282 15:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:48.282 15:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:12:48.282 15:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:12:48.282 15:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:12:48.282 15:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:48.282 15:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:48.282 15:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery 
subsystem referral")' 00:12:48.543 15:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:48.543 15:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:48.543 15:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.543 15:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:48.543 15:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.543 15:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:12:48.543 15:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:48.543 15:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:48.543 15:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:48.543 15:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.543 15:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:48.543 15:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:48.543 15:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.543 15:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:12:48.543 15:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:48.543 15:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals 
-- target/referrals.sh@74 -- # get_referral_ips nvme 00:12:48.543 15:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:48.543 15:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:48.543 15:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:48.543 15:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:48.543 15:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:48.543 15:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:12:48.543 15:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:48.543 15:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:12:48.543 15:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:12:48.543 15:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:48.543 15:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:48.543 15:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:48.805 15:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:12:48.805 15:08:40 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:12:48.805 15:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:12:48.805 15:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:48.805 15:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:48.805 15:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:49.067 15:08:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:49.067 15:08:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:12:49.067 15:08:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.067 15:08:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:49.067 15:08:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.067 15:08:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:49.067 15:08:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:12:49.067 15:08:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.067 15:08:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@10 -- # set +x 00:12:49.067 15:08:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.067 15:08:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:12:49.067 15:08:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:12:49.067 15:08:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:49.067 15:08:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:49.067 15:08:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:49.067 15:08:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:49.067 15:08:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:49.067 15:08:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:49.067 15:08:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:12:49.067 15:08:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:12:49.067 15:08:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:12:49.067 15:08:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:49.067 15:08:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:12:49.067 15:08:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:49.067 15:08:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@120 -- # 
set +e 00:12:49.067 15:08:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:49.067 15:08:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:49.067 rmmod nvme_tcp 00:12:49.067 rmmod nvme_fabrics 00:12:49.067 rmmod nvme_keyring 00:12:49.329 15:08:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:49.329 15:08:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:12:49.329 15:08:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:12:49.329 15:08:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 165335 ']' 00:12:49.329 15:08:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 165335 00:12:49.329 15:08:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@950 -- # '[' -z 165335 ']' 00:12:49.329 15:08:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # kill -0 165335 00:12:49.329 15:08:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # uname 00:12:49.329 15:08:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:49.329 15:08:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 165335 00:12:49.329 15:08:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:49.329 15:08:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:49.329 15:08:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@968 -- # echo 'killing process with pid 165335' 00:12:49.329 killing process with pid 165335 00:12:49.329 15:08:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@969 -- 
# kill 165335 00:12:49.329 15:08:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@974 -- # wait 165335 00:12:49.329 15:08:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:49.329 15:08:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:49.329 15:08:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:49.329 15:08:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:49.329 15:08:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:49.329 15:08:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:49.329 15:08:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:49.329 15:08:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:51.878 15:08:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:51.878 00:12:51.878 real 0m12.222s 00:12:51.878 user 0m14.016s 00:12:51.879 sys 0m5.883s 00:12:51.879 15:08:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:51.879 15:08:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:51.879 ************************************ 00:12:51.879 END TEST nvmf_referrals 00:12:51.879 ************************************ 00:12:51.879 15:08:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:51.879 15:08:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:51.879 
15:08:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:51.879 15:08:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:51.879 ************************************ 00:12:51.879 START TEST nvmf_connect_disconnect 00:12:51.879 ************************************ 00:12:51.879 15:08:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:51.879 * Looking for test storage... 00:12:51.879 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:51.879 15:08:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:51.879 15:08:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:12:51.879 15:08:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:51.879 15:08:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:51.879 15:08:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:51.879 15:08:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:51.879 15:08:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:51.879 15:08:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:51.879 15:08:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:51.879 15:08:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:51.879 15:08:43 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:51.879 15:08:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:51.879 15:08:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:51.879 15:08:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:51.879 15:08:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:51.879 15:08:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:51.879 15:08:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:51.879 15:08:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:51.879 15:08:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:51.879 15:08:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:51.879 15:08:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:51.879 15:08:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:51.879 15:08:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:51.879 15:08:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:51.879 15:08:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:51.879 15:08:43 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:12:51.879 15:08:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:51.879 15:08:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:12:51.879 15:08:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:51.879 15:08:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:51.879 15:08:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:51.879 15:08:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:51.879 15:08:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:51.879 15:08:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:51.879 15:08:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:51.879 15:08:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:51.879 15:08:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 
00:12:51.879 15:08:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:51.879 15:08:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:12:51.879 15:08:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:51.879 15:08:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:51.879 15:08:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:51.879 15:08:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:51.879 15:08:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:51.879 15:08:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:51.879 15:08:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:51.879 15:08:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:51.879 15:08:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:51.879 15:08:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:51.879 15:08:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:12:51.879 15:08:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:58.471 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:58.471 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 
-- # pci_devs=() 00:12:58.471 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:58.471 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:58.471 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:58.471 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:58.471 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:58.471 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:12:58.471 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:58.471 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:12:58.471 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:12:58.471 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:12:58.471 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:12:58.471 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:12:58.471 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:12:58.471 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:58.471 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:58.471 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:58.471 15:08:50 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:58.471 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:58.471 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:58.471 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:58.471 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:58.471 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:58.471 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:58.471 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:58.471 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:58.471 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:58.471 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:58.471 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:58.471 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:58.471 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:58.471 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:58.471 15:08:50 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:58.471 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:58.471 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:58.471 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:58.471 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:58.471 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:58.471 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:58.471 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:58.471 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:58.471 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:58.471 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:58.471 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:58.471 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:58.471 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:58.471 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:58.471 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:58.471 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:58.471 15:08:50 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:58.471 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:58.471 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:58.471 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:58.471 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:58.471 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:58.471 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:58.471 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:58.471 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:58.471 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:58.471 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:58.471 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:58.471 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:58.471 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:58.471 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:58.471 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:58.471 
15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:58.471 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:58.471 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:58.471 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:58.471 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:58.471 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:58.471 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:12:58.471 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:58.471 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:58.471 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:58.471 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:58.471 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:58.471 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:58.471 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:58.471 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:58.471 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:58.471 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect 
-- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:58.471 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:58.471 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:58.471 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:58.471 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:58.471 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:58.471 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:58.733 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:58.733 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:58.733 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:58.733 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:58.733 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:58.733 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:58.733 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:58.733 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:58.733 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.603 ms 00:12:58.733 00:12:58.733 --- 10.0.0.2 ping statistics --- 00:12:58.733 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:58.733 rtt min/avg/max/mdev = 0.603/0.603/0.603/0.000 ms 00:12:58.733 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:58.733 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:58.733 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.366 ms 00:12:58.733 00:12:58.733 --- 10.0.0.1 ping statistics --- 00:12:58.733 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:58.733 rtt min/avg/max/mdev = 0.366/0.366/0.366/0.000 ms 00:12:58.733 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:58.733 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:12:58.733 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:58.733 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:58.733 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:58.733 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:58.733 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:58.733 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:58.994 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:58.994 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 
00:12:58.994 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:58.994 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:58.994 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:58.994 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=170095 00:12:58.994 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 170095 00:12:58.994 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:58.994 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # '[' -z 170095 ']' 00:12:58.994 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:58.994 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:58.994 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:58.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:58.994 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:58.994 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:58.994 [2024-07-25 15:08:51.023748] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:12:58.994 [2024-07-25 15:08:51.023801] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:58.994 EAL: No free 2048 kB hugepages reported on node 1 00:12:58.994 [2024-07-25 15:08:51.091391] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:58.994 [2024-07-25 15:08:51.160277] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:58.994 [2024-07-25 15:08:51.160313] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:58.994 [2024-07-25 15:08:51.160321] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:58.994 [2024-07-25 15:08:51.160328] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:58.994 [2024-07-25 15:08:51.160334] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:58.994 [2024-07-25 15:08:51.160470] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:58.994 [2024-07-25 15:08:51.160583] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:58.994 [2024-07-25 15:08:51.160739] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:58.994 [2024-07-25 15:08:51.160740] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:59.937 15:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:59.937 15:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # return 0 00:12:59.938 15:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:59.938 15:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:59.938 15:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:59.938 15:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:59.938 15:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:59.938 15:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.938 15:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:59.938 [2024-07-25 15:08:51.838149] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:59.938 15:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.938 15:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 
64 512 00:12:59.938 15:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.938 15:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:59.938 15:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.938 15:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:12:59.938 15:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:59.938 15:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.938 15:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:59.938 15:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.938 15:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:59.938 15:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.938 15:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:59.938 15:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.938 15:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:59.938 15:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.938 15:08:51 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:59.938 [2024-07-25 15:08:51.897570] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:59.938 15:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.938 15:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:12:59.938 15:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:12:59.938 15:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:13:04.151 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:07.493 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:10.802 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:15.012 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:18.315 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:18.315 15:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:13:18.315 15:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:13:18.315 15:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:18.315 15:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:13:18.315 15:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:18.315 15:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:13:18.315 15:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:18.315 15:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect 
-- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:18.315 rmmod nvme_tcp 00:13:18.315 rmmod nvme_fabrics 00:13:18.315 rmmod nvme_keyring 00:13:18.315 15:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:18.315 15:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:13:18.315 15:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:13:18.315 15:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 170095 ']' 00:13:18.315 15:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 170095 00:13:18.315 15:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # '[' -z 170095 ']' 00:13:18.315 15:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # kill -0 170095 00:13:18.315 15:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # uname 00:13:18.315 15:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:18.315 15:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 170095 00:13:18.315 15:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:18.315 15:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:18.315 15:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 170095' 00:13:18.315 killing process with pid 170095 00:13:18.315 15:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@969 -- # kill 170095 00:13:18.315 15:09:10 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@974 -- # wait 170095 00:13:18.315 15:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:18.315 15:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:18.315 15:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:18.315 15:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:18.315 15:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:18.315 15:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:18.315 15:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:18.315 15:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:20.865 15:09:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:20.865 00:13:20.865 real 0m28.938s 00:13:20.865 user 1m18.914s 00:13:20.865 sys 0m6.607s 00:13:20.865 15:09:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:20.865 15:09:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:20.865 ************************************ 00:13:20.865 END TEST nvmf_connect_disconnect 00:13:20.865 ************************************ 00:13:20.865 15:09:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:13:20.865 15:09:12 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:20.865 15:09:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:20.865 15:09:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:20.865 ************************************ 00:13:20.865 START TEST nvmf_multitarget 00:13:20.865 ************************************ 00:13:20.865 15:09:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:13:20.865 * Looking for test storage... 00:13:20.865 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:20.865 15:09:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:20.865 15:09:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:13:20.865 15:09:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:20.865 15:09:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:20.865 15:09:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:20.865 15:09:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:20.865 15:09:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:20.865 15:09:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:20.865 15:09:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:20.865 15:09:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:20.865 15:09:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:20.865 15:09:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:20.865 15:09:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:20.865 15:09:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:20.865 15:09:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:20.865 15:09:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:20.865 15:09:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:20.865 15:09:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:20.865 15:09:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:20.865 15:09:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:20.865 15:09:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:20.865 15:09:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:20.865 15:09:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.865 15:09:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.866 15:09:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.866 15:09:12 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:13:20.866 15:09:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.866 15:09:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:13:20.866 15:09:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:20.866 15:09:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:20.866 15:09:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:20.866 15:09:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:20.866 15:09:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:20.866 15:09:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:20.866 15:09:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:20.866 15:09:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:20.866 15:09:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:20.866 
15:09:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:13:20.866 15:09:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:20.866 15:09:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:20.866 15:09:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:20.866 15:09:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:20.866 15:09:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:20.866 15:09:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:20.866 15:09:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:20.866 15:09:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:20.866 15:09:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:20.866 15:09:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:20.866 15:09:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:13:20.866 15:09:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:27.459 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:27.459 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:13:27.459 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:27.459 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:27.459 15:09:19 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:27.459 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:27.459 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:27.459 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:13:27.459 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:27.459 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:13:27.459 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:13:27.459 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:13:27.459 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:13:27.459 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:13:27.459 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:13:27.459 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:27.459 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:27.459 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:27.459 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:27.459 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:27.459 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:27.459 15:09:19 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:27.459 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:27.460 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:27.460 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:27.460 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:27.460 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:27.460 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:27.460 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:27.460 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:27.460 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:27.460 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:27.460 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:27.460 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:27.460 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:27.460 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:27.460 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:27.460 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:13:27.460 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:27.460 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:27.460 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:27.460 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:27.460 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:27.460 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:27.460 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:27.460 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:27.460 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:27.460 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:27.460 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:27.460 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:27.460 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:27.460 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:27.460 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:27.460 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:27.460 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:27.460 15:09:19 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:27.460 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:27.460 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:27.460 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:27.460 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:27.460 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:27.460 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:27.460 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:27.460 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:27.460 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:27.460 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:27.460 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:27.460 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:27.460 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:27.460 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:27.460 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:27.460 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:27.460 15:09:19 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:13:27.460 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:27.460 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:27.460 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:27.460 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:27.460 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:27.460 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:27.460 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:27.460 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:27.460 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:27.460 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:27.460 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:27.460 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:27.460 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:27.460 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:27.460 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:27.460 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set 
cvl_0_0 netns cvl_0_0_ns_spdk 00:13:27.460 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:27.460 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:27.460 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:27.460 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:27.460 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:27.460 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:27.460 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:27.460 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:27.460 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.683 ms 00:13:27.460 00:13:27.460 --- 10.0.0.2 ping statistics --- 00:13:27.460 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:27.460 rtt min/avg/max/mdev = 0.683/0.683/0.683/0.000 ms 00:13:27.460 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:27.460 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:27.460 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.471 ms 00:13:27.460 00:13:27.460 --- 10.0.0.1 ping statistics --- 00:13:27.460 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:27.460 rtt min/avg/max/mdev = 0.471/0.471/0.471/0.000 ms 00:13:27.460 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:27.460 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:13:27.460 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:27.460 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:27.460 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:27.460 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:27.460 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:27.460 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:27.460 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:27.460 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:13:27.460 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:27.460 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:27.460 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:27.460 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=178583 00:13:27.460 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # 
waitforlisten 178583 00:13:27.460 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:27.461 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@831 -- # '[' -z 178583 ']' 00:13:27.461 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:27.461 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:27.461 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:27.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:27.461 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:27.461 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:27.722 [2024-07-25 15:09:19.691571] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:13:27.722 [2024-07-25 15:09:19.691637] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:27.722 EAL: No free 2048 kB hugepages reported on node 1 00:13:27.722 [2024-07-25 15:09:19.765831] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:27.722 [2024-07-25 15:09:19.841117] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:27.722 [2024-07-25 15:09:19.841158] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:27.722 [2024-07-25 15:09:19.841166] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:27.722 [2024-07-25 15:09:19.841173] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:27.722 [2024-07-25 15:09:19.841179] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:27.722 [2024-07-25 15:09:19.841266] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:27.722 [2024-07-25 15:09:19.841390] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:27.722 [2024-07-25 15:09:19.841547] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:27.722 [2024-07-25 15:09:19.841548] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:28.295 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:28.295 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # return 0 00:13:28.295 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:28.295 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:28.295 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:28.556 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:28.556 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:28.556 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:28.556 15:09:20 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:13:28.556 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:13:28.556 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:13:28.556 "nvmf_tgt_1" 00:13:28.556 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:13:28.817 "nvmf_tgt_2" 00:13:28.817 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:28.817 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:13:28.817 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:13:28.817 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:13:28.817 true 00:13:29.078 15:09:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:13:29.078 true 00:13:29.078 15:09:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:29.078 15:09:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:13:29.078 15:09:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:13:29.078 15:09:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:13:29.078 15:09:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:13:29.078 15:09:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:29.078 15:09:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:13:29.078 15:09:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:29.078 15:09:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:13:29.078 15:09:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:29.078 15:09:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:29.078 rmmod nvme_tcp 00:13:29.078 rmmod nvme_fabrics 00:13:29.078 rmmod nvme_keyring 00:13:29.339 15:09:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:29.339 15:09:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:13:29.339 15:09:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:13:29.339 15:09:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 178583 ']' 00:13:29.339 15:09:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 178583 00:13:29.339 15:09:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@950 -- # '[' -z 178583 ']' 00:13:29.339 15:09:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # kill -0 178583 00:13:29.339 15:09:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # uname 00:13:29.339 15:09:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:29.339 15:09:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 178583 00:13:29.339 15:09:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:29.339 15:09:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:29.339 15:09:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@968 -- # echo 'killing process with pid 178583' 00:13:29.339 killing process with pid 178583 00:13:29.339 15:09:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@969 -- # kill 178583 00:13:29.339 15:09:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@974 -- # wait 178583 00:13:29.339 15:09:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:29.339 15:09:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:29.339 15:09:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:29.339 15:09:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:29.339 15:09:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:29.339 15:09:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:29.339 15:09:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:29.339 15:09:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:31.964 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:31.964 00:13:31.964 real 0m10.936s 
00:13:31.964 user 0m9.169s 00:13:31.964 sys 0m5.596s 00:13:31.964 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:31.964 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:31.964 ************************************ 00:13:31.964 END TEST nvmf_multitarget 00:13:31.964 ************************************ 00:13:31.964 15:09:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:13:31.964 15:09:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:31.964 15:09:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:31.964 15:09:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:31.964 ************************************ 00:13:31.964 START TEST nvmf_rpc 00:13:31.964 ************************************ 00:13:31.964 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:13:31.964 * Looking for test storage... 
00:13:31.964 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:31.964 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:31.964 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:13:31.964 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:31.964 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:31.964 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:31.964 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:31.964 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:31.964 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:31.964 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:31.964 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:31.964 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:31.964 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:31.964 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:31.964 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:31.964 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:31.964 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:31.964 
15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:31.964 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:31.964 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:31.964 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:31.964 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:31.964 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:31.964 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:31.964 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:31.964 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:31.964 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:13:31.964 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:31.964 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:13:31.964 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:31.964 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:31.964 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:31.964 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:31.964 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:31.964 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:31.964 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:31.964 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:31.964 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:13:31.964 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:13:31.964 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:31.964 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:31.964 15:09:23 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:31.964 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:31.964 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:31.964 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:31.964 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:31.964 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:31.964 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:31.964 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:31.964 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:13:31.964 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:38.566 15:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:38.566 15:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:13:38.566 15:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:38.566 15:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:38.566 15:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:38.566 15:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:38.566 15:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:38.566 15:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:13:38.566 15:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@295 -- # 
local -ga net_devs 00:13:38.566 15:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:13:38.566 15:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:13:38.566 15:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:13:38.566 15:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:13:38.566 15:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:13:38.566 15:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:13:38.566 15:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:38.566 15:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:38.566 15:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:38.566 15:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:38.566 15:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:38.566 15:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:38.566 15:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:38.566 15:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:38.566 15:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:38.566 15:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:38.566 15:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:38.566 15:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:38.566 15:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:38.566 15:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:38.566 15:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:38.566 15:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:38.566 15:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:38.566 15:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:38.566 15:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:38.566 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:38.566 15:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:38.566 15:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:38.566 15:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:38.566 15:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:38.566 15:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:38.566 15:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:38.566 15:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:38.566 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:38.566 15:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:38.566 15:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == 
unbound ]] 00:13:38.566 15:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:38.566 15:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:38.566 15:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:38.566 15:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:38.566 15:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:38.566 15:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:38.566 15:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:38.566 15:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:38.566 15:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:38.566 15:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:38.566 15:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:38.566 15:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:38.566 15:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:38.567 15:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:38.567 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:38.567 15:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:38.567 15:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:38.567 15:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:38.567 15:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:38.567 15:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:38.567 15:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:38.567 15:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:38.567 15:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:38.567 15:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:38.567 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:38.567 15:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:38.567 15:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:38.567 15:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:13:38.567 15:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:38.567 15:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:38.567 15:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:38.567 15:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:38.567 15:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:38.567 15:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:38.567 15:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:38.567 15:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:38.567 15:09:30 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:38.567 15:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:38.567 15:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:38.567 15:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:38.567 15:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:38.567 15:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:38.567 15:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:38.567 15:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:38.567 15:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:38.567 15:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:38.567 15:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:38.567 15:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:38.567 15:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:38.567 15:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:38.567 15:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:38.567 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:38.567 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.651 ms 00:13:38.567 00:13:38.567 --- 10.0.0.2 ping statistics --- 00:13:38.567 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:38.567 rtt min/avg/max/mdev = 0.651/0.651/0.651/0.000 ms 00:13:38.567 15:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:38.567 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:38.567 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.346 ms 00:13:38.567 00:13:38.567 --- 10.0.0.1 ping statistics --- 00:13:38.567 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:38.567 rtt min/avg/max/mdev = 0.346/0.346/0.346/0.000 ms 00:13:38.567 15:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:38.567 15:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:13:38.567 15:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:38.567 15:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:38.567 15:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:38.567 15:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:38.567 15:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:38.567 15:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:38.567 15:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:38.828 15:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:13:38.828 15:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:38.828 15:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@724 -- # xtrace_disable 00:13:38.828 15:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:38.828 15:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=183133 00:13:38.828 15:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 183133 00:13:38.829 15:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:38.829 15:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@831 -- # '[' -z 183133 ']' 00:13:38.829 15:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:38.829 15:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:38.829 15:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:38.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:38.829 15:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:38.829 15:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:38.829 [2024-07-25 15:09:30.838760] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:13:38.829 [2024-07-25 15:09:30.838825] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:38.829 EAL: No free 2048 kB hugepages reported on node 1 00:13:38.829 [2024-07-25 15:09:30.909378] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:38.829 [2024-07-25 15:09:30.984642] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:38.829 [2024-07-25 15:09:30.984682] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:38.829 [2024-07-25 15:09:30.984690] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:38.829 [2024-07-25 15:09:30.984696] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:38.829 [2024-07-25 15:09:30.984702] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
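The `nvmf_tcp_init` sequence earlier in the trace (create a namespace, move one port into it, assign 10.0.0.1/24 and 10.0.0.2/24, open TCP port 4420, ping both ways) can be approximated without the CI's two physical E810 ports by using a veth pair. This is a hedged sketch, not the harness's own code: the names `spdk_ns`, `veth_host`, and `veth_tgt` are invented for illustration, and by default the script only prints the commands (set `DRY_RUN=0` and run as root to actually execute them):

```shell
#!/usr/bin/env bash
# Sketch of the nvmf_tcp_init topology from the trace, substituting a
# veth pair for the two physical cvl_0_* ports. All names here
# (spdk_ns, veth_host, veth_tgt) are illustrative assumptions.
set -euo pipefail

DRY_RUN="${DRY_RUN:-1}"   # default: only print the commands
run() {
    if [ "$DRY_RUN" = "1" ]; then
        echo "+ $*"       # dry run: show what would be executed
    else
        "$@"              # real execution requires root
    fi
}

setup_nvmf_netns() {
    local ns=spdk_ns
    run ip netns add "$ns"
    run ip link add veth_host type veth peer name veth_tgt
    run ip link set veth_tgt netns "$ns"                          # target side into the ns
    run ip addr add 10.0.0.1/24 dev veth_host                     # initiator address
    run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev veth_tgt  # target address
    run ip link set veth_host up
    run ip netns exec "$ns" ip link set veth_tgt up
    run ip netns exec "$ns" ip link set lo up
    # mirror the CI's firewall rule admitting NVMe/TCP traffic on port 4420
    run iptables -I INPUT 1 -i veth_host -p tcp --dport 4420 -j ACCEPT
    run ping -c 1 10.0.0.2                                        # connectivity check
}

setup_nvmf_netns
```

With a topology like this in place, the target binary can be started inside the namespace the same way the log does it, i.e. prefixed with `ip netns exec spdk_ns`, so that its listener at 10.0.0.2:4420 is reachable from the host-side initiator interface.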
00:13:38.829 [2024-07-25 15:09:30.984878] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:38.829 [2024-07-25 15:09:30.984992] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:38.829 [2024-07-25 15:09:30.985149] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:38.829 [2024-07-25 15:09:30.985150] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:39.774 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:39.774 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # return 0 00:13:39.774 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:39.774 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:39.774 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:39.774 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:39.774 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:13:39.774 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.774 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:39.774 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.774 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:13:39.774 "tick_rate": 2400000000, 00:13:39.774 "poll_groups": [ 00:13:39.774 { 00:13:39.774 "name": "nvmf_tgt_poll_group_000", 00:13:39.774 "admin_qpairs": 0, 00:13:39.774 "io_qpairs": 0, 00:13:39.774 "current_admin_qpairs": 0, 00:13:39.774 "current_io_qpairs": 0, 00:13:39.774 "pending_bdev_io": 0, 00:13:39.774 "completed_nvme_io": 0, 
00:13:39.774 "transports": [] 00:13:39.774 }, 00:13:39.774 { 00:13:39.774 "name": "nvmf_tgt_poll_group_001", 00:13:39.774 "admin_qpairs": 0, 00:13:39.774 "io_qpairs": 0, 00:13:39.774 "current_admin_qpairs": 0, 00:13:39.774 "current_io_qpairs": 0, 00:13:39.774 "pending_bdev_io": 0, 00:13:39.774 "completed_nvme_io": 0, 00:13:39.774 "transports": [] 00:13:39.774 }, 00:13:39.774 { 00:13:39.774 "name": "nvmf_tgt_poll_group_002", 00:13:39.774 "admin_qpairs": 0, 00:13:39.774 "io_qpairs": 0, 00:13:39.774 "current_admin_qpairs": 0, 00:13:39.774 "current_io_qpairs": 0, 00:13:39.774 "pending_bdev_io": 0, 00:13:39.774 "completed_nvme_io": 0, 00:13:39.774 "transports": [] 00:13:39.774 }, 00:13:39.774 { 00:13:39.774 "name": "nvmf_tgt_poll_group_003", 00:13:39.774 "admin_qpairs": 0, 00:13:39.774 "io_qpairs": 0, 00:13:39.774 "current_admin_qpairs": 0, 00:13:39.774 "current_io_qpairs": 0, 00:13:39.774 "pending_bdev_io": 0, 00:13:39.774 "completed_nvme_io": 0, 00:13:39.774 "transports": [] 00:13:39.774 } 00:13:39.774 ] 00:13:39.774 }' 00:13:39.774 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:13:39.774 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:13:39.774 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:13:39.774 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:13:39.774 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:13:39.774 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:13:39.774 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:13:39.774 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:39.774 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 
-- # xtrace_disable 00:13:39.774 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:39.774 [2024-07-25 15:09:31.782587] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:39.774 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.774 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:13:39.774 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.774 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:39.774 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.774 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:13:39.774 "tick_rate": 2400000000, 00:13:39.774 "poll_groups": [ 00:13:39.774 { 00:13:39.774 "name": "nvmf_tgt_poll_group_000", 00:13:39.774 "admin_qpairs": 0, 00:13:39.774 "io_qpairs": 0, 00:13:39.774 "current_admin_qpairs": 0, 00:13:39.774 "current_io_qpairs": 0, 00:13:39.774 "pending_bdev_io": 0, 00:13:39.774 "completed_nvme_io": 0, 00:13:39.774 "transports": [ 00:13:39.774 { 00:13:39.774 "trtype": "TCP" 00:13:39.774 } 00:13:39.774 ] 00:13:39.774 }, 00:13:39.774 { 00:13:39.774 "name": "nvmf_tgt_poll_group_001", 00:13:39.774 "admin_qpairs": 0, 00:13:39.774 "io_qpairs": 0, 00:13:39.774 "current_admin_qpairs": 0, 00:13:39.774 "current_io_qpairs": 0, 00:13:39.774 "pending_bdev_io": 0, 00:13:39.774 "completed_nvme_io": 0, 00:13:39.774 "transports": [ 00:13:39.774 { 00:13:39.774 "trtype": "TCP" 00:13:39.774 } 00:13:39.774 ] 00:13:39.774 }, 00:13:39.774 { 00:13:39.774 "name": "nvmf_tgt_poll_group_002", 00:13:39.774 "admin_qpairs": 0, 00:13:39.774 "io_qpairs": 0, 00:13:39.774 "current_admin_qpairs": 0, 00:13:39.774 "current_io_qpairs": 0, 00:13:39.774 "pending_bdev_io": 0, 00:13:39.774 "completed_nvme_io": 0, 00:13:39.774 
"transports": [ 00:13:39.774 { 00:13:39.774 "trtype": "TCP" 00:13:39.774 } 00:13:39.774 ] 00:13:39.774 }, 00:13:39.774 { 00:13:39.774 "name": "nvmf_tgt_poll_group_003", 00:13:39.774 "admin_qpairs": 0, 00:13:39.774 "io_qpairs": 0, 00:13:39.774 "current_admin_qpairs": 0, 00:13:39.774 "current_io_qpairs": 0, 00:13:39.774 "pending_bdev_io": 0, 00:13:39.774 "completed_nvme_io": 0, 00:13:39.774 "transports": [ 00:13:39.774 { 00:13:39.774 "trtype": "TCP" 00:13:39.774 } 00:13:39.774 ] 00:13:39.774 } 00:13:39.774 ] 00:13:39.774 }' 00:13:39.774 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:13:39.774 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:39.774 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:39.774 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:39.774 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:13:39.774 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:13:39.774 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:39.774 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:39.774 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:39.774 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:13:39.774 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:13:39.774 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:13:39.774 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:13:39.774 15:09:31 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:13:39.774 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.774 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:39.774 Malloc1 00:13:39.774 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.774 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:39.774 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.774 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:39.774 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.774 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:39.774 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.774 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:39.774 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.774 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:13:39.775 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.775 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:40.036 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.036 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:40.036 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.036 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:40.036 [2024-07-25 15:09:31.974396] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:40.036 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.036 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:13:40.036 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:13:40.036 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:13:40.036 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:13:40.036 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:40.036 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:13:40.036 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:40.036 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:13:40.036 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:40.036 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:13:40.036 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:13:40.036 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:13:40.036 [2024-07-25 15:09:32.001304] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be' 00:13:40.036 Failed to write to /dev/nvme-fabrics: Input/output error 00:13:40.036 could not add new controller: failed to write to nvme-fabrics device 00:13:40.036 15:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:13:40.036 15:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:40.036 15:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:40.036 15:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:40.036 15:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:40.036 15:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.036 15:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:40.036 15:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
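The failure just logged is the expected half of a host allow-list round-trip: with `allow_any_host` disabled, a connect from an unlisted hostnqn is rejected with "does not allow host"; after `nvmf_subsystem_add_host` the same connect succeeds, and (as the trace shows further on) `nvmf_subsystem_remove_host` restores the rejection. The sketch below only prints the RPC invocations rather than issuing them; the `scripts/rpc.py` path is an assumption about where the SPDK rpc client lives, and the hostnqn value is the one seen in this trace:

```shell
#!/usr/bin/env bash
# Sketch of the host allow-list round-trip exercised by target/rpc.sh.
# Commands are echoed, not executed; RPC path is an assumption.
set -euo pipefail

RPC="${RPC:-scripts/rpc.py}"          # SPDK rpc client (assumed path)
SUBNQN=nqn.2016-06.io.spdk:cnode1
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be

allowlist_demo() {
    # reject hosts not on the allow list
    echo "$RPC nvmf_subsystem_allow_any_host -d $SUBNQN"
    # admit exactly one host NQN; connecting with this --hostnqn now succeeds
    echo "$RPC nvmf_subsystem_add_host $SUBNQN $HOSTNQN"
    # revoke it; the next connect fails with 'does not allow host' again
    echo "$RPC nvmf_subsystem_remove_host $SUBNQN $HOSTNQN"
}

allowlist_demo
```

Note that the harness deliberately drives `nvme connect` through its `NOT` wrapper for the rejected attempts, so an I/O error from `/dev/nvme-fabrics` counts as a pass, not a failure.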
00:13:40.036 15:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:13:41.423 15:09:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME
00:13:41.423 15:09:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0
00:13:41.423 15:09:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:13:41.423 15:09:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]]
00:13:41.423 15:09:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2
00:13:43.339 15:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 ))
00:13:43.600 15:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL
00:13:43.600 15:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME
00:13:43.600 15:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1
00:13:43.600 15:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter ))
00:13:43.600 15:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0
00:13:43.600 15:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:13:43.600 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:13:43.600 15:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:13:43.600 15:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0
00:13:43.600 15:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:13:43.600 15:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME
00:13:43.600 15:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:13:43.600 15:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME
00:13:43.600 15:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0
00:13:43.600 15:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:13:43.601 15:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:43.601 15:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:13:43.601 15:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:43.601 15:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:13:43.601 15:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0
00:13:43.601 15:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:13:43.601 15:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme
00:13:43.601 15:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:13:43.601 15:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme
00:13:43.601 15:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:13:43.601 15:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme
00:13:43.601 15:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:13:43.601 15:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme
00:13:43.601 15:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]]
00:13:43.601 15:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:13:43.601 [2024-07-25 15:09:35.728686] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be'
00:13:43.601 Failed to write to /dev/nvme-fabrics: Input/output error
00:13:43.601 could not add new controller: failed to write to nvme-fabrics device
00:13:43.601 15:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1
00:13:43.601 15:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:13:43.601 15:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:13:43.601 15:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:13:43.601 15:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1
00:13:43.601 15:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:43.601 15:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:13:43.601 15:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:43.601 15:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:13:45.518 15:09:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME
00:13:45.518 15:09:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0
00:13:45.518 15:09:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:13:45.518 15:09:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]]
00:13:45.518 15:09:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2
00:13:47.433 15:09:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 ))
00:13:47.433 15:09:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL
00:13:47.433 15:09:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME
00:13:47.433 15:09:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1
00:13:47.433 15:09:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter ))
00:13:47.433 15:09:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0
00:13:47.433 15:09:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:13:47.433 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:13:47.433 15:09:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:13:47.433 15:09:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0
00:13:47.433 15:09:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:13:47.433 15:09:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME
00:13:47.433 15:09:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:13:47.433 15:09:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME
00:13:47.433 15:09:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0
00:13:47.433 15:09:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:13:47.433 15:09:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:47.433 15:09:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:13:47.433 15:09:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:47.433 15:09:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5
00:13:47.433 15:09:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops)
00:13:47.434 15:09:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:13:47.434 15:09:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:47.434 15:09:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:13:47.434 15:09:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:47.434 15:09:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:13:47.434 15:09:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:47.434 15:09:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:13:47.434 [2024-07-25 15:09:39.494715] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:13:47.434 15:09:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:47.434 15:09:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
00:13:47.434 15:09:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:47.434 15:09:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:13:47.434 15:09:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:47.434 15:09:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:13:47.434 15:09:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:47.434 15:09:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:13:47.434 15:09:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:47.434 15:09:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:13:49.349 15:09:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME
00:13:49.349 15:09:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0
00:13:49.349 15:09:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:13:49.349 15:09:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]]
00:13:49.349 15:09:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2
00:13:51.265 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 ))
00:13:51.265 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL
00:13:51.265 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME
00:13:51.265 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1
00:13:51.265 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter ))
00:13:51.265 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0
00:13:51.265 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:13:51.265 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:13:51.265 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:13:51.265 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0
00:13:51.265 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:13:51.265 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME
00:13:51.265 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:13:51.265 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME
00:13:51.265 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0
00:13:51.265 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:13:51.265 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:51.265 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:13:51.265 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:51.266 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:13:51.266 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:51.266 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:13:51.266 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:51.266 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops)
00:13:51.266 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:13:51.266 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:51.266 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:13:51.266 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:51.266 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:13:51.266 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:51.266 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:13:51.266 [2024-07-25 15:09:43.269834] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:13:51.266 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:51.266 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
00:13:51.266 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:51.266 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:13:51.266 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:51.266 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:13:51.266 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:51.266 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:13:51.266 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:51.266 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:13:53.180 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME
00:13:53.180 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0
00:13:53.180 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:13:53.180 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]]
00:13:53.180 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2
00:13:55.135 15:09:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 ))
00:13:55.135 15:09:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL
00:13:55.135 15:09:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME
00:13:55.135 15:09:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1
00:13:55.135 15:09:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter ))
00:13:55.135 15:09:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0
00:13:55.135 15:09:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:13:55.135 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:13:55.135 15:09:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:13:55.135 15:09:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0
00:13:55.135 15:09:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:13:55.135 15:09:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME
00:13:55.135 15:09:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:13:55.135 15:09:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME
00:13:55.135 15:09:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0
00:13:55.135 15:09:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:13:55.135 15:09:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:55.135 15:09:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:13:55.135 15:09:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:55.135 15:09:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:13:55.135 15:09:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:55.135 15:09:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:13:55.135 15:09:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:55.135 15:09:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops)
00:13:55.135 15:09:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:13:55.135 15:09:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:55.135 15:09:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:13:55.135 15:09:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:55.135 15:09:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:13:55.135 15:09:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:55.135 15:09:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:13:55.135 [2024-07-25 15:09:47.042274] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:13:55.135 15:09:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:55.135 15:09:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
00:13:55.135 15:09:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:55.135 15:09:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:13:55.135 15:09:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:55.135 15:09:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:13:55.135 15:09:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:55.135 15:09:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:13:55.135 15:09:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:55.135 15:09:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:13:56.520 15:09:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME
00:13:56.520 15:09:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0
00:13:56.520 15:09:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:13:56.520 15:09:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]]
00:13:56.520 15:09:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2
00:13:58.433 15:09:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 ))
00:13:58.433 15:09:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL
00:13:58.433 15:09:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME
00:13:58.433 15:09:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1
00:13:58.433 15:09:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter ))
00:13:58.433 15:09:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0
00:13:58.433 15:09:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:13:58.694 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:13:58.694 15:09:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:13:58.694 15:09:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0
00:13:58.694 15:09:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:13:58.694 15:09:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME
00:13:58.694 15:09:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:13:58.694 15:09:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME
00:13:58.694 15:09:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0
00:13:58.694 15:09:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:13:58.694 15:09:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:58.694 15:09:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:13:58.694 15:09:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:58.694 15:09:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:13:58.694 15:09:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:58.694 15:09:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:13:58.694 15:09:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:58.694 15:09:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops)
00:13:58.694 15:09:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:13:58.694 15:09:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:58.694 15:09:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:13:58.694 15:09:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:58.694 15:09:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:13:58.694 15:09:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:58.694 15:09:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:13:58.694 [2024-07-25 15:09:50.782057] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:13:58.694 15:09:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:58.694 15:09:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
00:13:58.694 15:09:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:58.694 15:09:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:13:58.694 15:09:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:58.694 15:09:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:13:58.694 15:09:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:58.694 15:09:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:13:58.694 15:09:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:58.694 15:09:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:14:00.609 15:09:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME
00:14:00.609 15:09:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0
00:14:00.609 15:09:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:14:00.609 15:09:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]]
00:14:00.609 15:09:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2
00:14:02.523 15:09:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 ))
00:14:02.523 15:09:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL
00:14:02.523 15:09:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME
00:14:02.523 15:09:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1
00:14:02.523 15:09:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter ))
00:14:02.523 15:09:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0
00:14:02.523 15:09:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:14:02.523 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:14:02.523 15:09:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:14:02.523 15:09:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0
00:14:02.523 15:09:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:14:02.523 15:09:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME
00:14:02.523 15:09:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:14:02.523 15:09:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME
00:14:02.523 15:09:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0
00:14:02.523 15:09:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:14:02.523 15:09:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:02.523 15:09:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:02.523 15:09:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:02.523 15:09:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:14:02.523 15:09:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:02.523 15:09:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:02.523 15:09:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:02.523 15:09:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops)
00:14:02.523 15:09:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:14:02.523 15:09:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:02.523 15:09:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:02.523 15:09:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:02.523 15:09:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:14:02.523 15:09:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:02.523 15:09:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:02.523 [2024-07-25 15:09:54.518445] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:14:02.523 15:09:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:02.523 15:09:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
00:14:02.523 15:09:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:02.523 15:09:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:02.523 15:09:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:02.523 15:09:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:14:02.523 15:09:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:02.523 15:09:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:02.523 15:09:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:02.523 15:09:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:14:04.433 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME
00:14:04.433 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0
00:14:04.433 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:14:04.433 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]]
00:14:04.433 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2
00:14:06.347 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 ))
00:14:06.347 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME
00:14:06.347 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL
00:14:06.347 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1
00:14:06.347 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter ))
00:14:06.347 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0
00:14:06.347 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:14:06.347 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:14:06.347 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:14:06.347 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0
00:14:06.347 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:14:06.347 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME
00:14:06.347 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:14:06.347 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME
00:14:06.347 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0
00:14:06.347 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:14:06.347 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:06.347 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:06.347 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:06.347 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:14:06.347 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:06.347 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:06.347 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:06.347 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5
00:14:06.347 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops)
00:14:06.347 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:14:06.347 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:06.347 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:06.347 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:06.347 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:14:06.347 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:06.347 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:06.347 [2024-07-25 15:09:58.293701] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:14:06.347 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:06.347 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:14:06.347 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:06.347 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.347 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:06.347 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.347 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:06.347 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.347 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:06.347 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.347 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:06.347 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.347 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:06.347 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.347 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:06.347 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.347 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:06.347 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:06.347 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.347 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:06.347 
15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.347 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:06.347 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.347 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:06.347 [2024-07-25 15:09:58.353844] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:06.347 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.347 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:06.347 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.347 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:06.347 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.347 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:06.347 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.347 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:06.347 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.347 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:06.347 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.347 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:14:06.347 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.347 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:06.347 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.347 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:06.347 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.347 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:06.347 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:06.347 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.347 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:06.347 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.347 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:06.348 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.348 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:06.348 [2024-07-25 15:09:58.418031] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:06.348 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.348 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 
00:14:06.348 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.348 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:06.348 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.348 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:06.348 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.348 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:06.348 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.348 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:06.348 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.348 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:06.348 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.348 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:06.348 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.348 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:06.348 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.348 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:06.348 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 
00:14:06.348 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.348 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:06.348 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.348 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:06.348 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.348 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:06.348 [2024-07-25 15:09:58.478243] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:06.348 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.348 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:06.348 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.348 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:06.348 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.348 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:06.348 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.348 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:06.348 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.348 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:06.348 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.348 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:06.348 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.348 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:06.348 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.348 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:06.348 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.348 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:06.348 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:06.348 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.348 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:06.348 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.348 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:06.348 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.348 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:06.609 [2024-07-25 15:09:58.542437] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:06.609 15:09:58 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.609 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:06.609 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.609 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:06.609 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.609 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:06.609 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.609 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:06.609 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.609 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:06.609 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.609 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:06.609 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.609 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:06.609 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.609 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:06.609 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.609 15:09:58 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:14:06.609 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.609 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:06.609 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.609 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:14:06.609 "tick_rate": 2400000000, 00:14:06.609 "poll_groups": [ 00:14:06.609 { 00:14:06.609 "name": "nvmf_tgt_poll_group_000", 00:14:06.609 "admin_qpairs": 0, 00:14:06.609 "io_qpairs": 224, 00:14:06.609 "current_admin_qpairs": 0, 00:14:06.609 "current_io_qpairs": 0, 00:14:06.609 "pending_bdev_io": 0, 00:14:06.609 "completed_nvme_io": 469, 00:14:06.609 "transports": [ 00:14:06.609 { 00:14:06.609 "trtype": "TCP" 00:14:06.609 } 00:14:06.609 ] 00:14:06.609 }, 00:14:06.609 { 00:14:06.609 "name": "nvmf_tgt_poll_group_001", 00:14:06.609 "admin_qpairs": 1, 00:14:06.609 "io_qpairs": 223, 00:14:06.609 "current_admin_qpairs": 0, 00:14:06.609 "current_io_qpairs": 0, 00:14:06.609 "pending_bdev_io": 0, 00:14:06.609 "completed_nvme_io": 276, 00:14:06.609 "transports": [ 00:14:06.609 { 00:14:06.609 "trtype": "TCP" 00:14:06.609 } 00:14:06.609 ] 00:14:06.609 }, 00:14:06.609 { 00:14:06.609 "name": "nvmf_tgt_poll_group_002", 00:14:06.609 "admin_qpairs": 6, 00:14:06.609 "io_qpairs": 218, 00:14:06.609 "current_admin_qpairs": 0, 00:14:06.609 "current_io_qpairs": 0, 00:14:06.609 "pending_bdev_io": 0, 00:14:06.609 "completed_nvme_io": 221, 00:14:06.609 "transports": [ 00:14:06.609 { 00:14:06.609 "trtype": "TCP" 00:14:06.609 } 00:14:06.609 ] 00:14:06.609 }, 00:14:06.609 { 00:14:06.609 "name": "nvmf_tgt_poll_group_003", 00:14:06.609 "admin_qpairs": 0, 00:14:06.609 "io_qpairs": 224, 00:14:06.609 "current_admin_qpairs": 0, 00:14:06.610 "current_io_qpairs": 0, 00:14:06.610 "pending_bdev_io": 0, 
00:14:06.610 "completed_nvme_io": 273, 00:14:06.610 "transports": [ 00:14:06.610 { 00:14:06.610 "trtype": "TCP" 00:14:06.610 } 00:14:06.610 ] 00:14:06.610 } 00:14:06.610 ] 00:14:06.610 }' 00:14:06.610 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:14:06.610 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:14:06.610 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:14:06.610 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:06.610 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:14:06.610 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:14:06.610 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:14:06.610 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:14:06.610 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:06.610 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 889 > 0 )) 00:14:06.610 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:14:06.610 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:14:06.610 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:14:06.610 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:06.610 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:14:06.610 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:06.610 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@120 -- 
# set +e 00:14:06.610 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:06.610 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:06.610 rmmod nvme_tcp 00:14:06.610 rmmod nvme_fabrics 00:14:06.610 rmmod nvme_keyring 00:14:06.610 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:06.610 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:14:06.610 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:14:06.610 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 183133 ']' 00:14:06.610 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 183133 00:14:06.610 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@950 -- # '[' -z 183133 ']' 00:14:06.610 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # kill -0 183133 00:14:06.610 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # uname 00:14:06.610 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:06.610 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 183133 00:14:06.870 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:06.870 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:06.870 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 183133' 00:14:06.870 killing process with pid 183133 00:14:06.870 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@969 -- # kill 183133 00:14:06.870 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@974 -- # wait 183133 00:14:06.870 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:06.870 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:06.870 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:06.870 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:06.870 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:06.870 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:06.870 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:06.870 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:09.415 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:09.415 00:14:09.415 real 0m37.441s 00:14:09.415 user 1m53.873s 00:14:09.415 sys 0m7.082s 00:14:09.415 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:09.415 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:09.415 ************************************ 00:14:09.415 END TEST nvmf_rpc 00:14:09.415 ************************************ 00:14:09.415 15:10:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:14:09.415 15:10:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:09.415 15:10:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:09.415 15:10:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # 
set +x 00:14:09.415 ************************************ 00:14:09.415 START TEST nvmf_invalid 00:14:09.415 ************************************ 00:14:09.415 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:14:09.415 * Looking for test storage... 00:14:09.415 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:09.415 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:09.415 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:14:09.415 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:09.415 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:09.415 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:09.415 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:09.415 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:09.415 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:09.415 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:09.415 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:09.415 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:09.415 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:09.415 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:09.415 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:09.415 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:09.415 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:09.415 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:09.415 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:09.415 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:09.415 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:09.415 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:09.415 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:09.415 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.415 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.415 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.415 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:14:09.415 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.415 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:14:09.415 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:09.415 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:09.415 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:09.415 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:09.415 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:09.415 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:09.415 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:09.415 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:09.415 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:14:09.415 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:09.415 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:14:09.415 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:14:09.415 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:14:09.415 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:14:09.415 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:09.415 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:09.415 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:09.415 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:09.415 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:09.415 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:09.415 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:09.415 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:09.415 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:09.415 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:09.415 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:14:09.415 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:16.020 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:16.020 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:14:16.020 15:10:08 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:16.020 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:16.020 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:16.020 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:16.020 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:16.020 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:14:16.020 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:16.020 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:14:16.020 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:14:16.020 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:14:16.020 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:14:16.020 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:14:16.020 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:14:16.020 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:16.020 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:16.020 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:16.020 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:16.020 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:16.020 
15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:16.020 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:16.020 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:16.020 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:16.020 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:16.020 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:16.020 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:16.020 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:16.020 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:16.020 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:16.020 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:16.020 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:16.020 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:16.020 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:16.020 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:16.020 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:16.020 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:16.020 15:10:08 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:16.020 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:16.020 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:16.020 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:16.020 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:16.020 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:16.020 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:16.020 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:16.020 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:16.020 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:16.020 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:16.020 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:16.020 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:16.020 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:16.020 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:16.020 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:16.020 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:16.020 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:16.020 
15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:16.020 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:16.020 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:16.020 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:16.020 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:16.020 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:16.020 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:16.020 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:16.020 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:16.020 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:16.020 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:16.021 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:16.021 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:16.021 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:16.021 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:16.021 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:16.021 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:16.021 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 
00:14:16.021 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:16.021 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:16.021 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:16.021 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:16.021 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:16.021 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:16.021 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:16.021 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:16.021 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:16.021 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:16.021 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:16.021 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:16.021 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:16.021 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:16.021 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:16.021 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:16.021 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 
dev cvl_0_1 00:14:16.021 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:16.021 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:16.283 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:16.283 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:16.283 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:16.283 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:16.283 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:16.283 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.701 ms 00:14:16.283 00:14:16.283 --- 10.0.0.2 ping statistics --- 00:14:16.283 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:16.283 rtt min/avg/max/mdev = 0.701/0.701/0.701/0.000 ms 00:14:16.283 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:16.283 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:16.283 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.417 ms 00:14:16.283 00:14:16.283 --- 10.0.0.1 ping statistics --- 00:14:16.283 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:16.283 rtt min/avg/max/mdev = 0.417/0.417/0.417/0.000 ms 00:14:16.283 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:16.283 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:14:16.283 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:16.283 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:16.283 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:16.283 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:16.283 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:16.283 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:16.283 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:16.283 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:14:16.283 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:16.283 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:16.283 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:16.283 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=192753 00:14:16.283 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 192753 00:14:16.283 15:10:08 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:16.283 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@831 -- # '[' -z 192753 ']' 00:14:16.283 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:16.283 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:16.283 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:16.283 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:16.283 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:16.283 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:16.283 [2024-07-25 15:10:08.466440] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:14:16.283 [2024-07-25 15:10:08.466505] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:16.563 EAL: No free 2048 kB hugepages reported on node 1 00:14:16.563 [2024-07-25 15:10:08.539780] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:16.563 [2024-07-25 15:10:08.615610] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:16.563 [2024-07-25 15:10:08.615650] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:16.563 [2024-07-25 15:10:08.615658] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:16.563 [2024-07-25 15:10:08.615668] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:16.563 [2024-07-25 15:10:08.615674] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:16.563 [2024-07-25 15:10:08.615820] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:16.563 [2024-07-25 15:10:08.615940] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:16.563 [2024-07-25 15:10:08.616099] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:16.563 [2024-07-25 15:10:08.616100] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:17.157 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:17.157 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # return 0 00:14:17.157 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:17.157 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:17.157 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:17.157 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:17.157 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:14:17.157 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode31839 00:14:17.418 [2024-07-25 15:10:09.438543] 
nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:14:17.418 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:14:17.418 { 00:14:17.418 "nqn": "nqn.2016-06.io.spdk:cnode31839", 00:14:17.418 "tgt_name": "foobar", 00:14:17.418 "method": "nvmf_create_subsystem", 00:14:17.418 "req_id": 1 00:14:17.418 } 00:14:17.418 Got JSON-RPC error response 00:14:17.418 response: 00:14:17.418 { 00:14:17.418 "code": -32603, 00:14:17.418 "message": "Unable to find target foobar" 00:14:17.418 }' 00:14:17.418 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:14:17.418 { 00:14:17.418 "nqn": "nqn.2016-06.io.spdk:cnode31839", 00:14:17.418 "tgt_name": "foobar", 00:14:17.418 "method": "nvmf_create_subsystem", 00:14:17.418 "req_id": 1 00:14:17.418 } 00:14:17.418 Got JSON-RPC error response 00:14:17.418 response: 00:14:17.418 { 00:14:17.418 "code": -32603, 00:14:17.418 "message": "Unable to find target foobar" 00:14:17.418 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:14:17.418 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:14:17.418 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode23510 00:14:17.679 [2024-07-25 15:10:09.611128] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23510: invalid serial number 'SPDKISFASTANDAWESOME' 00:14:17.679 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:14:17.679 { 00:14:17.679 "nqn": "nqn.2016-06.io.spdk:cnode23510", 00:14:17.679 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:14:17.679 "method": "nvmf_create_subsystem", 00:14:17.679 "req_id": 1 00:14:17.679 } 00:14:17.679 Got JSON-RPC error response 00:14:17.679 response: 
00:14:17.679 { 00:14:17.679 "code": -32602, 00:14:17.679 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:14:17.679 }' 00:14:17.679 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:14:17.679 { 00:14:17.679 "nqn": "nqn.2016-06.io.spdk:cnode23510", 00:14:17.679 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:14:17.679 "method": "nvmf_create_subsystem", 00:14:17.679 "req_id": 1 00:14:17.679 } 00:14:17.679 Got JSON-RPC error response 00:14:17.679 response: 00:14:17.679 { 00:14:17.679 "code": -32602, 00:14:17.679 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:14:17.679 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:14:17.679 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:14:17.679 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode25642 00:14:17.679 [2024-07-25 15:10:09.783736] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode25642: invalid model number 'SPDK_Controller' 00:14:17.679 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:14:17.679 { 00:14:17.679 "nqn": "nqn.2016-06.io.spdk:cnode25642", 00:14:17.679 "model_number": "SPDK_Controller\u001f", 00:14:17.679 "method": "nvmf_create_subsystem", 00:14:17.679 "req_id": 1 00:14:17.679 } 00:14:17.679 Got JSON-RPC error response 00:14:17.679 response: 00:14:17.679 { 00:14:17.679 "code": -32602, 00:14:17.679 "message": "Invalid MN SPDK_Controller\u001f" 00:14:17.679 }' 00:14:17.679 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:14:17.679 { 00:14:17.679 "nqn": "nqn.2016-06.io.spdk:cnode25642", 00:14:17.679 "model_number": "SPDK_Controller\u001f", 00:14:17.679 "method": "nvmf_create_subsystem", 00:14:17.679 "req_id": 1 00:14:17.679 } 
00:14:17.679 Got JSON-RPC error response 00:14:17.679 response: 00:14:17.679 { 00:14:17.679 "code": -32602, 00:14:17.679 "message": "Invalid MN SPDK_Controller\u001f" 00:14:17.679 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:14:17.679 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:14:17.679 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:14:17.679 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:14:17.679 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:14:17.679 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:14:17.679 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:14:17.679 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:17.679 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:14:17.679 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:14:17.679 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:14:17.679 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:17.679 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:17.679 15:10:09 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:14:17.679 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:14:17.679 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:14:17.679 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:17.679 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:17.679 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:14:17.679 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:14:17.679 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:14:17.679 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:17.679 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:17.679 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:14:17.679 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:14:17.679 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:14:17.679 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:17.679 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:17.679 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:14:17.679 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:14:17.679 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:14:17.679 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:17.679 15:10:09 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:17.679 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:14:17.679 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:14:17.679 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 00:14:17.679 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:17.679 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:17.680 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:14:17.680 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:14:17.941 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:14:17.941 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:17.941 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:17.941 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:14:17.941 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:14:17.941 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:14:17.942 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:17.942 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:17.942 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:14:17.942 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:14:17.942 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:14:17.942 15:10:09 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24-25 -- # [per-character gen_random_s trace condensed: each iteration runs (( ll++ )); (( ll < length )); printf %x <code>; echo -e '\x<hex>'; string+=<char>, appending T, y, `, a, G, y, ., p, a, N in turn] 00:14:17.942 15:10:09
00:14:17.942 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:14:17.942 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:17.942 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:17.942 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:14:17.942 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:14:17.942 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:14:17.942 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:17.942 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:17.942 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ L == \- ]] 00:14:17.942 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'Lxtcl!)W,Ty`aGy.paN96' 00:14:17.942 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'Lxtcl!)W,Ty`aGy.paN96' nqn.2016-06.io.spdk:cnode5122 00:14:17.942 [2024-07-25 15:10:10.116781] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5122: invalid serial number 'Lxtcl!)W,Ty`aGy.paN96' 00:14:18.205 15:10:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:14:18.205 { 00:14:18.205 "nqn": "nqn.2016-06.io.spdk:cnode5122", 00:14:18.205 "serial_number": "Lxtcl!)W,Ty`aGy.paN96", 00:14:18.205 "method": "nvmf_create_subsystem", 00:14:18.205 "req_id": 1 00:14:18.205 } 00:14:18.205 Got JSON-RPC error response 00:14:18.205 response: 00:14:18.205 { 00:14:18.205 "code": -32602, 00:14:18.205 "message": "Invalid SN Lxtcl!)W,Ty`aGy.paN96" 00:14:18.205 }' 00:14:18.205 15:10:10 
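The per-character trace above is the body of the random-string generator in target/invalid.sh (gen_random_s, per the `@19`/`@24`/`@25` markers). A minimal standalone sketch of the same pattern; the function name is taken from the log, while the character-code range here is narrowed to 33-126 as an assumption, so that command substitution cannot strip whitespace characters:

```shell
#!/usr/bin/env bash
# Build a random printable string one character at a time, mirroring the
# printf %x -> echo -e '\xNN' -> string+= steps seen in the trace above.
gen_random_s() {
    local length=$1 ll code hex string=''
    for (( ll = 0; ll < length; ll++ )); do
        code=$(( RANDOM % 94 + 33 ))   # printable, non-space ASCII (assumption)
        hex=$(printf %x "$code")       # e.g. 84 -> 54
        string+=$(echo -e "\\x$hex")   # hex escape back to one character
    done
    printf '%s\n' "$string"
}

s=$(gen_random_s 21)   # 21 characters, the length used for the serial number
echo "${#s}"           # → 21
```

invalid.sh then passes the generated string to `rpc.py nvmf_create_subsystem -s` and expects the target to reject it with the "Invalid SN" error shown in the log.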
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:14:18.205 { 00:14:18.205 "nqn": "nqn.2016-06.io.spdk:cnode5122", 00:14:18.205 "serial_number": "Lxtcl!)W,Ty`aGy.paN96", 00:14:18.205 "method": "nvmf_create_subsystem", 00:14:18.205 "req_id": 1 00:14:18.205 } 00:14:18.205 Got JSON-RPC error response 00:14:18.205 response: 00:14:18.205 { 00:14:18.205 "code": -32602, 00:14:18.205 "message": "Invalid SN Lxtcl!)W,Ty`aGy.paN96" 00:14:18.205 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:14:18.205 15:10:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:14:18.205 15:10:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:14:18.205 15:10:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:14:18.205 15:10:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:14:18.205 15:10:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:14:18.205 15:10:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:14:18.205 15:10:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.205 15:10:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:14:18.205 15:10:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:14:18.205 15:10:10 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24-25 -- # [per-character gen_random_s trace condensed: 41 iterations of (( ll++ )); (( ll < length )); printf %x <code>; echo -e '\x<hex>'; string+=<char>, building '0X4uy7!.r|BppK)d=N|:`Y\7K`U#m!F,*\r_cr;83' one character at a time] 00:14:18.468 15:10:10
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ 0 == \- ]] 00:14:18.468 15:10:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '0X4uy7!.r|BppK)d=N|:`Y\7K`U#m!F,*\r_cr;83' 00:14:18.468 15:10:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '0X4uy7!.r|BppK)d=N|:`Y\7K`U#m!F,*\r_cr;83' nqn.2016-06.io.spdk:cnode10705 00:14:18.468 [2024-07-25 15:10:10.598322] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode10705: invalid model number '0X4uy7!.r|BppK)d=N|:`Y\7K`U#m!F,*\r_cr;83' 00:14:18.468 15:10:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:14:18.468 { 00:14:18.468 "nqn": "nqn.2016-06.io.spdk:cnode10705", 00:14:18.468 "model_number": "0X4uy7!.r|BppK)d=N|:`Y\\7K`U#m!F,*\\r_cr;83", 00:14:18.468 "method": "nvmf_create_subsystem", 00:14:18.468 "req_id": 1 00:14:18.468 } 00:14:18.468 Got JSON-RPC error response 00:14:18.468 response: 00:14:18.468 { 00:14:18.468 "code": -32602, 00:14:18.468 "message": "Invalid MN 0X4uy7!.r|BppK)d=N|:`Y\\7K`U#m!F,*\\r_cr;83" 00:14:18.468 }' 00:14:18.468 15:10:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:14:18.468 { 00:14:18.468 "nqn": "nqn.2016-06.io.spdk:cnode10705", 00:14:18.468 "model_number": "0X4uy7!.r|BppK)d=N|:`Y\\7K`U#m!F,*\\r_cr;83", 00:14:18.468 "method": "nvmf_create_subsystem", 00:14:18.468 "req_id": 1 00:14:18.468 } 00:14:18.468 Got JSON-RPC error response 00:14:18.468 response: 00:14:18.468 { 00:14:18.469 "code": -32602, 00:14:18.469 "message": "Invalid MN 0X4uy7!.r|BppK)d=N|:`Y\\7K`U#m!F,*\\r_cr;83" 00:14:18.469 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:14:18.469 15:10:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:14:18.728 
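The model-number rejection above is asserted with a bash glob match (invalid.sh@59). A stubbed sketch of that check, with the JSON-RPC error text hard-coded since rpc.py and a live nvmf target are assumed unavailable here:

```shell
#!/usr/bin/env bash
# Stubbed JSON-RPC error, shaped like the rpc.py output in the log (abridged).
out='Got JSON-RPC error response
response:
{
  "code": -32602,
  "message": "Invalid MN 0X4uy7!.r|BppK)d=N|:`Y\\7K`U#m!F,*\\r_cr;83"
}'

# invalid.sh@59 pattern: treat the failure as expected only if the message
# names the model number ("MN") as the invalid field.
if [[ $out == *"Invalid MN"* ]]; then
    echo "model number correctly rejected"
fi
```

The serial-number case earlier in the log uses the same shape with `*Invalid SN*` (invalid.sh@55); only the matched substring differs.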
[2024-07-25 15:10:10.770968] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:18.728 15:10:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:14:18.990 15:10:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:14:18.990 15:10:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:14:18.990 15:10:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:14:18.990 15:10:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:14:18.990 15:10:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:14:18.990 [2024-07-25 15:10:11.124053] nvmf_rpc.c: 809:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:14:18.990 15:10:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:14:18.990 { 00:14:18.990 "nqn": "nqn.2016-06.io.spdk:cnode", 00:14:18.990 "listen_address": { 00:14:18.990 "trtype": "tcp", 00:14:18.990 "traddr": "", 00:14:18.990 "trsvcid": "4421" 00:14:18.990 }, 00:14:18.990 "method": "nvmf_subsystem_remove_listener", 00:14:18.990 "req_id": 1 00:14:18.990 } 00:14:18.990 Got JSON-RPC error response 00:14:18.990 response: 00:14:18.990 { 00:14:18.990 "code": -32602, 00:14:18.990 "message": "Invalid parameters" 00:14:18.990 }' 00:14:18.990 15:10:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:14:18.990 { 00:14:18.990 "nqn": "nqn.2016-06.io.spdk:cnode", 00:14:18.990 "listen_address": { 00:14:18.990 "trtype": "tcp", 00:14:18.990 "traddr": "", 00:14:18.990 "trsvcid": "4421" 00:14:18.990 }, 00:14:18.990 "method": 
"nvmf_subsystem_remove_listener", 00:14:18.990 "req_id": 1 00:14:18.990 } 00:14:18.990 Got JSON-RPC error response 00:14:18.990 response: 00:14:18.990 { 00:14:18.990 "code": -32602, 00:14:18.990 "message": "Invalid parameters" 00:14:18.990 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:14:18.990 15:10:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode29798 -i 0 00:14:19.252 [2024-07-25 15:10:11.296579] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode29798: invalid cntlid range [0-65519] 00:14:19.252 15:10:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:14:19.252 { 00:14:19.252 "nqn": "nqn.2016-06.io.spdk:cnode29798", 00:14:19.252 "min_cntlid": 0, 00:14:19.252 "method": "nvmf_create_subsystem", 00:14:19.252 "req_id": 1 00:14:19.252 } 00:14:19.252 Got JSON-RPC error response 00:14:19.252 response: 00:14:19.252 { 00:14:19.252 "code": -32602, 00:14:19.252 "message": "Invalid cntlid range [0-65519]" 00:14:19.252 }' 00:14:19.252 15:10:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:14:19.252 { 00:14:19.252 "nqn": "nqn.2016-06.io.spdk:cnode29798", 00:14:19.252 "min_cntlid": 0, 00:14:19.252 "method": "nvmf_create_subsystem", 00:14:19.252 "req_id": 1 00:14:19.252 } 00:14:19.252 Got JSON-RPC error response 00:14:19.252 response: 00:14:19.252 { 00:14:19.252 "code": -32602, 00:14:19.252 "message": "Invalid cntlid range [0-65519]" 00:14:19.252 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:19.252 15:10:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode31521 -i 65520 00:14:19.513 [2024-07-25 15:10:11.473144] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem 
nqn.2016-06.io.spdk:cnode31521: invalid cntlid range [65520-65519] 00:14:19.513 15:10:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:14:19.513 { 00:14:19.513 "nqn": "nqn.2016-06.io.spdk:cnode31521", 00:14:19.513 "min_cntlid": 65520, 00:14:19.513 "method": "nvmf_create_subsystem", 00:14:19.513 "req_id": 1 00:14:19.513 } 00:14:19.513 Got JSON-RPC error response 00:14:19.513 response: 00:14:19.513 { 00:14:19.513 "code": -32602, 00:14:19.513 "message": "Invalid cntlid range [65520-65519]" 00:14:19.513 }' 00:14:19.513 15:10:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:14:19.513 { 00:14:19.513 "nqn": "nqn.2016-06.io.spdk:cnode31521", 00:14:19.513 "min_cntlid": 65520, 00:14:19.513 "method": "nvmf_create_subsystem", 00:14:19.513 "req_id": 1 00:14:19.513 } 00:14:19.513 Got JSON-RPC error response 00:14:19.513 response: 00:14:19.513 { 00:14:19.513 "code": -32602, 00:14:19.513 "message": "Invalid cntlid range [65520-65519]" 00:14:19.513 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:19.513 15:10:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20889 -I 0 00:14:19.513 [2024-07-25 15:10:11.645699] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20889: invalid cntlid range [1-0] 00:14:19.513 15:10:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:14:19.513 { 00:14:19.513 "nqn": "nqn.2016-06.io.spdk:cnode20889", 00:14:19.513 "max_cntlid": 0, 00:14:19.513 "method": "nvmf_create_subsystem", 00:14:19.513 "req_id": 1 00:14:19.513 } 00:14:19.513 Got JSON-RPC error response 00:14:19.513 response: 00:14:19.513 { 00:14:19.513 "code": -32602, 00:14:19.513 "message": "Invalid cntlid range [1-0]" 00:14:19.513 }' 00:14:19.513 15:10:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@78 -- # [[ request: 00:14:19.513 { 00:14:19.513 "nqn": "nqn.2016-06.io.spdk:cnode20889", 00:14:19.513 "max_cntlid": 0, 00:14:19.513 "method": "nvmf_create_subsystem", 00:14:19.513 "req_id": 1 00:14:19.513 } 00:14:19.513 Got JSON-RPC error response 00:14:19.513 response: 00:14:19.513 { 00:14:19.513 "code": -32602, 00:14:19.513 "message": "Invalid cntlid range [1-0]" 00:14:19.513 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:19.513 15:10:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode15503 -I 65520 00:14:19.775 [2024-07-25 15:10:11.818239] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15503: invalid cntlid range [1-65520] 00:14:19.775 15:10:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:14:19.775 { 00:14:19.775 "nqn": "nqn.2016-06.io.spdk:cnode15503", 00:14:19.775 "max_cntlid": 65520, 00:14:19.775 "method": "nvmf_create_subsystem", 00:14:19.775 "req_id": 1 00:14:19.775 } 00:14:19.775 Got JSON-RPC error response 00:14:19.775 response: 00:14:19.775 { 00:14:19.775 "code": -32602, 00:14:19.775 "message": "Invalid cntlid range [1-65520]" 00:14:19.775 }' 00:14:19.775 15:10:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:14:19.775 { 00:14:19.775 "nqn": "nqn.2016-06.io.spdk:cnode15503", 00:14:19.775 "max_cntlid": 65520, 00:14:19.775 "method": "nvmf_create_subsystem", 00:14:19.775 "req_id": 1 00:14:19.775 } 00:14:19.775 Got JSON-RPC error response 00:14:19.775 response: 00:14:19.775 { 00:14:19.775 "code": -32602, 00:14:19.775 "message": "Invalid cntlid range [1-65520]" 00:14:19.775 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:19.775 15:10:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode30570 -i 6 -I 5 00:14:20.036 [2024-07-25 15:10:11.990793] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode30570: invalid cntlid range [6-5] 00:14:20.036 15:10:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:14:20.036 { 00:14:20.036 "nqn": "nqn.2016-06.io.spdk:cnode30570", 00:14:20.036 "min_cntlid": 6, 00:14:20.036 "max_cntlid": 5, 00:14:20.036 "method": "nvmf_create_subsystem", 00:14:20.036 "req_id": 1 00:14:20.036 } 00:14:20.036 Got JSON-RPC error response 00:14:20.036 response: 00:14:20.036 { 00:14:20.036 "code": -32602, 00:14:20.036 "message": "Invalid cntlid range [6-5]" 00:14:20.036 }' 00:14:20.036 15:10:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:14:20.037 { 00:14:20.037 "nqn": "nqn.2016-06.io.spdk:cnode30570", 00:14:20.037 "min_cntlid": 6, 00:14:20.037 "max_cntlid": 5, 00:14:20.037 "method": "nvmf_create_subsystem", 00:14:20.037 "req_id": 1 00:14:20.037 } 00:14:20.037 Got JSON-RPC error response 00:14:20.037 response: 00:14:20.037 { 00:14:20.037 "code": -32602, 00:14:20.037 "message": "Invalid cntlid range [6-5]" 00:14:20.037 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:20.037 15:10:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:14:20.037 15:10:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:14:20.037 { 00:14:20.037 "name": "foobar", 00:14:20.037 "method": "nvmf_delete_target", 00:14:20.037 "req_id": 1 00:14:20.037 } 00:14:20.037 Got JSON-RPC error response 00:14:20.037 response: 00:14:20.037 { 00:14:20.037 "code": -32602, 00:14:20.037 "message": "The specified target doesn'\''t exist, cannot delete it." 
00:14:20.037 }' 00:14:20.037 15:10:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:14:20.037 { 00:14:20.037 "name": "foobar", 00:14:20.037 "method": "nvmf_delete_target", 00:14:20.037 "req_id": 1 00:14:20.037 } 00:14:20.037 Got JSON-RPC error response 00:14:20.037 response: 00:14:20.037 { 00:14:20.037 "code": -32602, 00:14:20.037 "message": "The specified target doesn't exist, cannot delete it." 00:14:20.037 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:14:20.037 15:10:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:14:20.037 15:10:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:14:20.037 15:10:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:20.037 15:10:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:14:20.037 15:10:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:20.037 15:10:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:14:20.037 15:10:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:20.037 15:10:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:20.037 rmmod nvme_tcp 00:14:20.037 rmmod nvme_fabrics 00:14:20.037 rmmod nvme_keyring 00:14:20.037 15:10:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:20.037 15:10:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:14:20.037 15:10:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:14:20.037 15:10:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 192753 ']' 00:14:20.037 15:10:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@490 -- # killprocess 192753 00:14:20.037 15:10:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@950 -- # '[' -z 192753 ']' 00:14:20.037 15:10:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # kill -0 192753 00:14:20.037 15:10:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # uname 00:14:20.037 15:10:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:20.037 15:10:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 192753 00:14:20.298 15:10:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:20.298 15:10:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:20.298 15:10:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 192753' 00:14:20.298 killing process with pid 192753 00:14:20.298 15:10:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@969 -- # kill 192753 00:14:20.298 15:10:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@974 -- # wait 192753 00:14:20.298 15:10:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:20.298 15:10:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:20.299 15:10:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:20.299 15:10:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:20.299 15:10:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:20.299 15:10:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:20.299 
15:10:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:20.299 15:10:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:22.851 15:10:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:22.851 00:14:22.851 real 0m13.325s 00:14:22.851 user 0m19.206s 00:14:22.851 sys 0m6.303s 00:14:22.851 15:10:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:22.851 15:10:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:22.851 ************************************ 00:14:22.851 END TEST nvmf_invalid 00:14:22.851 ************************************ 00:14:22.851 15:10:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:14:22.851 15:10:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:22.851 15:10:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:22.851 15:10:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:22.851 ************************************ 00:14:22.851 START TEST nvmf_connect_stress 00:14:22.851 ************************************ 00:14:22.851 15:10:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:14:22.851 * Looking for test storage... 
00:14:22.851 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:22.851 15:10:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:22.851 15:10:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:14:22.851 15:10:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:22.851 15:10:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:22.851 15:10:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:22.851 15:10:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:22.851 15:10:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:22.851 15:10:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:22.851 15:10:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:22.851 15:10:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:22.851 15:10:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:22.851 15:10:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:22.851 15:10:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:22.851 15:10:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:22.851 15:10:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:22.851 15:10:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:22.851 15:10:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:22.852 15:10:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:22.852 15:10:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:22.852 15:10:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:22.852 15:10:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:22.852 15:10:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:22.852 15:10:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:22.852 15:10:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:22.852 15:10:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:22.852 15:10:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:14:22.852 15:10:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:22.852 15:10:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:14:22.852 15:10:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:22.852 15:10:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:22.852 15:10:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:22.852 15:10:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:22.852 15:10:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:22.852 15:10:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:22.852 15:10:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:22.852 15:10:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:22.852 15:10:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:14:22.852 15:10:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:22.852 15:10:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM 
EXIT 00:14:22.852 15:10:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:22.852 15:10:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:22.852 15:10:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:22.852 15:10:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:22.852 15:10:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:22.852 15:10:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:22.852 15:10:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:22.852 15:10:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:22.852 15:10:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:14:22.852 15:10:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:29.446 15:10:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:29.446 15:10:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:14:29.446 15:10:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:29.446 15:10:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:29.446 15:10:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:29.446 15:10:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:29.446 15:10:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:14:29.446 15:10:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:14:29.446 15:10:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:29.446 15:10:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:14:29.446 15:10:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:14:29.446 15:10:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:14:29.446 15:10:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:14:29.447 15:10:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:14:29.447 15:10:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:14:29.447 15:10:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:29.447 15:10:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:29.447 15:10:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:29.447 15:10:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:29.447 15:10:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:29.447 15:10:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:29.447 15:10:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:29.447 15:10:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:29.447 15:10:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:29.447 15:10:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:29.447 15:10:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:29.447 15:10:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:29.447 15:10:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:29.447 15:10:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:29.447 15:10:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:29.447 15:10:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:29.447 15:10:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:29.447 15:10:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:29.447 15:10:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:29.447 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:29.447 15:10:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:29.447 15:10:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:29.447 15:10:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:29.447 15:10:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:29.447 15:10:21 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:29.447 15:10:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:29.447 15:10:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:29.447 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:29.447 15:10:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:29.447 15:10:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:29.447 15:10:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:29.447 15:10:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:29.447 15:10:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:29.447 15:10:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:29.447 15:10:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:29.447 15:10:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:29.447 15:10:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:29.447 15:10:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:29.447 15:10:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:29.447 15:10:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:29.447 15:10:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:29.447 15:10:21 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:29.447 15:10:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:29.447 15:10:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:29.447 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:29.447 15:10:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:29.447 15:10:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:29.447 15:10:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:29.447 15:10:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:29.447 15:10:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:29.447 15:10:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:29.447 15:10:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:29.447 15:10:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:29.447 15:10:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:29.447 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:29.447 15:10:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:29.447 15:10:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:29.447 15:10:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:14:29.447 
15:10:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:29.447 15:10:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:29.447 15:10:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:29.447 15:10:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:29.447 15:10:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:29.447 15:10:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:29.447 15:10:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:29.447 15:10:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:29.447 15:10:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:29.447 15:10:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:29.447 15:10:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:29.447 15:10:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:29.447 15:10:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:29.447 15:10:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:29.447 15:10:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:29.447 15:10:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:29.447 
15:10:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:29.447 15:10:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:29.447 15:10:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:29.447 15:10:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:29.709 15:10:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:29.709 15:10:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:29.709 15:10:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:29.709 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:29.709 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.627 ms 00:14:29.709 00:14:29.709 --- 10.0.0.2 ping statistics --- 00:14:29.709 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:29.709 rtt min/avg/max/mdev = 0.627/0.627/0.627/0.000 ms 00:14:29.709 15:10:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:29.709 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:29.709 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.348 ms 00:14:29.709 00:14:29.709 --- 10.0.0.1 ping statistics --- 00:14:29.709 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:29.709 rtt min/avg/max/mdev = 0.348/0.348/0.348/0.000 ms 00:14:29.709 15:10:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:29.709 15:10:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:14:29.709 15:10:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:29.709 15:10:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:29.709 15:10:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:29.709 15:10:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:29.709 15:10:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:29.709 15:10:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:29.709 15:10:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:29.709 15:10:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:14:29.709 15:10:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:29.709 15:10:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:29.709 15:10:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:29.709 15:10:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=197825 00:14:29.709 15:10:21 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 197825 00:14:29.709 15:10:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:14:29.709 15:10:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@831 -- # '[' -z 197825 ']' 00:14:29.709 15:10:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:29.709 15:10:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:29.709 15:10:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:29.709 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:29.709 15:10:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:29.709 15:10:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:29.709 [2024-07-25 15:10:21.851459] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:14:29.709 [2024-07-25 15:10:21.851508] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:29.709 EAL: No free 2048 kB hugepages reported on node 1 00:14:29.970 [2024-07-25 15:10:21.934277] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:29.970 [2024-07-25 15:10:22.011435] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:14:29.970 [2024-07-25 15:10:22.011488] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:29.970 [2024-07-25 15:10:22.011495] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:29.970 [2024-07-25 15:10:22.011503] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:29.970 [2024-07-25 15:10:22.011509] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:29.970 [2024-07-25 15:10:22.011637] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:29.970 [2024-07-25 15:10:22.011805] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:29.970 [2024-07-25 15:10:22.011806] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:30.543 15:10:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:30.543 15:10:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # return 0 00:14:30.543 15:10:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:30.543 15:10:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:30.543 15:10:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:30.543 15:10:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:30.543 15:10:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:30.543 15:10:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.543 15:10:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@10 -- # set +x 00:14:30.543 [2024-07-25 15:10:22.678982] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:30.543 15:10:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.543 15:10:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:30.543 15:10:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.543 15:10:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:30.543 15:10:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.543 15:10:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:30.543 15:10:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.543 15:10:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:30.543 [2024-07-25 15:10:22.714153] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:30.543 15:10:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.543 15:10:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:30.543 15:10:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.543 15:10:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:30.543 NULL1 00:14:30.543 15:10:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.543 15:10:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=198146 00:14:30.543 15:10:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:30.543 15:10:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:14:30.543 15:10:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:30.804 15:10:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:14:30.804 15:10:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:30.804 15:10:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:30.804 15:10:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:30.804 15:10:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:30.804 15:10:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:30.804 15:10:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:30.804 15:10:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:30.804 15:10:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:30.804 15:10:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # 
for i in $(seq 1 20) 00:14:30.804 15:10:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:30.804 EAL: No free 2048 kB hugepages reported on node 1 00:14:30.804 15:10:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:30.804 15:10:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:30.804 15:10:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:30.804 15:10:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:30.804 15:10:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:30.804 15:10:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:30.804 15:10:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:30.804 15:10:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:30.804 15:10:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:30.804 15:10:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:30.804 15:10:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:30.804 15:10:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:30.804 15:10:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:30.804 15:10:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:30.804 15:10:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:30.804 15:10:22 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:30.804 15:10:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:30.804 15:10:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:30.805 15:10:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:30.805 15:10:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:30.805 15:10:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:30.805 15:10:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:30.805 15:10:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:30.805 15:10:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:30.805 15:10:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:30.805 15:10:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:30.805 15:10:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:30.805 15:10:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:30.805 15:10:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:30.805 15:10:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:30.805 15:10:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 198146 00:14:30.805 15:10:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:30.805 15:10:22 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.805 15:10:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:31.066 15:10:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.066 15:10:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 198146 00:14:31.066 15:10:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:31.066 15:10:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.066 15:10:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:31.327 15:10:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.327 15:10:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 198146 00:14:31.327 15:10:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:31.327 15:10:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.327 15:10:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:31.897 15:10:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.897 15:10:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 198146 00:14:31.897 15:10:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:31.897 15:10:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.897 15:10:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:32.159 15:10:24 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.159 15:10:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 198146 00:14:32.159 15:10:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:32.159 15:10:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.159 15:10:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:32.420 15:10:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.420 15:10:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 198146 00:14:32.420 15:10:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:32.420 15:10:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.420 15:10:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:32.681 15:10:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.681 15:10:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 198146 00:14:32.681 15:10:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:32.681 15:10:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.681 15:10:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:32.942 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.942 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 198146 00:14:32.943 
15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:32.943 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.943 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:33.514 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.514 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 198146 00:14:33.514 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:33.514 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.514 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:33.775 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.775 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 198146 00:14:33.775 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:33.775 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.775 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:34.037 15:10:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.037 15:10:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 198146 00:14:34.037 15:10:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:34.037 15:10:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.037 
15:10:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:34.299 15:10:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.299 15:10:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 198146 00:14:34.299 15:10:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:34.299 15:10:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.299 15:10:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:34.559 15:10:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.559 15:10:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 198146 00:14:34.559 15:10:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:34.559 15:10:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.559 15:10:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:35.131 15:10:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.131 15:10:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 198146 00:14:35.131 15:10:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:35.131 15:10:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.131 15:10:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:35.392 15:10:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.392 
15:10:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 198146 00:14:35.392 15:10:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:35.392 15:10:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.392 15:10:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:35.719 15:10:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.719 15:10:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 198146 00:14:35.719 15:10:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:35.719 15:10:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.719 15:10:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:35.980 15:10:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.980 15:10:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 198146 00:14:35.980 15:10:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:35.980 15:10:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.980 15:10:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:36.241 15:10:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.241 15:10:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 198146 00:14:36.241 15:10:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:36.241 
15:10:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.241 15:10:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:36.502 15:10:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.502 15:10:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 198146 00:14:36.502 15:10:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:36.502 15:10:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.502 15:10:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:37.074 15:10:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.074 15:10:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 198146 00:14:37.074 15:10:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:37.074 15:10:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.074 15:10:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:37.335 15:10:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.335 15:10:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 198146 00:14:37.335 15:10:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:37.335 15:10:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.335 15:10:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:37.595 
15:10:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.595 15:10:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 198146 00:14:37.595 15:10:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:37.595 15:10:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.595 15:10:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:37.855 15:10:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.855 15:10:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 198146 00:14:37.855 15:10:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:37.855 15:10:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.855 15:10:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:38.428 15:10:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.428 15:10:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 198146 00:14:38.428 15:10:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:38.428 15:10:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.428 15:10:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:38.690 15:10:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.690 15:10:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 198146 
00:14:38.690 15:10:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:38.690 15:10:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.690 15:10:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:38.949 15:10:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.949 15:10:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 198146 00:14:38.949 15:10:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:38.949 15:10:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.949 15:10:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:39.209 15:10:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.209 15:10:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 198146 00:14:39.210 15:10:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:39.210 15:10:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.210 15:10:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:39.470 15:10:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.470 15:10:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 198146 00:14:39.470 15:10:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:39.470 15:10:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 
00:14:39.470 15:10:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:40.040 15:10:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.040 15:10:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 198146 00:14:40.040 15:10:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:40.040 15:10:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.040 15:10:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:40.301 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.301 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 198146 00:14:40.301 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:40.301 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.301 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:40.562 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.562 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 198146 00:14:40.562 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:40.562 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.562 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:40.823 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:40.823 15:10:32 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.823 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 198146 00:14:40.823 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (198146) - No such process 00:14:40.823 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 198146 00:14:40.823 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:40.823 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:14:40.823 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:14:40.823 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:40.823 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:14:40.823 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:40.823 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:14:40.823 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:40.823 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:40.823 rmmod nvme_tcp 00:14:40.823 rmmod nvme_fabrics 00:14:40.823 rmmod nvme_keyring 00:14:40.823 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:40.823 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:14:40.823 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 
00:14:40.823 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 197825 ']' 00:14:40.823 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 197825 00:14:40.823 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@950 -- # '[' -z 197825 ']' 00:14:40.823 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # kill -0 197825 00:14:40.823 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # uname 00:14:40.823 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:40.824 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 197825 00:14:41.085 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:14:41.085 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:14:41.085 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 197825' 00:14:41.085 killing process with pid 197825 00:14:41.085 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@969 -- # kill 197825 00:14:41.085 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@974 -- # wait 197825 00:14:41.085 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:41.085 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:41.085 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:41.085 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ 
cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:41.085 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:41.085 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:41.085 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:41.085 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:43.630 15:10:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:43.630 00:14:43.630 real 0m20.699s 00:14:43.630 user 0m42.206s 00:14:43.630 sys 0m8.449s 00:14:43.630 15:10:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:43.630 15:10:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:43.630 ************************************ 00:14:43.630 END TEST nvmf_connect_stress 00:14:43.630 ************************************ 00:14:43.630 15:10:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:43.630 15:10:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:43.630 15:10:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:43.630 15:10:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:43.630 ************************************ 00:14:43.630 START TEST nvmf_fused_ordering 00:14:43.630 ************************************ 00:14:43.630 15:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:43.630 * Looking for test storage... 00:14:43.630 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:43.630 15:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:43.630 15:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:14:43.630 15:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:43.630 15:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:43.630 15:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:43.630 15:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:43.630 15:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:43.630 15:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:43.630 15:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:43.630 15:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:43.630 15:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:43.630 15:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:43.630 15:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:43.630 15:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # 
NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:43.630 15:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:43.630 15:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:43.631 15:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:43.631 15:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:43.631 15:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:43.631 15:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:43.631 15:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:43.631 15:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:43.631 15:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:43.631 15:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:43.631 15:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:43.631 15:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:14:43.631 15:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:43.631 15:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:14:43.631 15:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:43.631 15:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:43.631 15:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:43.631 15:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:43.631 15:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:43.631 15:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:43.631 15:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:43.631 15:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:43.631 15:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:14:43.631 15:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:43.631 15:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM 
EXIT 00:14:43.631 15:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:43.631 15:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:43.631 15:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:43.631 15:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:43.631 15:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:43.631 15:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:43.631 15:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:43.631 15:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:43.631 15:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:14:43.631 15:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:50.222 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:50.222 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:14:50.222 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:50.223 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:50.223 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:50.223 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:50.223 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:14:50.223 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:14:50.223 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:50.223 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:14:50.223 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:14:50.223 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:14:50.223 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:14:50.223 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:14:50.223 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:14:50.223 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:50.223 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:50.223 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:50.223 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:50.223 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:50.223 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:50.223 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:50.223 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:50.223 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:50.223 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:50.223 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:50.223 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:50.223 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:50.223 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:50.223 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:50.223 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:50.223 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:50.223 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:50.223 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:50.223 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:50.223 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:50.223 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:50.223 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:50.223 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:50.223 15:10:41 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:50.223 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:50.223 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:50.223 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:50.223 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:50.223 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:50.223 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:50.223 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:50.223 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:50.223 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:50.223 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:50.223 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:50.223 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:50.223 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:50.223 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:50.223 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:50.223 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:50.223 15:10:41 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:50.223 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:50.223 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:50.223 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:50.223 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:50.223 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:50.223 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:50.223 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:50.223 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:50.223 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:50.223 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:50.223 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:50.223 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:50.223 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:50.223 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:50.223 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:50.223 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:14:50.223 
15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:50.223 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:50.223 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:50.223 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:50.223 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:50.223 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:50.223 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:50.223 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:50.223 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:50.223 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:50.223 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:50.223 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:50.223 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:50.223 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:50.223 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:50.223 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:50.223 
15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:50.223 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:50.223 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:50.223 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:50.223 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:50.223 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:50.223 15:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:50.223 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:50.223 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.502 ms 00:14:50.223 00:14:50.223 --- 10.0.0.2 ping statistics --- 00:14:50.223 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:50.223 rtt min/avg/max/mdev = 0.502/0.502/0.502/0.000 ms 00:14:50.223 15:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:50.223 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:50.223 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.393 ms 00:14:50.223 00:14:50.223 --- 10.0.0.1 ping statistics --- 00:14:50.223 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:50.224 rtt min/avg/max/mdev = 0.393/0.393/0.393/0.000 ms 00:14:50.224 15:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:50.224 15:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:14:50.224 15:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:50.224 15:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:50.224 15:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:50.224 15:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:50.224 15:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:50.224 15:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:50.224 15:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:50.224 15:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:14:50.224 15:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:50.224 15:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:50.224 15:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:50.224 15:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=204194 00:14:50.224 15:10:42 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 204194 00:14:50.224 15:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # '[' -z 204194 ']' 00:14:50.224 15:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:50.224 15:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:50.224 15:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:50.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:50.224 15:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:50.224 15:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:50.224 15:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:50.224 [2024-07-25 15:10:42.124347] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:14:50.224 [2024-07-25 15:10:42.124399] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:50.224 EAL: No free 2048 kB hugepages reported on node 1 00:14:50.224 [2024-07-25 15:10:42.206178] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:50.224 [2024-07-25 15:10:42.291548] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:14:50.224 [2024-07-25 15:10:42.291604] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:50.224 [2024-07-25 15:10:42.291613] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:50.224 [2024-07-25 15:10:42.291620] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:50.224 [2024-07-25 15:10:42.291626] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:50.224 [2024-07-25 15:10:42.291650] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:50.797 15:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:50.797 15:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # return 0 00:14:50.797 15:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:50.797 15:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:50.797 15:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:50.797 15:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:50.797 15:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:50.797 15:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.797 15:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:50.797 [2024-07-25 15:10:42.947232] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:50.797 15:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.797 15:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:50.797 15:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.797 15:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:50.797 15:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.797 15:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:50.797 15:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.797 15:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:50.797 [2024-07-25 15:10:42.963486] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:50.797 15:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.797 15:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:50.797 15:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.797 15:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:50.797 NULL1 00:14:50.797 15:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.797 15:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:14:50.797 15:10:42 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.797 15:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:50.797 15:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.797 15:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:14:50.797 15:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.797 15:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:51.059 15:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.060 15:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:14:51.060 [2024-07-25 15:10:43.020531] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:14:51.060 [2024-07-25 15:10:43.020574] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid204247 ] 00:14:51.060 EAL: No free 2048 kB hugepages reported on node 1 00:14:51.633 Attached to nqn.2016-06.io.spdk:cnode1 00:14:51.633 Namespace ID: 1 size: 1GB 00:14:51.633 fused_ordering(0) 00:14:51.633 fused_ordering(1) 00:14:51.633 fused_ordering(2) 00:14:51.633 fused_ordering(3) 00:14:51.633 fused_ordering(4) 00:14:51.633 fused_ordering(5) 00:14:51.633 fused_ordering(6) 00:14:51.633 fused_ordering(7) 00:14:51.633 fused_ordering(8) 00:14:51.633 fused_ordering(9) 00:14:51.633 fused_ordering(10) 00:14:51.633 fused_ordering(11) 00:14:51.633 fused_ordering(12) 00:14:51.633 fused_ordering(13) 00:14:51.633 fused_ordering(14) 00:14:51.633 fused_ordering(15) 00:14:51.633 fused_ordering(16) 00:14:51.633 fused_ordering(17) 00:14:51.633 fused_ordering(18) 00:14:51.633 fused_ordering(19) 00:14:51.633 fused_ordering(20) 00:14:51.633 fused_ordering(21) 00:14:51.633 fused_ordering(22) 00:14:51.633 fused_ordering(23) 00:14:51.633 fused_ordering(24) 00:14:51.633 fused_ordering(25) 00:14:51.633 fused_ordering(26) 00:14:51.633 fused_ordering(27) 00:14:51.633 fused_ordering(28) 00:14:51.633 fused_ordering(29) 00:14:51.633 fused_ordering(30) 00:14:51.633 fused_ordering(31) 00:14:51.633 fused_ordering(32) 00:14:51.633 fused_ordering(33) 00:14:51.633 fused_ordering(34) 00:14:51.633 fused_ordering(35) 00:14:51.633 fused_ordering(36) 00:14:51.633 fused_ordering(37) 00:14:51.633 fused_ordering(38) 00:14:51.633 fused_ordering(39) 00:14:51.633 fused_ordering(40) 00:14:51.633 fused_ordering(41) 00:14:51.633 fused_ordering(42) 00:14:51.633 fused_ordering(43) 00:14:51.633 fused_ordering(44) 00:14:51.633 fused_ordering(45) 00:14:51.633 fused_ordering(46) 00:14:51.633 fused_ordering(47) 00:14:51.633 
fused_ordering(48) 00:14:51.633 fused_ordering(49) 00:14:51.633 fused_ordering(50) 00:14:51.633 fused_ordering(51) 00:14:51.633 fused_ordering(52) 00:14:51.633 fused_ordering(53) 00:14:51.633 fused_ordering(54) 00:14:51.633 fused_ordering(55) 00:14:51.633 fused_ordering(56) 00:14:51.633 fused_ordering(57) 00:14:51.633 fused_ordering(58) 00:14:51.633 fused_ordering(59) 00:14:51.633 fused_ordering(60) 00:14:51.633 fused_ordering(61) 00:14:51.633 fused_ordering(62) 00:14:51.633 fused_ordering(63) 00:14:51.633 fused_ordering(64) 00:14:51.633 fused_ordering(65) 00:14:51.633 fused_ordering(66) 00:14:51.633 fused_ordering(67) 00:14:51.633 fused_ordering(68) 00:14:51.633 fused_ordering(69) 00:14:51.633 fused_ordering(70) 00:14:51.633 fused_ordering(71) 00:14:51.633 fused_ordering(72) 00:14:51.633 fused_ordering(73) 00:14:51.633 fused_ordering(74) 00:14:51.633 fused_ordering(75) 00:14:51.633 fused_ordering(76) 00:14:51.633 fused_ordering(77) 00:14:51.633 fused_ordering(78) 00:14:51.633 fused_ordering(79) 00:14:51.633 fused_ordering(80) 00:14:51.633 fused_ordering(81) 00:14:51.633 fused_ordering(82) 00:14:51.633 fused_ordering(83) 00:14:51.633 fused_ordering(84) 00:14:51.633 fused_ordering(85) 00:14:51.633 fused_ordering(86) 00:14:51.633 fused_ordering(87) 00:14:51.633 fused_ordering(88) 00:14:51.633 fused_ordering(89) 00:14:51.633 fused_ordering(90) 00:14:51.633 fused_ordering(91) 00:14:51.633 fused_ordering(92) 00:14:51.633 fused_ordering(93) 00:14:51.633 fused_ordering(94) 00:14:51.633 fused_ordering(95) 00:14:51.633 fused_ordering(96) 00:14:51.633 fused_ordering(97) 00:14:51.633 fused_ordering(98) 00:14:51.633 fused_ordering(99) 00:14:51.633 fused_ordering(100) 00:14:51.633 fused_ordering(101) 00:14:51.633 fused_ordering(102) 00:14:51.633 fused_ordering(103) 00:14:51.633 fused_ordering(104) 00:14:51.633 fused_ordering(105) 00:14:51.633 fused_ordering(106) 00:14:51.633 fused_ordering(107) 00:14:51.633 fused_ordering(108) 00:14:51.633 fused_ordering(109) 00:14:51.633 
fused_ordering(110) 00:14:51.633 fused_ordering(111) 00:14:51.633 fused_ordering(112) 00:14:51.633 fused_ordering(113) 00:14:51.633 fused_ordering(114) 00:14:51.633 fused_ordering(115) 00:14:51.633 fused_ordering(116) 00:14:51.633 fused_ordering(117) 00:14:51.633 fused_ordering(118) 00:14:51.633 fused_ordering(119) 00:14:51.633 fused_ordering(120) 00:14:51.633 fused_ordering(121) 00:14:51.633 fused_ordering(122) 00:14:51.633 fused_ordering(123) 00:14:51.633 fused_ordering(124) 00:14:51.633 fused_ordering(125) 00:14:51.633 fused_ordering(126) 00:14:51.633 fused_ordering(127) 00:14:51.633 fused_ordering(128) 00:14:51.633 fused_ordering(129) 00:14:51.633 fused_ordering(130) 00:14:51.633 fused_ordering(131) 00:14:51.633 fused_ordering(132) 00:14:51.633 fused_ordering(133) 00:14:51.633 fused_ordering(134) 00:14:51.633 fused_ordering(135) 00:14:51.633 fused_ordering(136) 00:14:51.633 fused_ordering(137) 00:14:51.633 fused_ordering(138) 00:14:51.633 fused_ordering(139) 00:14:51.633 fused_ordering(140) 00:14:51.633 fused_ordering(141) 00:14:51.633 fused_ordering(142) 00:14:51.633 fused_ordering(143) 00:14:51.633 fused_ordering(144) 00:14:51.633 fused_ordering(145) 00:14:51.633 fused_ordering(146) 00:14:51.633 fused_ordering(147) 00:14:51.633 fused_ordering(148) 00:14:51.633 fused_ordering(149) 00:14:51.633 fused_ordering(150) 00:14:51.633 fused_ordering(151) 00:14:51.633 fused_ordering(152) 00:14:51.633 fused_ordering(153) 00:14:51.633 fused_ordering(154) 00:14:51.633 fused_ordering(155) 00:14:51.633 fused_ordering(156) 00:14:51.633 fused_ordering(157) 00:14:51.633 fused_ordering(158) 00:14:51.633 fused_ordering(159) 00:14:51.633 fused_ordering(160) 00:14:51.633 fused_ordering(161) 00:14:51.633 fused_ordering(162) 00:14:51.633 fused_ordering(163) 00:14:51.633 fused_ordering(164) 00:14:51.633 fused_ordering(165) 00:14:51.633 fused_ordering(166) 00:14:51.633 fused_ordering(167) 00:14:51.633 fused_ordering(168) 00:14:51.633 fused_ordering(169) 00:14:51.633 fused_ordering(170) 
00:14:51.633 fused_ordering(171) 00:14:51.633 fused_ordering(172) 00:14:51.633 fused_ordering(173) 00:14:51.633 fused_ordering(174) 00:14:51.633 fused_ordering(175) 00:14:51.633 fused_ordering(176) 00:14:51.633 fused_ordering(177) 00:14:51.633 fused_ordering(178) 00:14:51.633 fused_ordering(179) 00:14:51.633 fused_ordering(180) 00:14:51.633 fused_ordering(181) 00:14:51.633 fused_ordering(182) 00:14:51.633 fused_ordering(183) 00:14:51.633 fused_ordering(184) 00:14:51.633 fused_ordering(185) 00:14:51.633 fused_ordering(186) 00:14:51.633 fused_ordering(187) 00:14:51.633 fused_ordering(188) 00:14:51.633 fused_ordering(189) 00:14:51.633 fused_ordering(190) 00:14:51.633 fused_ordering(191) 00:14:51.633 fused_ordering(192) 00:14:51.633 fused_ordering(193) 00:14:51.633 fused_ordering(194) 00:14:51.633 fused_ordering(195) 00:14:51.633 fused_ordering(196) 00:14:51.633 fused_ordering(197) 00:14:51.633 fused_ordering(198) 00:14:51.633 fused_ordering(199) 00:14:51.633 fused_ordering(200) 00:14:51.633 fused_ordering(201) 00:14:51.633 fused_ordering(202) 00:14:51.633 fused_ordering(203) 00:14:51.633 fused_ordering(204) 00:14:51.633 fused_ordering(205) 00:14:52.205 fused_ordering(206) 00:14:52.205 fused_ordering(207) 00:14:52.205 fused_ordering(208) 00:14:52.205 fused_ordering(209) 00:14:52.205 fused_ordering(210) 00:14:52.205 fused_ordering(211) 00:14:52.205 fused_ordering(212) 00:14:52.205 fused_ordering(213) 00:14:52.205 fused_ordering(214) 00:14:52.205 fused_ordering(215) 00:14:52.205 fused_ordering(216) 00:14:52.205 fused_ordering(217) 00:14:52.205 fused_ordering(218) 00:14:52.205 fused_ordering(219) 00:14:52.205 fused_ordering(220) 00:14:52.205 fused_ordering(221) 00:14:52.205 fused_ordering(222) 00:14:52.205 fused_ordering(223) 00:14:52.205 fused_ordering(224) 00:14:52.205 fused_ordering(225) 00:14:52.205 fused_ordering(226) 00:14:52.205 fused_ordering(227) 00:14:52.205 fused_ordering(228) 00:14:52.205 fused_ordering(229) 00:14:52.205 fused_ordering(230) 00:14:52.205 
fused_ordering(231) 00:14:52.205 fused_ordering(232) 00:14:52.205 fused_ordering(233) 00:14:52.205 fused_ordering(234) 00:14:52.205 fused_ordering(235) 00:14:52.205 fused_ordering(236) 00:14:52.205 fused_ordering(237) 00:14:52.205 fused_ordering(238) 00:14:52.205 fused_ordering(239) 00:14:52.205 fused_ordering(240) 00:14:52.205 fused_ordering(241) 00:14:52.205 fused_ordering(242) 00:14:52.205 fused_ordering(243) 00:14:52.205 fused_ordering(244) 00:14:52.205 fused_ordering(245) 00:14:52.205 fused_ordering(246) 00:14:52.205 fused_ordering(247) 00:14:52.205 fused_ordering(248) 00:14:52.205 fused_ordering(249) 00:14:52.205 fused_ordering(250) 00:14:52.205 fused_ordering(251) 00:14:52.205 fused_ordering(252) 00:14:52.205 fused_ordering(253) 00:14:52.205 fused_ordering(254) 00:14:52.205 fused_ordering(255) 00:14:52.205 fused_ordering(256) 00:14:52.206 fused_ordering(257) 00:14:52.206 fused_ordering(258) 00:14:52.206 fused_ordering(259) 00:14:52.206 fused_ordering(260) 00:14:52.206 fused_ordering(261) 00:14:52.206 fused_ordering(262) 00:14:52.206 fused_ordering(263) 00:14:52.206 fused_ordering(264) 00:14:52.206 fused_ordering(265) 00:14:52.206 fused_ordering(266) 00:14:52.206 fused_ordering(267) 00:14:52.206 fused_ordering(268) 00:14:52.206 fused_ordering(269) 00:14:52.206 fused_ordering(270) 00:14:52.206 fused_ordering(271) 00:14:52.206 fused_ordering(272) 00:14:52.206 fused_ordering(273) 00:14:52.206 fused_ordering(274) 00:14:52.206 fused_ordering(275) 00:14:52.206 fused_ordering(276) 00:14:52.206 fused_ordering(277) 00:14:52.206 fused_ordering(278) 00:14:52.206 fused_ordering(279) 00:14:52.206 fused_ordering(280) 00:14:52.206 fused_ordering(281) 00:14:52.206 fused_ordering(282) 00:14:52.206 fused_ordering(283) 00:14:52.206 fused_ordering(284) 00:14:52.206 fused_ordering(285) 00:14:52.206 fused_ordering(286) 00:14:52.206 fused_ordering(287) 00:14:52.206 fused_ordering(288) 00:14:52.206 fused_ordering(289) 00:14:52.206 fused_ordering(290) 00:14:52.206 fused_ordering(291) 
00:14:52.206 fused_ordering(292) 00:14:52.206 fused_ordering(293) 00:14:52.206 fused_ordering(294) 00:14:52.206 fused_ordering(295) 00:14:52.206 fused_ordering(296) 00:14:52.206 fused_ordering(297) 00:14:52.206 fused_ordering(298) 00:14:52.206 fused_ordering(299) 00:14:52.206 fused_ordering(300) 00:14:52.206 fused_ordering(301) 00:14:52.206 fused_ordering(302) 00:14:52.206 fused_ordering(303) 00:14:52.206 fused_ordering(304) 00:14:52.206 fused_ordering(305) 00:14:52.206 fused_ordering(306) 00:14:52.206 fused_ordering(307) 00:14:52.206 fused_ordering(308) 00:14:52.206 fused_ordering(309) 00:14:52.206 fused_ordering(310) 00:14:52.206 fused_ordering(311) 00:14:52.206 fused_ordering(312) 00:14:52.206 fused_ordering(313) 00:14:52.206 fused_ordering(314) 00:14:52.206 fused_ordering(315) 00:14:52.206 fused_ordering(316) 00:14:52.206 fused_ordering(317) 00:14:52.206 fused_ordering(318) 00:14:52.206 fused_ordering(319) 00:14:52.206 fused_ordering(320) 00:14:52.206 fused_ordering(321) 00:14:52.206 fused_ordering(322) 00:14:52.206 fused_ordering(323) 00:14:52.206 fused_ordering(324) 00:14:52.206 fused_ordering(325) 00:14:52.206 fused_ordering(326) 00:14:52.206 fused_ordering(327) 00:14:52.206 fused_ordering(328) 00:14:52.206 fused_ordering(329) 00:14:52.206 fused_ordering(330) 00:14:52.206 fused_ordering(331) 00:14:52.206 fused_ordering(332) 00:14:52.206 fused_ordering(333) 00:14:52.206 fused_ordering(334) 00:14:52.206 fused_ordering(335) 00:14:52.206 fused_ordering(336) 00:14:52.206 fused_ordering(337) 00:14:52.206 fused_ordering(338) 00:14:52.206 fused_ordering(339) 00:14:52.206 fused_ordering(340) 00:14:52.206 fused_ordering(341) 00:14:52.206 fused_ordering(342) 00:14:52.206 fused_ordering(343) 00:14:52.206 fused_ordering(344) 00:14:52.206 fused_ordering(345) 00:14:52.206 fused_ordering(346) 00:14:52.206 fused_ordering(347) 00:14:52.206 fused_ordering(348) 00:14:52.206 fused_ordering(349) 00:14:52.206 fused_ordering(350) 00:14:52.206 fused_ordering(351) 00:14:52.206 
fused_ordering(352) 00:14:52.206 fused_ordering(353) 00:14:52.206 fused_ordering(354) 00:14:52.206 fused_ordering(355) 00:14:52.206 fused_ordering(356) 00:14:52.206 fused_ordering(357) 00:14:52.206 fused_ordering(358) 00:14:52.206 fused_ordering(359) 00:14:52.206 fused_ordering(360) 00:14:52.206 fused_ordering(361) 00:14:52.206 fused_ordering(362) 00:14:52.206 fused_ordering(363) 00:14:52.206 fused_ordering(364) 00:14:52.206 fused_ordering(365) 00:14:52.206 fused_ordering(366) 00:14:52.206 fused_ordering(367) 00:14:52.206 fused_ordering(368) 00:14:52.206 fused_ordering(369) 00:14:52.206 fused_ordering(370) 00:14:52.206 fused_ordering(371) 00:14:52.206 fused_ordering(372) 00:14:52.206 fused_ordering(373) 00:14:52.206 fused_ordering(374) 00:14:52.206 fused_ordering(375) 00:14:52.206 fused_ordering(376) 00:14:52.206 fused_ordering(377) 00:14:52.206 fused_ordering(378) 00:14:52.206 fused_ordering(379) 00:14:52.206 fused_ordering(380) 00:14:52.206 fused_ordering(381) 00:14:52.206 fused_ordering(382) 00:14:52.206 fused_ordering(383) 00:14:52.206 fused_ordering(384) 00:14:52.206 fused_ordering(385) 00:14:52.206 fused_ordering(386) 00:14:52.206 fused_ordering(387) 00:14:52.206 fused_ordering(388) 00:14:52.206 fused_ordering(389) 00:14:52.206 fused_ordering(390) 00:14:52.206 fused_ordering(391) 00:14:52.206 fused_ordering(392) 00:14:52.206 fused_ordering(393) 00:14:52.206 fused_ordering(394) 00:14:52.206 fused_ordering(395) 00:14:52.206 fused_ordering(396) 00:14:52.206 fused_ordering(397) 00:14:52.206 fused_ordering(398) 00:14:52.206 fused_ordering(399) 00:14:52.206 fused_ordering(400) 00:14:52.206 fused_ordering(401) 00:14:52.206 fused_ordering(402) 00:14:52.206 fused_ordering(403) 00:14:52.206 fused_ordering(404) 00:14:52.206 fused_ordering(405) 00:14:52.206 fused_ordering(406) 00:14:52.206 fused_ordering(407) 00:14:52.206 fused_ordering(408) 00:14:52.206 fused_ordering(409) 00:14:52.206 fused_ordering(410) 00:14:52.778 fused_ordering(411) 00:14:52.778 fused_ordering(412) 
00:14:52.778 fused_ordering(413) 00:14:52.778 fused_ordering(414) 00:14:52.778 fused_ordering(415) 00:14:52.778 fused_ordering(416) 00:14:52.778 fused_ordering(417) 00:14:52.778 fused_ordering(418) 00:14:52.778 fused_ordering(419) 00:14:52.778 fused_ordering(420) 00:14:52.778 fused_ordering(421) 00:14:52.778 fused_ordering(422) 00:14:52.778 fused_ordering(423) 00:14:52.778 fused_ordering(424) 00:14:52.778 fused_ordering(425) 00:14:52.778 fused_ordering(426) 00:14:52.778 fused_ordering(427) 00:14:52.778 fused_ordering(428) 00:14:52.778 fused_ordering(429) 00:14:52.778 fused_ordering(430) 00:14:52.778 fused_ordering(431) 00:14:52.778 fused_ordering(432) 00:14:52.778 fused_ordering(433) 00:14:52.778 fused_ordering(434) 00:14:52.778 fused_ordering(435) 00:14:52.778 fused_ordering(436) 00:14:52.778 fused_ordering(437) 00:14:52.778 fused_ordering(438) 00:14:52.778 fused_ordering(439) 00:14:52.778 fused_ordering(440) 00:14:52.778 fused_ordering(441) 00:14:52.778 fused_ordering(442) 00:14:52.778 fused_ordering(443) 00:14:52.778 fused_ordering(444) 00:14:52.778 fused_ordering(445) 00:14:52.778 fused_ordering(446) 00:14:52.778 fused_ordering(447) 00:14:52.778 fused_ordering(448) 00:14:52.778 fused_ordering(449) 00:14:52.778 fused_ordering(450) 00:14:52.778 fused_ordering(451) 00:14:52.778 fused_ordering(452) 00:14:52.778 fused_ordering(453) 00:14:52.778 fused_ordering(454) 00:14:52.778 fused_ordering(455) 00:14:52.778 fused_ordering(456) 00:14:52.778 fused_ordering(457) 00:14:52.778 fused_ordering(458) 00:14:52.778 fused_ordering(459) 00:14:52.778 fused_ordering(460) 00:14:52.778 fused_ordering(461) 00:14:52.778 fused_ordering(462) 00:14:52.778 fused_ordering(463) 00:14:52.778 fused_ordering(464) 00:14:52.778 fused_ordering(465) 00:14:52.778 fused_ordering(466) 00:14:52.778 fused_ordering(467) 00:14:52.778 fused_ordering(468) 00:14:52.778 fused_ordering(469) 00:14:52.778 fused_ordering(470) 00:14:52.778 fused_ordering(471) 00:14:52.778 fused_ordering(472) 00:14:52.778 
fused_ordering(473) 00:14:52.778 fused_ordering(474) 00:14:52.778 fused_ordering(475) 00:14:52.778 fused_ordering(476) 00:14:52.778 fused_ordering(477) 00:14:52.778 fused_ordering(478) 00:14:52.778 fused_ordering(479) 00:14:52.778 fused_ordering(480) 00:14:52.778 fused_ordering(481) 00:14:52.778 fused_ordering(482) 00:14:52.778 fused_ordering(483) 00:14:52.779 fused_ordering(484) 00:14:52.779 fused_ordering(485) 00:14:52.779 fused_ordering(486) 00:14:52.779 fused_ordering(487) 00:14:52.779 fused_ordering(488) 00:14:52.779 fused_ordering(489) 00:14:52.779 fused_ordering(490) 00:14:52.779 fused_ordering(491) 00:14:52.779 fused_ordering(492) 00:14:52.779 fused_ordering(493) 00:14:52.779 fused_ordering(494) 00:14:52.779 fused_ordering(495) 00:14:52.779 fused_ordering(496) 00:14:52.779 fused_ordering(497) 00:14:52.779 fused_ordering(498) 00:14:52.779 fused_ordering(499) 00:14:52.779 fused_ordering(500) 00:14:52.779 fused_ordering(501) 00:14:52.779 fused_ordering(502) 00:14:52.779 fused_ordering(503) 00:14:52.779 fused_ordering(504) 00:14:52.779 fused_ordering(505) 00:14:52.779 fused_ordering(506) 00:14:52.779 fused_ordering(507) 00:14:52.779 fused_ordering(508) 00:14:52.779 fused_ordering(509) 00:14:52.779 fused_ordering(510) 00:14:52.779 fused_ordering(511) 00:14:52.779 fused_ordering(512) 00:14:52.779 fused_ordering(513) 00:14:52.779 fused_ordering(514) 00:14:52.779 fused_ordering(515) 00:14:52.779 fused_ordering(516) 00:14:52.779 fused_ordering(517) 00:14:52.779 fused_ordering(518) 00:14:52.779 fused_ordering(519) 00:14:52.779 fused_ordering(520) 00:14:52.779 fused_ordering(521) 00:14:52.779 fused_ordering(522) 00:14:52.779 fused_ordering(523) 00:14:52.779 fused_ordering(524) 00:14:52.779 fused_ordering(525) 00:14:52.779 fused_ordering(526) 00:14:52.779 fused_ordering(527) 00:14:52.779 fused_ordering(528) 00:14:52.779 fused_ordering(529) 00:14:52.779 fused_ordering(530) 00:14:52.779 fused_ordering(531) 00:14:52.779 fused_ordering(532) 00:14:52.779 fused_ordering(533) 
00:14:52.779 fused_ordering(534) 00:14:52.779 fused_ordering(535) 00:14:52.779 fused_ordering(536) 00:14:52.779 fused_ordering(537) 00:14:52.779 fused_ordering(538) 00:14:52.779 fused_ordering(539) 00:14:52.779 fused_ordering(540) 00:14:52.779 fused_ordering(541) 00:14:52.779 fused_ordering(542) 00:14:52.779 fused_ordering(543) 00:14:52.779 fused_ordering(544) 00:14:52.779 fused_ordering(545) 00:14:52.779 fused_ordering(546) 00:14:52.779 fused_ordering(547) 00:14:52.779 fused_ordering(548) 00:14:52.779 fused_ordering(549) 00:14:52.779 fused_ordering(550) 00:14:52.779 fused_ordering(551) 00:14:52.779 fused_ordering(552) 00:14:52.779 fused_ordering(553) 00:14:52.779 fused_ordering(554) 00:14:52.779 fused_ordering(555) 00:14:52.779 fused_ordering(556) 00:14:52.779 fused_ordering(557) 00:14:52.779 fused_ordering(558) 00:14:52.779 fused_ordering(559) 00:14:52.779 fused_ordering(560) 00:14:52.779 fused_ordering(561) 00:14:52.779 fused_ordering(562) 00:14:52.779 fused_ordering(563) 00:14:52.779 fused_ordering(564) 00:14:52.779 fused_ordering(565) 00:14:52.779 fused_ordering(566) 00:14:52.779 fused_ordering(567) 00:14:52.779 fused_ordering(568) 00:14:52.779 fused_ordering(569) 00:14:52.779 fused_ordering(570) 00:14:52.779 fused_ordering(571) 00:14:52.779 fused_ordering(572) 00:14:52.779 fused_ordering(573) 00:14:52.779 fused_ordering(574) 00:14:52.779 fused_ordering(575) 00:14:52.779 fused_ordering(576) 00:14:52.779 fused_ordering(577) 00:14:52.779 fused_ordering(578) 00:14:52.779 fused_ordering(579) 00:14:52.779 fused_ordering(580) 00:14:52.779 fused_ordering(581) 00:14:52.779 fused_ordering(582) 00:14:52.779 fused_ordering(583) 00:14:52.779 fused_ordering(584) 00:14:52.779 fused_ordering(585) 00:14:52.779 fused_ordering(586) 00:14:52.779 fused_ordering(587) 00:14:52.779 fused_ordering(588) 00:14:52.779 fused_ordering(589) 00:14:52.779 fused_ordering(590) 00:14:52.779 fused_ordering(591) 00:14:52.779 fused_ordering(592) 00:14:52.779 fused_ordering(593) 00:14:52.779 
fused_ordering(594) 00:14:52.779 fused_ordering(595) 00:14:52.779 fused_ordering(596) 00:14:52.779 fused_ordering(597) 00:14:52.779 fused_ordering(598) 00:14:52.779 fused_ordering(599) 00:14:52.779 fused_ordering(600) 00:14:52.779 fused_ordering(601) 00:14:52.779 fused_ordering(602) 00:14:52.779 fused_ordering(603) 00:14:52.779 fused_ordering(604) 00:14:52.779 fused_ordering(605) 00:14:52.779 fused_ordering(606) 00:14:52.779 fused_ordering(607) 00:14:52.779 fused_ordering(608) 00:14:52.779 fused_ordering(609) 00:14:52.779 fused_ordering(610) 00:14:52.779 fused_ordering(611) 00:14:52.779 fused_ordering(612) 00:14:52.779 fused_ordering(613) 00:14:52.779 fused_ordering(614) 00:14:52.779 fused_ordering(615) 00:14:53.349 fused_ordering(616) 00:14:53.349 fused_ordering(617) 00:14:53.349 fused_ordering(618) 00:14:53.349 fused_ordering(619) 00:14:53.349 fused_ordering(620) 00:14:53.349 fused_ordering(621) 00:14:53.349 fused_ordering(622) 00:14:53.349 fused_ordering(623) 00:14:53.349 fused_ordering(624) 00:14:53.349 fused_ordering(625) 00:14:53.349 fused_ordering(626) 00:14:53.349 fused_ordering(627) 00:14:53.349 fused_ordering(628) 00:14:53.349 fused_ordering(629) 00:14:53.349 fused_ordering(630) 00:14:53.349 fused_ordering(631) 00:14:53.349 fused_ordering(632) 00:14:53.349 fused_ordering(633) 00:14:53.349 fused_ordering(634) 00:14:53.349 fused_ordering(635) 00:14:53.349 fused_ordering(636) 00:14:53.349 fused_ordering(637) 00:14:53.349 fused_ordering(638) 00:14:53.349 fused_ordering(639) 00:14:53.349 fused_ordering(640) 00:14:53.349 fused_ordering(641) 00:14:53.349 fused_ordering(642) 00:14:53.349 fused_ordering(643) 00:14:53.349 fused_ordering(644) 00:14:53.349 fused_ordering(645) 00:14:53.349 fused_ordering(646) 00:14:53.349 fused_ordering(647) 00:14:53.349 fused_ordering(648) 00:14:53.349 fused_ordering(649) 00:14:53.349 fused_ordering(650) 00:14:53.349 fused_ordering(651) 00:14:53.349 fused_ordering(652) 00:14:53.349 fused_ordering(653) 00:14:53.349 fused_ordering(654) 
00:14:53.349 fused_ordering(655) 00:14:53.349 fused_ordering(656) 00:14:53.349 fused_ordering(657) 00:14:53.349 fused_ordering(658) 00:14:53.349 fused_ordering(659) 00:14:53.349 fused_ordering(660) 00:14:53.349 fused_ordering(661) 00:14:53.349 fused_ordering(662) 00:14:53.349 fused_ordering(663) 00:14:53.349 fused_ordering(664) 00:14:53.349 fused_ordering(665) 00:14:53.349 fused_ordering(666) 00:14:53.349 fused_ordering(667) 00:14:53.349 fused_ordering(668) 00:14:53.350 fused_ordering(669) 00:14:53.350 fused_ordering(670) 00:14:53.350 fused_ordering(671) 00:14:53.350 fused_ordering(672) 00:14:53.350 fused_ordering(673) 00:14:53.350 fused_ordering(674) 00:14:53.350 fused_ordering(675) 00:14:53.350 fused_ordering(676) 00:14:53.350 fused_ordering(677) 00:14:53.350 fused_ordering(678) 00:14:53.350 fused_ordering(679) 00:14:53.350 fused_ordering(680) 00:14:53.350 fused_ordering(681) 00:14:53.350 fused_ordering(682) 00:14:53.350 fused_ordering(683) 00:14:53.350 fused_ordering(684) 00:14:53.350 fused_ordering(685) 00:14:53.350 fused_ordering(686) 00:14:53.350 fused_ordering(687) 00:14:53.350 fused_ordering(688) 00:14:53.350 fused_ordering(689) 00:14:53.350 fused_ordering(690) 00:14:53.350 fused_ordering(691) 00:14:53.350 fused_ordering(692) 00:14:53.350 fused_ordering(693) 00:14:53.350 fused_ordering(694) 00:14:53.350 fused_ordering(695) 00:14:53.350 fused_ordering(696) 00:14:53.350 fused_ordering(697) 00:14:53.350 fused_ordering(698) 00:14:53.350 fused_ordering(699) 00:14:53.350 fused_ordering(700) 00:14:53.350 fused_ordering(701) 00:14:53.350 fused_ordering(702) 00:14:53.350 fused_ordering(703) 00:14:53.350 fused_ordering(704) 00:14:53.350 fused_ordering(705) 00:14:53.350 fused_ordering(706) 00:14:53.350 fused_ordering(707) 00:14:53.350 fused_ordering(708) 00:14:53.350 fused_ordering(709) 00:14:53.350 fused_ordering(710) 00:14:53.350 fused_ordering(711) 00:14:53.350 fused_ordering(712) 00:14:53.350 fused_ordering(713) 00:14:53.350 fused_ordering(714) 00:14:53.350 
fused_ordering(715) 00:14:53.350 fused_ordering(716) 00:14:53.350 fused_ordering(717) 00:14:53.350 fused_ordering(718) 00:14:53.350 fused_ordering(719) 00:14:53.350 fused_ordering(720) 00:14:53.350 fused_ordering(721) 00:14:53.350 fused_ordering(722) 00:14:53.350 fused_ordering(723) 00:14:53.350 fused_ordering(724) 00:14:53.350 fused_ordering(725) 00:14:53.350 fused_ordering(726) 00:14:53.350 fused_ordering(727) 00:14:53.350 fused_ordering(728) 00:14:53.350 fused_ordering(729) 00:14:53.350 fused_ordering(730) 00:14:53.350 fused_ordering(731) 00:14:53.350 fused_ordering(732) 00:14:53.350 fused_ordering(733) 00:14:53.350 fused_ordering(734) 00:14:53.350 fused_ordering(735) 00:14:53.350 fused_ordering(736) 00:14:53.350 fused_ordering(737) 00:14:53.350 fused_ordering(738) 00:14:53.350 fused_ordering(739) 00:14:53.350 fused_ordering(740) 00:14:53.350 fused_ordering(741) 00:14:53.350 fused_ordering(742) 00:14:53.350 fused_ordering(743) 00:14:53.350 fused_ordering(744) 00:14:53.350 fused_ordering(745) 00:14:53.350 fused_ordering(746) 00:14:53.350 fused_ordering(747) 00:14:53.350 fused_ordering(748) 00:14:53.350 fused_ordering(749) 00:14:53.350 fused_ordering(750) 00:14:53.350 fused_ordering(751) 00:14:53.350 fused_ordering(752) 00:14:53.350 fused_ordering(753) 00:14:53.350 fused_ordering(754) 00:14:53.350 fused_ordering(755) 00:14:53.350 fused_ordering(756) 00:14:53.350 fused_ordering(757) 00:14:53.350 fused_ordering(758) 00:14:53.350 fused_ordering(759) 00:14:53.350 fused_ordering(760) 00:14:53.350 fused_ordering(761) 00:14:53.350 fused_ordering(762) 00:14:53.350 fused_ordering(763) 00:14:53.350 fused_ordering(764) 00:14:53.350 fused_ordering(765) 00:14:53.350 fused_ordering(766) 00:14:53.350 fused_ordering(767) 00:14:53.350 fused_ordering(768) 00:14:53.350 fused_ordering(769) 00:14:53.350 fused_ordering(770) 00:14:53.350 fused_ordering(771) 00:14:53.350 fused_ordering(772) 00:14:53.350 fused_ordering(773) 00:14:53.350 fused_ordering(774) 00:14:53.350 fused_ordering(775) 
00:14:53.350 fused_ordering(776) 00:14:53.350 fused_ordering(777) 00:14:53.350 fused_ordering(778) 00:14:53.350 fused_ordering(779) 00:14:53.350 fused_ordering(780) 00:14:53.350 fused_ordering(781) 00:14:53.350 fused_ordering(782) 00:14:53.350 fused_ordering(783) 00:14:53.350 fused_ordering(784) 00:14:53.350 fused_ordering(785) 00:14:53.350 fused_ordering(786) 00:14:53.350 fused_ordering(787) 00:14:53.350 fused_ordering(788) 00:14:53.350 fused_ordering(789) 00:14:53.350 fused_ordering(790) 00:14:53.350 fused_ordering(791) 00:14:53.350 fused_ordering(792) 00:14:53.350 fused_ordering(793) 00:14:53.350 fused_ordering(794) 00:14:53.350 fused_ordering(795) 00:14:53.350 fused_ordering(796) 00:14:53.350 fused_ordering(797) 00:14:53.350 fused_ordering(798) 00:14:53.350 fused_ordering(799) 00:14:53.350 fused_ordering(800) 00:14:53.350 fused_ordering(801) 00:14:53.350 fused_ordering(802) 00:14:53.350 fused_ordering(803) 00:14:53.350 fused_ordering(804) 00:14:53.350 fused_ordering(805) 00:14:53.350 fused_ordering(806) 00:14:53.350 fused_ordering(807) 00:14:53.350 fused_ordering(808) 00:14:53.350 fused_ordering(809) 00:14:53.350 fused_ordering(810) 00:14:53.350 fused_ordering(811) 00:14:53.350 fused_ordering(812) 00:14:53.350 fused_ordering(813) 00:14:53.350 fused_ordering(814) 00:14:53.350 fused_ordering(815) 00:14:53.350 fused_ordering(816) 00:14:53.350 fused_ordering(817) 00:14:53.350 fused_ordering(818) 00:14:53.350 fused_ordering(819) 00:14:53.350 fused_ordering(820) 00:14:54.294 fused_ordering(821) 00:14:54.294 fused_ordering(822) 00:14:54.294 fused_ordering(823) 00:14:54.294 fused_ordering(824) 00:14:54.294 fused_ordering(825) 00:14:54.294 fused_ordering(826) 00:14:54.294 fused_ordering(827) 00:14:54.294 fused_ordering(828) 00:14:54.294 fused_ordering(829) 00:14:54.294 fused_ordering(830) 00:14:54.294 fused_ordering(831) 00:14:54.294 fused_ordering(832) 00:14:54.294 fused_ordering(833) 00:14:54.294 fused_ordering(834) 00:14:54.294 fused_ordering(835) 00:14:54.294 
fused_ordering(836) 00:14:54.294 fused_ordering(837) 00:14:54.294 fused_ordering(838) 00:14:54.294 fused_ordering(839) 00:14:54.294 fused_ordering(840) 00:14:54.294 fused_ordering(841) 00:14:54.294 fused_ordering(842) 00:14:54.294 fused_ordering(843) 00:14:54.294 fused_ordering(844) 00:14:54.294 fused_ordering(845) 00:14:54.294 fused_ordering(846) 00:14:54.294 fused_ordering(847) 00:14:54.294 fused_ordering(848) 00:14:54.294 fused_ordering(849) 00:14:54.294 fused_ordering(850) 00:14:54.294 fused_ordering(851) 00:14:54.294 fused_ordering(852) 00:14:54.294 fused_ordering(853) 00:14:54.294 fused_ordering(854) 00:14:54.294 fused_ordering(855) 00:14:54.294 fused_ordering(856) 00:14:54.294 fused_ordering(857) 00:14:54.294 fused_ordering(858) 00:14:54.294 fused_ordering(859) 00:14:54.294 fused_ordering(860) 00:14:54.294 fused_ordering(861) 00:14:54.294 fused_ordering(862) 00:14:54.294 fused_ordering(863) 00:14:54.294 fused_ordering(864) 00:14:54.294 fused_ordering(865) 00:14:54.294 fused_ordering(866) 00:14:54.294 fused_ordering(867) 00:14:54.294 fused_ordering(868) 00:14:54.294 fused_ordering(869) 00:14:54.294 fused_ordering(870) 00:14:54.294 fused_ordering(871) 00:14:54.294 fused_ordering(872) 00:14:54.294 fused_ordering(873) 00:14:54.294 fused_ordering(874) 00:14:54.294 fused_ordering(875) 00:14:54.294 fused_ordering(876) 00:14:54.294 fused_ordering(877) 00:14:54.294 fused_ordering(878) 00:14:54.294 fused_ordering(879) 00:14:54.294 fused_ordering(880) 00:14:54.294 fused_ordering(881) 00:14:54.294 fused_ordering(882) 00:14:54.294 fused_ordering(883) 00:14:54.294 fused_ordering(884) 00:14:54.294 fused_ordering(885) 00:14:54.294 fused_ordering(886) 00:14:54.294 fused_ordering(887) 00:14:54.294 fused_ordering(888) 00:14:54.294 fused_ordering(889) 00:14:54.294 fused_ordering(890) 00:14:54.294 fused_ordering(891) 00:14:54.294 fused_ordering(892) 00:14:54.294 fused_ordering(893) 00:14:54.294 fused_ordering(894) 00:14:54.294 fused_ordering(895) 00:14:54.294 fused_ordering(896) 
00:14:54.294 fused_ordering(897) 00:14:54.294 fused_ordering(898) 00:14:54.294 fused_ordering(899) 00:14:54.294 fused_ordering(900) 00:14:54.294 fused_ordering(901) 00:14:54.294 fused_ordering(902) 00:14:54.294 fused_ordering(903) 00:14:54.294 fused_ordering(904) 00:14:54.294 fused_ordering(905) 00:14:54.294 fused_ordering(906) 00:14:54.294 fused_ordering(907) 00:14:54.294 fused_ordering(908) 00:14:54.294 fused_ordering(909) 00:14:54.294 fused_ordering(910) 00:14:54.294 fused_ordering(911) 00:14:54.294 fused_ordering(912) 00:14:54.294 fused_ordering(913) 00:14:54.294 fused_ordering(914) 00:14:54.294 fused_ordering(915) 00:14:54.294 fused_ordering(916) 00:14:54.294 fused_ordering(917) 00:14:54.295 fused_ordering(918) 00:14:54.295 fused_ordering(919) 00:14:54.295 fused_ordering(920) 00:14:54.295 fused_ordering(921) 00:14:54.295 fused_ordering(922) 00:14:54.295 fused_ordering(923) 00:14:54.295 fused_ordering(924) 00:14:54.295 fused_ordering(925) 00:14:54.295 fused_ordering(926) 00:14:54.295 fused_ordering(927) 00:14:54.295 fused_ordering(928) 00:14:54.295 fused_ordering(929) 00:14:54.295 fused_ordering(930) 00:14:54.295 fused_ordering(931) 00:14:54.295 fused_ordering(932) 00:14:54.295 fused_ordering(933) 00:14:54.295 fused_ordering(934) 00:14:54.295 fused_ordering(935) 00:14:54.295 fused_ordering(936) 00:14:54.295 fused_ordering(937) 00:14:54.295 fused_ordering(938) 00:14:54.295 fused_ordering(939) 00:14:54.295 fused_ordering(940) 00:14:54.295 fused_ordering(941) 00:14:54.295 fused_ordering(942) 00:14:54.295 fused_ordering(943) 00:14:54.295 fused_ordering(944) 00:14:54.295 fused_ordering(945) 00:14:54.295 fused_ordering(946) 00:14:54.295 fused_ordering(947) 00:14:54.295 fused_ordering(948) 00:14:54.295 fused_ordering(949) 00:14:54.295 fused_ordering(950) 00:14:54.295 fused_ordering(951) 00:14:54.295 fused_ordering(952) 00:14:54.295 fused_ordering(953) 00:14:54.295 fused_ordering(954) 00:14:54.295 fused_ordering(955) 00:14:54.295 fused_ordering(956) 00:14:54.295 
fused_ordering(957) 00:14:54.295 fused_ordering(958) 00:14:54.295 fused_ordering(959) 00:14:54.295 fused_ordering(960) 00:14:54.295 fused_ordering(961) 00:14:54.295 fused_ordering(962) 00:14:54.295 fused_ordering(963) 00:14:54.295 fused_ordering(964) 00:14:54.295 fused_ordering(965) 00:14:54.295 fused_ordering(966) 00:14:54.295 fused_ordering(967) 00:14:54.295 fused_ordering(968) 00:14:54.295 fused_ordering(969) 00:14:54.295 fused_ordering(970) 00:14:54.295 fused_ordering(971) 00:14:54.295 fused_ordering(972) 00:14:54.295 fused_ordering(973) 00:14:54.295 fused_ordering(974) 00:14:54.295 fused_ordering(975) 00:14:54.295 fused_ordering(976) 00:14:54.295 fused_ordering(977) 00:14:54.295 fused_ordering(978) 00:14:54.295 fused_ordering(979) 00:14:54.295 fused_ordering(980) 00:14:54.295 fused_ordering(981) 00:14:54.295 fused_ordering(982) 00:14:54.295 fused_ordering(983) 00:14:54.295 fused_ordering(984) 00:14:54.295 fused_ordering(985) 00:14:54.295 fused_ordering(986) 00:14:54.295 fused_ordering(987) 00:14:54.295 fused_ordering(988) 00:14:54.295 fused_ordering(989) 00:14:54.295 fused_ordering(990) 00:14:54.295 fused_ordering(991) 00:14:54.295 fused_ordering(992) 00:14:54.295 fused_ordering(993) 00:14:54.295 fused_ordering(994) 00:14:54.295 fused_ordering(995) 00:14:54.295 fused_ordering(996) 00:14:54.295 fused_ordering(997) 00:14:54.295 fused_ordering(998) 00:14:54.295 fused_ordering(999) 00:14:54.295 fused_ordering(1000) 00:14:54.295 fused_ordering(1001) 00:14:54.295 fused_ordering(1002) 00:14:54.295 fused_ordering(1003) 00:14:54.295 fused_ordering(1004) 00:14:54.295 fused_ordering(1005) 00:14:54.295 fused_ordering(1006) 00:14:54.295 fused_ordering(1007) 00:14:54.295 fused_ordering(1008) 00:14:54.295 fused_ordering(1009) 00:14:54.295 fused_ordering(1010) 00:14:54.295 fused_ordering(1011) 00:14:54.295 fused_ordering(1012) 00:14:54.295 fused_ordering(1013) 00:14:54.295 fused_ordering(1014) 00:14:54.295 fused_ordering(1015) 00:14:54.295 fused_ordering(1016) 00:14:54.295 
fused_ordering(1017) 00:14:54.295 fused_ordering(1018) 00:14:54.295 fused_ordering(1019) 00:14:54.295 fused_ordering(1020) 00:14:54.295 fused_ordering(1021) 00:14:54.295 fused_ordering(1022) 00:14:54.295 fused_ordering(1023) 00:14:54.295 15:10:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:14:54.295 15:10:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:14:54.295 15:10:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:54.295 15:10:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:14:54.295 15:10:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:54.295 15:10:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:14:54.295 15:10:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:54.295 15:10:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:54.295 rmmod nvme_tcp 00:14:54.295 rmmod nvme_fabrics 00:14:54.295 rmmod nvme_keyring 00:14:54.295 15:10:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:54.295 15:10:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:14:54.295 15:10:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:14:54.295 15:10:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 204194 ']' 00:14:54.295 15:10:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 204194 00:14:54.295 15:10:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # '[' -z 204194 ']' 00:14:54.295 15:10:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@954 -- # kill -0 204194 00:14:54.295 15:10:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # uname 00:14:54.295 15:10:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:54.295 15:10:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 204194 00:14:54.295 15:10:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:14:54.295 15:10:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:14:54.295 15:10:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@968 -- # echo 'killing process with pid 204194' 00:14:54.295 killing process with pid 204194 00:14:54.295 15:10:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@969 -- # kill 204194 00:14:54.295 15:10:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@974 -- # wait 204194 00:14:54.557 15:10:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:54.557 15:10:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:54.557 15:10:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:54.557 15:10:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:54.557 15:10:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:54.557 15:10:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:54.557 15:10:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 
00:14:54.557 15:10:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:56.618 15:10:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:56.618 00:14:56.618 real 0m13.291s 00:14:56.618 user 0m7.387s 00:14:56.618 sys 0m7.400s 00:14:56.618 15:10:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:56.618 15:10:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:56.618 ************************************ 00:14:56.618 END TEST nvmf_fused_ordering 00:14:56.618 ************************************ 00:14:56.618 15:10:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:14:56.618 15:10:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:56.618 15:10:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:56.618 15:10:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:56.618 ************************************ 00:14:56.618 START TEST nvmf_ns_masking 00:14:56.618 ************************************ 00:14:56.618 15:10:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1125 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:14:56.618 * Looking for test storage... 
00:14:56.618 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:56.618 15:10:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:56.618 15:10:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:14:56.618 15:10:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:56.618 15:10:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:56.618 15:10:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:56.618 15:10:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:56.618 15:10:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:56.618 15:10:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:56.618 15:10:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:56.618 15:10:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:56.618 15:10:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:56.618 15:10:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:56.618 15:10:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:56.618 15:10:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:56.618 15:10:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:56.618 
15:10:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:56.618 15:10:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:56.618 15:10:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:56.618 15:10:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:56.618 15:10:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:56.618 15:10:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:56.618 15:10:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:56.618 15:10:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.619 15:10:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.619 15:10:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.619 15:10:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:14:56.619 15:10:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.619 15:10:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:14:56.619 15:10:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:56.619 15:10:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:56.619 15:10:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:56.619 15:10:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:56.619 15:10:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:56.619 15:10:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:56.619 15:10:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:56.619 15:10:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:56.619 15:10:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:56.619 15:10:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:14:56.619 15:10:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # 
loops=5 00:14:56.619 15:10:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:14:56.880 15:10:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=e400043a-83a8-4455-a547-a2b95640add1 00:14:56.880 15:10:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:14:56.880 15:10:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=88970cea-0027-42dc-b3d9-2076eb77c178 00:14:56.880 15:10:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:14:56.880 15:10:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:14:56.880 15:10:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:14:56.880 15:10:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:14:56.880 15:10:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=ac629980-c37a-43bb-b7a7-97213cf1038a 00:14:56.880 15:10:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:14:56.880 15:10:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:56.880 15:10:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:56.880 15:10:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:56.880 15:10:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:56.880 15:10:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:56.880 15:10:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:56.880 15:10:48 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:56.880 15:10:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:56.880 15:10:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:56.880 15:10:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:56.880 15:10:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:14:56.880 15:10:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:03.471 15:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:03.471 15:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:15:03.471 15:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:03.471 15:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:03.471 15:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:03.471 15:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:03.471 15:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:03.471 15:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:15:03.471 15:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:03.471 15:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:15:03.471 15:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:15:03.471 15:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # 
x722=() 00:15:03.471 15:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:15:03.471 15:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:15:03.471 15:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:15:03.471 15:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:03.471 15:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:03.471 15:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:03.471 15:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:03.471 15:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:03.471 15:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:03.471 15:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:03.471 15:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:03.471 15:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:03.471 15:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:03.471 15:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:03.471 15:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:03.471 15:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:03.471 15:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:03.471 15:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:03.471 15:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:03.471 15:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:03.471 15:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:03.471 15:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:15:03.471 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:15:03.471 15:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:03.471 15:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:03.471 15:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:03.471 15:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:03.471 15:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:03.471 15:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:03.471 15:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:15:03.472 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:15:03.472 15:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:03.472 15:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:03.472 15:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:15:03.472 15:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:03.472 15:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:03.472 15:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:03.472 15:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:03.472 15:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:03.472 15:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:03.472 15:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:03.472 15:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:03.472 15:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:03.472 15:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:03.472 15:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:03.472 15:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:03.472 15:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:15:03.472 Found net devices under 0000:4b:00.0: cvl_0_0 00:15:03.472 15:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:03.472 15:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:03.472 15:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:03.472 15:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:03.472 15:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:03.472 15:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:03.472 15:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:03.472 15:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:03.472 15:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:15:03.472 Found net devices under 0000:4b:00.1: cvl_0_1 00:15:03.472 15:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:03.472 15:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:03.472 15:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:15:03.472 15:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:03.472 15:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:03.472 15:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:03.472 15:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:03.472 15:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:03.472 15:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:03.472 15:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:03.472 15:10:55 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:03.472 15:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:03.472 15:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:03.472 15:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:03.472 15:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:03.472 15:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:03.472 15:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:03.472 15:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:03.472 15:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:03.472 15:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:03.734 15:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:03.734 15:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:03.734 15:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:03.734 15:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:03.734 15:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:03.734 15:10:55 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:03.734 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:03.734 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.700 ms 00:15:03.734 00:15:03.734 --- 10.0.0.2 ping statistics --- 00:15:03.734 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:03.734 rtt min/avg/max/mdev = 0.700/0.700/0.700/0.000 ms 00:15:03.734 15:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:03.734 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:03.734 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.362 ms 00:15:03.734 00:15:03.734 --- 10.0.0.1 ping statistics --- 00:15:03.734 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:03.734 rtt min/avg/max/mdev = 0.362/0.362/0.362/0.000 ms 00:15:03.734 15:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:03.734 15:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:15:03.734 15:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:03.734 15:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:03.734 15:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:03.734 15:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:03.734 15:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:03.734 15:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:03.734 15:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:03.734 15:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@51 -- # nvmfappstart 00:15:03.734 15:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:03.734 15:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:03.734 15:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:03.734 15:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=209224 00:15:03.734 15:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 209224 00:15:03.735 15:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 209224 ']' 00:15:03.735 15:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:03.735 15:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:15:03.735 15:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:03.735 15:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:03.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:03.735 15:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:03.735 15:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:03.996 [2024-07-25 15:10:55.943275] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:15:03.996 [2024-07-25 15:10:55.943343] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:03.996 EAL: No free 2048 kB hugepages reported on node 1 00:15:03.996 [2024-07-25 15:10:56.013308] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:03.996 [2024-07-25 15:10:56.086318] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:03.996 [2024-07-25 15:10:56.086357] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:03.996 [2024-07-25 15:10:56.086365] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:03.996 [2024-07-25 15:10:56.086372] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:03.996 [2024-07-25 15:10:56.086378] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:03.996 [2024-07-25 15:10:56.086397] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:04.568 15:10:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:04.568 15:10:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:15:04.568 15:10:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:04.568 15:10:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:04.568 15:10:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:04.568 15:10:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:04.568 15:10:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:04.830 [2024-07-25 15:10:56.893375] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:04.830 15:10:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:15:04.830 15:10:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:15:04.830 15:10:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:05.091 Malloc1 00:15:05.091 15:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:05.091 Malloc2 00:15:05.352 15:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:05.352 15:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:15:05.613 15:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:05.613 [2024-07-25 15:10:57.783986] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:05.875 15:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:15:05.875 15:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I ac629980-c37a-43bb-b7a7-97213cf1038a -a 10.0.0.2 -s 4420 -i 4 00:15:05.875 15:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:15:05.875 15:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:15:05.875 15:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:05.875 15:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:05.875 15:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:15:08.421 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:08.421 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:08.421 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # 
grep -c SPDKISFASTANDAWESOME 00:15:08.421 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:08.421 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:08.421 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:15:08.421 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:15:08.421 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:08.421 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:15:08.421 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:15:08.421 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:15:08.421 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:08.421 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:08.421 [ 0]:0x1 00:15:08.421 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:08.421 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:08.421 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=da32a599b4ef46a2bc8b8e8f819dbb12 00:15:08.421 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ da32a599b4ef46a2bc8b8e8f819dbb12 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:08.421 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:15:08.421 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:15:08.421 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:08.421 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:08.421 [ 0]:0x1 00:15:08.421 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:08.421 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:08.421 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=da32a599b4ef46a2bc8b8e8f819dbb12 00:15:08.421 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ da32a599b4ef46a2bc8b8e8f819dbb12 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:08.421 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:15:08.421 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:08.421 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:08.421 [ 1]:0x2 00:15:08.421 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:08.421 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:08.421 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=641c8c6022d54d86877765617f43390d 00:15:08.421 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 641c8c6022d54d86877765617f43390d != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:08.421 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:15:08.421 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:08.682 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:08.682 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:08.943 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:15:08.943 15:11:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:15:08.943 15:11:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I ac629980-c37a-43bb-b7a7-97213cf1038a -a 10.0.0.2 -s 4420 -i 4 00:15:09.204 15:11:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:15:09.204 15:11:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:15:09.204 15:11:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:09.204 15:11:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:15:09.204 15:11:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:15:09.204 15:11:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:15:11.120 15:11:03 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:11.120 15:11:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:11.120 15:11:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:11.382 15:11:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:11.382 15:11:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:11.382 15:11:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:15:11.382 15:11:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:15:11.382 15:11:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:11.382 15:11:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:15:11.382 15:11:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:15:11.382 15:11:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:15:11.382 15:11:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:15:11.382 15:11:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:15:11.382 15:11:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:15:11.383 15:11:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:11.383 15:11:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # 
type -t ns_is_visible 00:15:11.383 15:11:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:11.383 15:11:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:15:11.383 15:11:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:11.383 15:11:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:11.383 15:11:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:11.383 15:11:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:11.383 15:11:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:15:11.383 15:11:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:11.383 15:11:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:15:11.383 15:11:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:11.383 15:11:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:11.383 15:11:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:11.383 15:11:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:15:11.383 15:11:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:11.383 15:11:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:11.383 [ 0]:0x2 00:15:11.383 15:11:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 
-- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:11.383 15:11:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:11.649 15:11:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=641c8c6022d54d86877765617f43390d 00:15:11.649 15:11:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 641c8c6022d54d86877765617f43390d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:11.649 15:11:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:11.649 15:11:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:15:11.649 15:11:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:11.649 15:11:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:11.649 [ 0]:0x1 00:15:11.649 15:11:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:11.649 15:11:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:11.649 15:11:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=da32a599b4ef46a2bc8b8e8f819dbb12 00:15:11.649 15:11:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ da32a599b4ef46a2bc8b8e8f819dbb12 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:11.649 15:11:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:15:11.649 15:11:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:11.649 15:11:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@43 -- # grep 0x2 00:15:11.649 [ 1]:0x2 00:15:11.649 15:11:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:11.649 15:11:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:11.910 15:11:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=641c8c6022d54d86877765617f43390d 00:15:11.910 15:11:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 641c8c6022d54d86877765617f43390d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:11.910 15:11:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:11.910 15:11:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:15:11.910 15:11:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:15:11.910 15:11:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:15:11.910 15:11:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:15:11.910 15:11:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:11.910 15:11:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:15:11.910 15:11:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:11.910 15:11:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:15:11.910 15:11:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns 
/dev/nvme0 00:15:11.910 15:11:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:11.910 15:11:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:11.910 15:11:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:11.910 15:11:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:15:11.910 15:11:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:11.910 15:11:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:15:11.910 15:11:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:11.910 15:11:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:11.910 15:11:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:11.910 15:11:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:15:11.910 15:11:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:11.910 15:11:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:11.910 [ 0]:0x2 00:15:11.910 15:11:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:11.910 15:11:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:12.172 15:11:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=641c8c6022d54d86877765617f43390d 00:15:12.172 15:11:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- 
# [[ 641c8c6022d54d86877765617f43390d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:12.172 15:11:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:15:12.172 15:11:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:12.172 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:12.172 15:11:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:12.172 15:11:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:15:12.172 15:11:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I ac629980-c37a-43bb-b7a7-97213cf1038a -a 10.0.0.2 -s 4420 -i 4 00:15:12.434 15:11:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:15:12.434 15:11:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:15:12.434 15:11:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:12.434 15:11:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:15:12.434 15:11:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:15:12.434 15:11:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:15:14.343 15:11:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:14.343 15:11:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l 
-o NAME,SERIAL 00:15:14.343 15:11:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:14.343 15:11:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:15:14.343 15:11:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:14.343 15:11:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:15:14.343 15:11:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:14.343 15:11:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:15:14.603 15:11:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:15:14.603 15:11:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:15:14.603 15:11:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:15:14.603 15:11:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:14.603 15:11:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:14.603 [ 0]:0x1 00:15:14.603 15:11:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:14.603 15:11:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:14.603 15:11:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=da32a599b4ef46a2bc8b8e8f819dbb12 00:15:14.603 15:11:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ da32a599b4ef46a2bc8b8e8f819dbb12 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:14.603 15:11:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:15:14.603 15:11:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:14.603 15:11:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:14.865 [ 1]:0x2 00:15:14.865 15:11:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:14.865 15:11:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:14.865 15:11:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=641c8c6022d54d86877765617f43390d 00:15:14.865 15:11:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 641c8c6022d54d86877765617f43390d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:14.865 15:11:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:14.865 15:11:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:15:14.865 15:11:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:15:14.865 15:11:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:15:14.865 15:11:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:15:14.865 15:11:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:14.865 15:11:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t 
ns_is_visible 00:15:14.865 15:11:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:14.865 15:11:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:15:14.865 15:11:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:14.865 15:11:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:14.865 15:11:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:14.865 15:11:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:15.126 15:11:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:15:15.126 15:11:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:15.126 15:11:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:15:15.126 15:11:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:15.126 15:11:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:15.126 15:11:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:15.126 15:11:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:15:15.126 15:11:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:15.126 15:11:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:15.126 [ 0]:0x2 00:15:15.126 15:11:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:15.126 15:11:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:15.126 15:11:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=641c8c6022d54d86877765617f43390d 00:15:15.126 15:11:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 641c8c6022d54d86877765617f43390d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:15.126 15:11:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:15.126 15:11:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:15:15.126 15:11:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:15.126 15:11:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:15.126 15:11:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:15.126 15:11:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:15.126 15:11:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:15.126 15:11:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:15.126 15:11:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # 
case "$(type -t "$arg")" in 00:15:15.126 15:11:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:15.126 15:11:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:15:15.126 15:11:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:15.126 [2024-07-25 15:11:07.287104] nvmf_rpc.c:1798:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:15:15.126 request: 00:15:15.126 { 00:15:15.126 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:15.126 "nsid": 2, 00:15:15.126 "host": "nqn.2016-06.io.spdk:host1", 00:15:15.126 "method": "nvmf_ns_remove_host", 00:15:15.126 "req_id": 1 00:15:15.126 } 00:15:15.126 Got JSON-RPC error response 00:15:15.126 response: 00:15:15.126 { 00:15:15.126 "code": -32602, 00:15:15.126 "message": "Invalid parameters" 00:15:15.126 } 00:15:15.387 15:11:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:15:15.387 15:11:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:15.387 15:11:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:15.387 15:11:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:15.387 15:11:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:15:15.387 15:11:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:15:15.387 15:11:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # 
valid_exec_arg ns_is_visible 0x1 00:15:15.387 15:11:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:15:15.387 15:11:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:15.387 15:11:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:15:15.387 15:11:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:15.387 15:11:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:15:15.387 15:11:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:15.387 15:11:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:15.387 15:11:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:15.387 15:11:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:15.387 15:11:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:15:15.387 15:11:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:15.387 15:11:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:15:15.387 15:11:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:15.387 15:11:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:15.387 15:11:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:15.387 15:11:07 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:15:15.387 15:11:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:15.387 15:11:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:15.387 [ 0]:0x2 00:15:15.387 15:11:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:15.387 15:11:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:15.387 15:11:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=641c8c6022d54d86877765617f43390d 00:15:15.387 15:11:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 641c8c6022d54d86877765617f43390d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:15.387 15:11:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:15:15.387 15:11:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:15.649 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:15.649 15:11:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=211586 00:15:15.649 15:11:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:15:15.649 15:11:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:15:15.649 15:11:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 211586 /var/tmp/host.sock 00:15:15.649 15:11:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 211586 ']' 00:15:15.649 15:11:07 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:15:15.649 15:11:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:15.649 15:11:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:15:15.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:15:15.649 15:11:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:15.649 15:11:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:15.649 [2024-07-25 15:11:07.672451] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:15:15.649 [2024-07-25 15:11:07.672502] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid211586 ] 00:15:15.649 EAL: No free 2048 kB hugepages reported on node 1 00:15:15.649 [2024-07-25 15:11:07.748837] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:15.649 [2024-07-25 15:11:07.813209] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:16.622 15:11:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:16.622 15:11:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:15:16.622 15:11:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:16.622 15:11:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:16.622 15:11:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid e400043a-83a8-4455-a547-a2b95640add1 00:15:16.622 15:11:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:15:16.622 15:11:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g E400043A83A84455A547A2B95640ADD1 -i 00:15:16.882 15:11:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 88970cea-0027-42dc-b3d9-2076eb77c178 00:15:16.882 15:11:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:15:16.882 15:11:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 88970CEA002742DCB3D92076EB77C178 -i 00:15:17.142 15:11:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:17.142 15:11:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:15:17.404 15:11:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:15:17.404 15:11:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:15:17.664 nvme0n1 00:15:17.664 15:11:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:15:17.664 15:11:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:15:18.234 nvme1n2 00:15:18.234 15:11:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:15:18.234 15:11:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:15:18.234 15:11:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:15:18.234 15:11:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:15:18.234 15:11:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:15:18.234 15:11:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:15:18.234 15:11:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:15:18.234 15:11:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:15:18.234 15:11:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:15:18.494 15:11:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ e400043a-83a8-4455-a547-a2b95640add1 == \e\4\0\0\0\4\3\a\-\8\3\a\8\-\4\4\5\5\-\a\5\4\7\-\a\2\b\9\5\6\4\0\a\d\d\1 ]] 00:15:18.494 15:11:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:15:18.494 15:11:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:15:18.494 15:11:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:15:18.494 15:11:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 88970cea-0027-42dc-b3d9-2076eb77c178 == \8\8\9\7\0\c\e\a\-\0\0\2\7\-\4\2\d\c\-\b\3\d\9\-\2\0\7\6\e\b\7\7\c\1\7\8 ]] 00:15:18.494 15:11:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 211586 00:15:18.494 15:11:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 211586 ']' 00:15:18.494 15:11:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 211586 00:15:18.494 15:11:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:15:18.494 15:11:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:18.495 15:11:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 211586 00:15:18.756 15:11:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:15:18.756 15:11:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:15:18.756 
15:11:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 211586' 00:15:18.756 killing process with pid 211586 00:15:18.756 15:11:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 211586 00:15:18.756 15:11:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 211586 00:15:18.756 15:11:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:19.017 15:11:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:15:19.017 15:11:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:15:19.017 15:11:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:19.017 15:11:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:15:19.017 15:11:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:19.017 15:11:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:15:19.017 15:11:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:19.017 15:11:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:19.017 rmmod nvme_tcp 00:15:19.017 rmmod nvme_fabrics 00:15:19.017 rmmod nvme_keyring 00:15:19.017 15:11:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:19.017 15:11:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:15:19.017 15:11:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:15:19.017 15:11:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 
209224 ']' 00:15:19.018 15:11:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 209224 00:15:19.018 15:11:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 209224 ']' 00:15:19.018 15:11:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 209224 00:15:19.018 15:11:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:15:19.018 15:11:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:19.018 15:11:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 209224 00:15:19.278 15:11:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:19.278 15:11:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:19.278 15:11:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 209224' 00:15:19.278 killing process with pid 209224 00:15:19.278 15:11:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 209224 00:15:19.278 15:11:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 209224 00:15:19.278 15:11:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:19.278 15:11:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:19.278 15:11:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:19.278 15:11:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:19.278 15:11:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:19.278 15:11:11 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:19.278 15:11:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:19.278 15:11:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:21.828 15:11:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:21.828 00:15:21.828 real 0m24.757s 00:15:21.828 user 0m24.978s 00:15:21.828 sys 0m7.329s 00:15:21.828 15:11:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:21.828 15:11:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:21.828 ************************************ 00:15:21.828 END TEST nvmf_ns_masking 00:15:21.828 ************************************ 00:15:21.828 15:11:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:15:21.828 15:11:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:15:21.828 15:11:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:21.828 15:11:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:21.828 15:11:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:21.828 ************************************ 00:15:21.828 START TEST nvmf_nvme_cli 00:15:21.828 ************************************ 00:15:21.828 15:11:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:15:21.828 * Looking for test storage... 
00:15:21.828 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:21.828 15:11:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:21.828 15:11:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:15:21.828 15:11:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:21.828 15:11:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:21.828 15:11:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:21.828 15:11:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:21.828 15:11:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:21.828 15:11:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:21.828 15:11:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:21.828 15:11:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:21.828 15:11:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:21.828 15:11:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:21.828 15:11:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:21.828 15:11:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:21.828 15:11:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:21.828 15:11:13 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:21.828 15:11:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:21.828 15:11:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:21.828 15:11:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:21.828 15:11:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:21.828 15:11:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:21.828 15:11:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:21.828 15:11:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:21.828 15:11:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:21.828 15:11:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:21.828 15:11:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:15:21.828 15:11:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:21.828 15:11:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:15:21.828 15:11:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:21.828 15:11:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:21.828 15:11:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:21.828 15:11:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:21.828 15:11:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:21.828 15:11:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:21.828 15:11:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:21.829 15:11:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:21.829 15:11:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:21.829 15:11:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:21.829 15:11:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:15:21.829 15:11:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
target/nvme_cli.sh@16 -- # nvmftestinit 00:15:21.829 15:11:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:21.829 15:11:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:21.829 15:11:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:21.829 15:11:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:21.829 15:11:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:21.829 15:11:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:21.829 15:11:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:21.829 15:11:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:21.829 15:11:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:21.829 15:11:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:21.829 15:11:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:15:21.829 15:11:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:29.971 15:11:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:29.971 15:11:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:15:29.971 15:11:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:29.971 15:11:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:29.971 15:11:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:29.971 
15:11:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:29.971 15:11:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:29.971 15:11:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:15:29.971 15:11:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:29.971 15:11:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:15:29.971 15:11:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:15:29.971 15:11:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:15:29.971 15:11:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:15:29.971 15:11:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:15:29.971 15:11:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:15:29.971 15:11:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:29.971 15:11:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:29.971 15:11:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:29.972 15:11:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:29.972 15:11:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:29.972 15:11:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:29.972 15:11:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:29.972 15:11:20 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:29.972 15:11:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:29.972 15:11:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:29.972 15:11:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:29.972 15:11:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:29.972 15:11:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:29.972 15:11:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:29.972 15:11:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:29.972 15:11:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:29.972 15:11:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:29.972 15:11:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:29.972 15:11:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:15:29.972 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:15:29.972 15:11:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:29.972 15:11:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:29.972 15:11:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:29.972 15:11:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:29.972 15:11:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:29.972 15:11:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:29.972 15:11:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:15:29.972 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:15:29.972 15:11:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:29.972 15:11:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:29.972 15:11:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:29.972 15:11:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:29.972 15:11:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:29.972 15:11:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:29.972 15:11:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:29.972 15:11:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:29.972 15:11:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:29.972 15:11:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:29.972 15:11:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:29.972 15:11:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:29.972 15:11:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:29.972 15:11:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:29.972 15:11:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:29.972 15:11:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:15:29.972 Found net devices under 0000:4b:00.0: cvl_0_0 00:15:29.972 15:11:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:29.972 15:11:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:29.972 15:11:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:29.972 15:11:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:29.972 15:11:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:29.972 15:11:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:29.972 15:11:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:29.972 15:11:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:29.972 15:11:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:15:29.972 Found net devices under 0000:4b:00.1: cvl_0_1 00:15:29.972 15:11:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:29.972 15:11:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:29.972 15:11:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:15:29.972 15:11:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:29.972 15:11:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:29.972 15:11:20 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:29.972 15:11:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:29.972 15:11:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:29.972 15:11:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:29.972 15:11:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:29.972 15:11:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:29.972 15:11:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:29.972 15:11:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:29.972 15:11:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:29.972 15:11:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:29.972 15:11:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:29.972 15:11:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:29.972 15:11:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:29.972 15:11:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:29.972 15:11:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:29.972 15:11:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:29.972 15:11:20 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:29.972 15:11:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:29.972 15:11:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:29.972 15:11:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:29.972 15:11:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:29.972 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:29.972 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.556 ms 00:15:29.972 00:15:29.972 --- 10.0.0.2 ping statistics --- 00:15:29.972 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:29.972 rtt min/avg/max/mdev = 0.556/0.556/0.556/0.000 ms 00:15:29.972 15:11:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:29.972 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:29.972 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.364 ms 00:15:29.972 00:15:29.972 --- 10.0.0.1 ping statistics --- 00:15:29.972 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:29.972 rtt min/avg/max/mdev = 0.364/0.364/0.364/0.000 ms 00:15:29.972 15:11:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:29.972 15:11:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:15:29.972 15:11:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:29.972 15:11:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:29.972 15:11:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:29.972 15:11:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:29.972 15:11:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:29.972 15:11:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:29.972 15:11:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:29.972 15:11:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:15:29.972 15:11:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:29.972 15:11:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:29.972 15:11:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:29.972 15:11:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=216437 00:15:29.972 15:11:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 216437 00:15:29.972 15:11:20 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:29.972 15:11:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@831 -- # '[' -z 216437 ']' 00:15:29.972 15:11:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:29.972 15:11:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:29.973 15:11:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:29.973 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:29.973 15:11:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:29.973 15:11:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:29.973 [2024-07-25 15:11:21.057052] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:15:29.973 [2024-07-25 15:11:21.057103] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:29.973 EAL: No free 2048 kB hugepages reported on node 1 00:15:29.973 [2024-07-25 15:11:21.122857] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:29.973 [2024-07-25 15:11:21.188477] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:29.973 [2024-07-25 15:11:21.188517] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:29.973 [2024-07-25 15:11:21.188524] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:29.973 [2024-07-25 15:11:21.188531] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:29.973 [2024-07-25 15:11:21.188536] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:29.973 [2024-07-25 15:11:21.188677] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:29.973 [2024-07-25 15:11:21.188810] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:29.973 [2024-07-25 15:11:21.188972] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:29.973 [2024-07-25 15:11:21.188973] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:29.973 15:11:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:29.973 15:11:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # return 0 00:15:29.973 15:11:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:29.973 15:11:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:29.973 15:11:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:29.973 15:11:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:29.973 15:11:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:29.973 15:11:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.973 15:11:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:29.973 [2024-07-25 15:11:21.874256] tcp.c: 677:nvmf_tcp_create: 
*NOTICE*: *** TCP Transport Init *** 00:15:29.973 15:11:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.973 15:11:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:29.973 15:11:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.973 15:11:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:29.973 Malloc0 00:15:29.973 15:11:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.973 15:11:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:15:29.973 15:11:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.973 15:11:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:29.973 Malloc1 00:15:29.973 15:11:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.973 15:11:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:15:29.973 15:11:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.973 15:11:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:29.973 15:11:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.973 15:11:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:29.973 15:11:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.973 15:11:21 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:29.973 15:11:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.973 15:11:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:29.973 15:11:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.973 15:11:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:29.973 15:11:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.973 15:11:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:29.973 15:11:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.973 15:11:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:29.973 [2024-07-25 15:11:21.964044] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:29.973 15:11:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.973 15:11:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:29.973 15:11:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.973 15:11:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:29.973 15:11:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.973 15:11:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 4420 00:15:29.973 00:15:29.973 Discovery Log Number of Records 2, Generation counter 2 00:15:29.973 =====Discovery Log Entry 0====== 00:15:29.973 trtype: tcp 00:15:29.973 adrfam: ipv4 00:15:29.973 subtype: current discovery subsystem 00:15:29.973 treq: not required 00:15:29.973 portid: 0 00:15:29.973 trsvcid: 4420 00:15:29.973 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:15:29.973 traddr: 10.0.0.2 00:15:29.973 eflags: explicit discovery connections, duplicate discovery information 00:15:29.973 sectype: none 00:15:29.973 =====Discovery Log Entry 1====== 00:15:29.973 trtype: tcp 00:15:29.973 adrfam: ipv4 00:15:29.973 subtype: nvme subsystem 00:15:29.973 treq: not required 00:15:29.973 portid: 0 00:15:29.973 trsvcid: 4420 00:15:29.973 subnqn: nqn.2016-06.io.spdk:cnode1 00:15:29.973 traddr: 10.0.0.2 00:15:29.973 eflags: none 00:15:29.973 sectype: none 00:15:29.973 15:11:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:15:29.973 15:11:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:15:29.973 15:11:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:15:30.235 15:11:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:30.235 15:11:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:15:30.235 15:11:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:15:30.235 15:11:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:30.235 15:11:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:15:30.235 15:11:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 
00:15:30.235 15:11:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:15:30.235 15:11:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:31.620 15:11:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:15:31.620 15:11:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:15:31.620 15:11:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:31.620 15:11:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:15:31.620 15:11:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:15:31.620 15:11:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:15:33.535 15:11:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:33.535 15:11:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:33.535 15:11:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:33.535 15:11:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:15:33.535 15:11:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:33.535 15:11:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:15:33.535 15:11:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 
00:15:33.535 15:11:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:15:33.535 15:11:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:33.535 15:11:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:15:33.795 15:11:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:15:33.795 15:11:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:33.795 15:11:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:15:33.795 15:11:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:33.795 15:11:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:15:33.795 15:11:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:15:33.795 15:11:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:33.795 15:11:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:15:33.795 15:11:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:15:33.795 15:11:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:33.795 15:11:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:15:33.795 /dev/nvme0n1 ]] 00:15:33.795 15:11:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:15:33.795 15:11:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:15:33.795 15:11:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:15:33.795 15:11:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev 
_ 00:15:33.795 15:11:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:15:34.055 15:11:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:15:34.055 15:11:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:34.055 15:11:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:15:34.055 15:11:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:34.055 15:11:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:15:34.055 15:11:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:15:34.055 15:11:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:34.055 15:11:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:15:34.055 15:11:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:15:34.055 15:11:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:34.055 15:11:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:15:34.055 15:11:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:34.316 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:34.316 15:11:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:34.316 15:11:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:15:34.316 15:11:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:34.316 15:11:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:34.316 15:11:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:34.316 15:11:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:34.316 15:11:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:15:34.316 15:11:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:15:34.316 15:11:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:34.316 15:11:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.316 15:11:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:34.316 15:11:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.316 15:11:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:15:34.316 15:11:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:15:34.317 15:11:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:34.317 15:11:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:15:34.317 15:11:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:34.317 15:11:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:15:34.317 15:11:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:34.317 15:11:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:34.317 rmmod nvme_tcp 00:15:34.317 rmmod nvme_fabrics 00:15:34.317 rmmod 
nvme_keyring 00:15:34.317 15:11:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:34.317 15:11:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:15:34.317 15:11:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:15:34.317 15:11:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 216437 ']' 00:15:34.317 15:11:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 216437 00:15:34.317 15:11:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@950 -- # '[' -z 216437 ']' 00:15:34.317 15:11:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # kill -0 216437 00:15:34.317 15:11:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # uname 00:15:34.317 15:11:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:34.317 15:11:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 216437 00:15:34.317 15:11:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:34.317 15:11:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:34.317 15:11:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@968 -- # echo 'killing process with pid 216437' 00:15:34.317 killing process with pid 216437 00:15:34.317 15:11:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@969 -- # kill 216437 00:15:34.317 15:11:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@974 -- # wait 216437 00:15:34.578 15:11:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:34.578 15:11:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@495 -- 
# [[ tcp == \t\c\p ]] 00:15:34.578 15:11:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:34.578 15:11:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:34.578 15:11:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:34.578 15:11:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:34.578 15:11:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:34.578 15:11:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:37.125 15:11:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:37.125 00:15:37.125 real 0m15.174s 00:15:37.125 user 0m23.764s 00:15:37.125 sys 0m6.024s 00:15:37.125 15:11:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:37.125 15:11:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:37.125 ************************************ 00:15:37.125 END TEST nvmf_nvme_cli 00:15:37.125 ************************************ 00:15:37.125 15:11:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:15:37.125 15:11:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:15:37.125 15:11:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:37.125 15:11:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:37.125 15:11:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:37.125 ************************************ 00:15:37.125 
START TEST nvmf_vfio_user 00:15:37.125 ************************************ 00:15:37.125 15:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:15:37.125 * Looking for test storage... 00:15:37.125 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:37.125 15:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:37.126 15:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:15:37.126 15:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:37.126 15:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:37.126 15:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:37.126 15:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:37.126 15:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:37.126 15:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:37.126 15:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:37.126 15:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:37.126 15:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:37.126 15:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:37.126 15:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:37.126 15:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:37.126 15:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:37.126 15:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:37.126 15:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:37.126 15:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:37.126 15:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:37.126 15:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:37.126 15:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:37.126 15:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:37.126 15:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:37.126 15:11:28 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:37.126 15:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:37.126 15:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:15:37.126 15:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:37.126 15:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:15:37.126 15:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:37.126 15:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:37.126 15:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:37.126 15:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:37.126 15:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:37.126 15:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:37.126 15:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:37.126 15:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:37.126 15:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:37.126 15:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:37.126 15:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:15:37.126 15:11:28 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:37.126 15:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:37.126 15:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:37.126 15:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:15:37.126 15:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:15:37.126 15:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:15:37.126 15:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:15:37.126 15:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=218212 00:15:37.126 15:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 218212' 00:15:37.126 Process pid: 218212 00:15:37.126 15:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:37.126 15:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 218212 00:15:37.126 15:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 218212 ']' 00:15:37.126 15:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:37.126 15:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:15:37.126 15:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:15:37.126 15:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:37.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:37.126 15:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:37.126 15:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:37.126 [2024-07-25 15:11:28.974476] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:15:37.126 [2024-07-25 15:11:28.974550] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:37.126 EAL: No free 2048 kB hugepages reported on node 1 00:15:37.126 [2024-07-25 15:11:29.036158] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:37.126 [2024-07-25 15:11:29.101536] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:37.126 [2024-07-25 15:11:29.101574] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:37.126 [2024-07-25 15:11:29.101582] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:37.126 [2024-07-25 15:11:29.101589] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:37.126 [2024-07-25 15:11:29.101594] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:37.126 [2024-07-25 15:11:29.101734] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:37.126 [2024-07-25 15:11:29.101859] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:37.126 [2024-07-25 15:11:29.102013] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:37.126 [2024-07-25 15:11:29.102015] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:37.698 15:11:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:37.698 15:11:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:15:37.698 15:11:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:38.675 15:11:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:15:38.965 15:11:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:38.965 15:11:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:38.965 15:11:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:38.965 15:11:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:38.965 15:11:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:38.965 Malloc1 00:15:38.965 15:11:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:39.226 15:11:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:39.488 15:11:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:15:39.488 15:11:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:39.488 15:11:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:39.488 15:11:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:39.749 Malloc2 00:15:39.749 15:11:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:15:40.011 15:11:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:40.011 15:11:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:40.273 15:11:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:15:40.273 15:11:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:15:40.273 15:11:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in 
$(seq 1 $NUM_DEVICES) 00:15:40.273 15:11:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:40.274 15:11:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:15:40.274 15:11:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:40.274 [2024-07-25 15:11:32.340157] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:15:40.274 [2024-07-25 15:11:32.340198] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid218880 ] 00:15:40.274 EAL: No free 2048 kB hugepages reported on node 1 00:15:40.274 [2024-07-25 15:11:32.372858] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:15:40.274 [2024-07-25 15:11:32.377468] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:40.274 [2024-07-25 15:11:32.377489] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fb79f3bf000 00:15:40.274 [2024-07-25 15:11:32.378472] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:40.274 [2024-07-25 15:11:32.379460] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:40.274 [2024-07-25 
15:11:32.380474] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:40.274 [2024-07-25 15:11:32.381475] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:40.274 [2024-07-25 15:11:32.382482] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:40.274 [2024-07-25 15:11:32.383484] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:40.274 [2024-07-25 15:11:32.384495] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:40.274 [2024-07-25 15:11:32.385504] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:40.274 [2024-07-25 15:11:32.386507] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:40.274 [2024-07-25 15:11:32.386517] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fb79f3b4000 00:15:40.274 [2024-07-25 15:11:32.387844] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:40.274 [2024-07-25 15:11:32.408377] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:15:40.274 [2024-07-25 15:11:32.408401] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:15:40.274 [2024-07-25 15:11:32.410668] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 
0x0, value 0x201e0100ff 00:15:40.274 [2024-07-25 15:11:32.410722] nvme_pcie_common.c: 133:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:40.274 [2024-07-25 15:11:32.410816] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:15:40.274 [2024-07-25 15:11:32.410833] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:15:40.274 [2024-07-25 15:11:32.410839] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:15:40.274 [2024-07-25 15:11:32.411664] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:15:40.274 [2024-07-25 15:11:32.411676] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:15:40.274 [2024-07-25 15:11:32.411683] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:15:40.274 [2024-07-25 15:11:32.415208] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:15:40.274 [2024-07-25 15:11:32.415218] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:15:40.274 [2024-07-25 15:11:32.415225] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:15:40.274 [2024-07-25 15:11:32.415688] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:15:40.274 [2024-07-25 15:11:32.415696] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:40.274 [2024-07-25 15:11:32.416696] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:15:40.274 [2024-07-25 15:11:32.416705] nvme_ctrlr.c:3873:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:15:40.274 [2024-07-25 15:11:32.416710] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:15:40.274 [2024-07-25 15:11:32.416716] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:40.274 [2024-07-25 15:11:32.416821] nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:15:40.274 [2024-07-25 15:11:32.416826] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:40.274 [2024-07-25 15:11:32.416831] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:15:40.274 [2024-07-25 15:11:32.417709] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:15:40.274 [2024-07-25 15:11:32.418707] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:15:40.274 [2024-07-25 15:11:32.419723] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:15:40.274 
[2024-07-25 15:11:32.420726] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:40.274 [2024-07-25 15:11:32.420788] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:40.274 [2024-07-25 15:11:32.421737] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:15:40.274 [2024-07-25 15:11:32.421744] nvme_ctrlr.c:3908:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:40.274 [2024-07-25 15:11:32.421749] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:15:40.274 [2024-07-25 15:11:32.421770] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:15:40.274 [2024-07-25 15:11:32.421777] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:15:40.274 [2024-07-25 15:11:32.421792] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:40.274 [2024-07-25 15:11:32.421797] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:40.274 [2024-07-25 15:11:32.421801] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:40.274 [2024-07-25 15:11:32.421814] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:40.274 [2024-07-25 15:11:32.421854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 
cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:40.274 [2024-07-25 15:11:32.421864] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:15:40.274 [2024-07-25 15:11:32.421869] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:15:40.274 [2024-07-25 15:11:32.421873] nvme_ctrlr.c:2064:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:15:40.274 [2024-07-25 15:11:32.421878] nvme_ctrlr.c:2075:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:40.274 [2024-07-25 15:11:32.421883] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:15:40.274 [2024-07-25 15:11:32.421887] nvme_ctrlr.c:2103:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:15:40.274 [2024-07-25 15:11:32.421892] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:15:40.274 [2024-07-25 15:11:32.421900] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:15:40.274 [2024-07-25 15:11:32.421912] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:40.274 [2024-07-25 15:11:32.421927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:40.274 [2024-07-25 15:11:32.421943] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:40.275 [2024-07-25 15:11:32.421952] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:40.275 [2024-07-25 15:11:32.421960] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:40.275 [2024-07-25 15:11:32.421968] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:40.275 [2024-07-25 15:11:32.421973] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:15:40.275 [2024-07-25 15:11:32.421981] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:40.275 [2024-07-25 15:11:32.421992] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:40.275 [2024-07-25 15:11:32.422002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:40.275 [2024-07-25 15:11:32.422007] nvme_ctrlr.c:3014:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:15:40.275 [2024-07-25 15:11:32.422012] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:40.275 [2024-07-25 15:11:32.422020] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:15:40.275 [2024-07-25 15:11:32.422027] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:15:40.275 [2024-07-25 15:11:32.422035] 
nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:40.275 [2024-07-25 15:11:32.422042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:40.275 [2024-07-25 15:11:32.422104] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:15:40.275 [2024-07-25 15:11:32.422112] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:15:40.275 [2024-07-25 15:11:32.422119] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:40.275 [2024-07-25 15:11:32.422124] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:40.275 [2024-07-25 15:11:32.422127] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:40.275 [2024-07-25 15:11:32.422133] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:40.275 [2024-07-25 15:11:32.422145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:40.275 [2024-07-25 15:11:32.422154] nvme_ctrlr.c:4697:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:15:40.275 [2024-07-25 15:11:32.422163] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:15:40.275 [2024-07-25 15:11:32.422171] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:15:40.275 [2024-07-25 
15:11:32.422178] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:40.275 [2024-07-25 15:11:32.422182] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:40.275 [2024-07-25 15:11:32.422185] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:40.275 [2024-07-25 15:11:32.422191] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:40.275 [2024-07-25 15:11:32.422211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:40.275 [2024-07-25 15:11:32.422224] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:40.275 [2024-07-25 15:11:32.422232] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:40.275 [2024-07-25 15:11:32.422240] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:40.275 [2024-07-25 15:11:32.422245] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:40.275 [2024-07-25 15:11:32.422248] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:40.275 [2024-07-25 15:11:32.422254] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:40.275 [2024-07-25 15:11:32.422263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:40.275 [2024-07-25 15:11:32.422272] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:40.275 [2024-07-25 15:11:32.422278] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:15:40.275 [2024-07-25 15:11:32.422285] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:15:40.275 [2024-07-25 15:11:32.422292] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:15:40.275 [2024-07-25 15:11:32.422297] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:40.275 [2024-07-25 15:11:32.422303] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:15:40.275 [2024-07-25 15:11:32.422308] nvme_ctrlr.c:3114:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:15:40.275 [2024-07-25 15:11:32.422312] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:15:40.275 [2024-07-25 15:11:32.422317] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:15:40.275 [2024-07-25 15:11:32.422335] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:40.275 [2024-07-25 15:11:32.422344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 
00:15:40.275 [2024-07-25 15:11:32.422355] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:40.275 [2024-07-25 15:11:32.422362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:40.275 [2024-07-25 15:11:32.422373] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:40.275 [2024-07-25 15:11:32.422380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:15:40.275 [2024-07-25 15:11:32.422391] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:40.275 [2024-07-25 15:11:32.422398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:40.275 [2024-07-25 15:11:32.422410] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:40.275 [2024-07-25 15:11:32.422414] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:40.275 [2024-07-25 15:11:32.422418] nvme_pcie_common.c:1239:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:40.275 [2024-07-25 15:11:32.422422] nvme_pcie_common.c:1255:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:40.275 [2024-07-25 15:11:32.422425] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:15:40.275 [2024-07-25 15:11:32.422433] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:40.275 [2024-07-25 15:11:32.422440] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 
virt_addr:0x2000002fc000 len:512 00:15:40.275 [2024-07-25 15:11:32.422445] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:40.275 [2024-07-25 15:11:32.422448] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:40.275 [2024-07-25 15:11:32.422454] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:40.275 [2024-07-25 15:11:32.422461] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:40.275 [2024-07-25 15:11:32.422465] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:40.275 [2024-07-25 15:11:32.422469] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:40.275 [2024-07-25 15:11:32.422474] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:40.275 [2024-07-25 15:11:32.422482] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:40.275 [2024-07-25 15:11:32.422486] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:40.275 [2024-07-25 15:11:32.422489] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:40.275 [2024-07-25 15:11:32.422495] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:40.275 [2024-07-25 15:11:32.422502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:40.275 [2024-07-25 15:11:32.422513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:15:40.275 [2024-07-25 15:11:32.422525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:40.275 [2024-07-25 15:11:32.422532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:40.275 ===================================================== 00:15:40.275 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:40.275 ===================================================== 00:15:40.275 Controller Capabilities/Features 00:15:40.275 ================================ 00:15:40.275 Vendor ID: 4e58 00:15:40.275 Subsystem Vendor ID: 4e58 00:15:40.275 Serial Number: SPDK1 00:15:40.276 Model Number: SPDK bdev Controller 00:15:40.276 Firmware Version: 24.09 00:15:40.276 Recommended Arb Burst: 6 00:15:40.276 IEEE OUI Identifier: 8d 6b 50 00:15:40.276 Multi-path I/O 00:15:40.276 May have multiple subsystem ports: Yes 00:15:40.276 May have multiple controllers: Yes 00:15:40.276 Associated with SR-IOV VF: No 00:15:40.276 Max Data Transfer Size: 131072 00:15:40.276 Max Number of Namespaces: 32 00:15:40.276 Max Number of I/O Queues: 127 00:15:40.276 NVMe Specification Version (VS): 1.3 00:15:40.276 NVMe Specification Version (Identify): 1.3 00:15:40.276 Maximum Queue Entries: 256 00:15:40.276 Contiguous Queues Required: Yes 00:15:40.276 Arbitration Mechanisms Supported 00:15:40.276 Weighted Round Robin: Not Supported 00:15:40.276 Vendor Specific: Not Supported 00:15:40.276 Reset Timeout: 15000 ms 00:15:40.276 Doorbell Stride: 4 bytes 00:15:40.276 NVM Subsystem Reset: Not Supported 00:15:40.276 Command Sets Supported 00:15:40.276 NVM Command Set: Supported 00:15:40.276 Boot Partition: Not Supported 00:15:40.276 Memory Page Size Minimum: 4096 bytes 00:15:40.276 Memory Page Size Maximum: 4096 bytes 00:15:40.276 Persistent Memory Region: Not 
Supported 00:15:40.276 Optional Asynchronous Events Supported 00:15:40.276 Namespace Attribute Notices: Supported 00:15:40.276 Firmware Activation Notices: Not Supported 00:15:40.276 ANA Change Notices: Not Supported 00:15:40.276 PLE Aggregate Log Change Notices: Not Supported 00:15:40.276 LBA Status Info Alert Notices: Not Supported 00:15:40.276 EGE Aggregate Log Change Notices: Not Supported 00:15:40.276 Normal NVM Subsystem Shutdown event: Not Supported 00:15:40.276 Zone Descriptor Change Notices: Not Supported 00:15:40.276 Discovery Log Change Notices: Not Supported 00:15:40.276 Controller Attributes 00:15:40.276 128-bit Host Identifier: Supported 00:15:40.276 Non-Operational Permissive Mode: Not Supported 00:15:40.276 NVM Sets: Not Supported 00:15:40.276 Read Recovery Levels: Not Supported 00:15:40.276 Endurance Groups: Not Supported 00:15:40.276 Predictable Latency Mode: Not Supported 00:15:40.276 Traffic Based Keep ALive: Not Supported 00:15:40.276 Namespace Granularity: Not Supported 00:15:40.276 SQ Associations: Not Supported 00:15:40.276 UUID List: Not Supported 00:15:40.276 Multi-Domain Subsystem: Not Supported 00:15:40.276 Fixed Capacity Management: Not Supported 00:15:40.276 Variable Capacity Management: Not Supported 00:15:40.276 Delete Endurance Group: Not Supported 00:15:40.276 Delete NVM Set: Not Supported 00:15:40.276 Extended LBA Formats Supported: Not Supported 00:15:40.276 Flexible Data Placement Supported: Not Supported 00:15:40.276 00:15:40.276 Controller Memory Buffer Support 00:15:40.276 ================================ 00:15:40.276 Supported: No 00:15:40.276 00:15:40.276 Persistent Memory Region Support 00:15:40.276 ================================ 00:15:40.276 Supported: No 00:15:40.276 00:15:40.276 Admin Command Set Attributes 00:15:40.276 ============================ 00:15:40.276 Security Send/Receive: Not Supported 00:15:40.276 Format NVM: Not Supported 00:15:40.276 Firmware Activate/Download: Not Supported 00:15:40.276 Namespace 
Management: Not Supported 00:15:40.276 Device Self-Test: Not Supported 00:15:40.276 Directives: Not Supported 00:15:40.276 NVMe-MI: Not Supported 00:15:40.276 Virtualization Management: Not Supported 00:15:40.276 Doorbell Buffer Config: Not Supported 00:15:40.276 Get LBA Status Capability: Not Supported 00:15:40.276 Command & Feature Lockdown Capability: Not Supported 00:15:40.276 Abort Command Limit: 4 00:15:40.276 Async Event Request Limit: 4 00:15:40.276 Number of Firmware Slots: N/A 00:15:40.276 Firmware Slot 1 Read-Only: N/A 00:15:40.276 Firmware Activation Without Reset: N/A 00:15:40.276 Multiple Update Detection Support: N/A 00:15:40.276 Firmware Update Granularity: No Information Provided 00:15:40.276 Per-Namespace SMART Log: No 00:15:40.276 Asymmetric Namespace Access Log Page: Not Supported 00:15:40.276 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:15:40.276 Command Effects Log Page: Supported 00:15:40.276 Get Log Page Extended Data: Supported 00:15:40.276 Telemetry Log Pages: Not Supported 00:15:40.276 Persistent Event Log Pages: Not Supported 00:15:40.276 Supported Log Pages Log Page: May Support 00:15:40.276 Commands Supported & Effects Log Page: Not Supported 00:15:40.276 Feature Identifiers & Effects Log Page:May Support 00:15:40.276 NVMe-MI Commands & Effects Log Page: May Support 00:15:40.276 Data Area 4 for Telemetry Log: Not Supported 00:15:40.276 Error Log Page Entries Supported: 128 00:15:40.276 Keep Alive: Supported 00:15:40.276 Keep Alive Granularity: 10000 ms 00:15:40.276 00:15:40.276 NVM Command Set Attributes 00:15:40.276 ========================== 00:15:40.276 Submission Queue Entry Size 00:15:40.276 Max: 64 00:15:40.276 Min: 64 00:15:40.276 Completion Queue Entry Size 00:15:40.276 Max: 16 00:15:40.276 Min: 16 00:15:40.276 Number of Namespaces: 32 00:15:40.276 Compare Command: Supported 00:15:40.276 Write Uncorrectable Command: Not Supported 00:15:40.276 Dataset Management Command: Supported 00:15:40.276 Write Zeroes Command: Supported 
00:15:40.276 Set Features Save Field: Not Supported 00:15:40.276 Reservations: Not Supported 00:15:40.276 Timestamp: Not Supported 00:15:40.276 Copy: Supported 00:15:40.276 Volatile Write Cache: Present 00:15:40.276 Atomic Write Unit (Normal): 1 00:15:40.276 Atomic Write Unit (PFail): 1 00:15:40.276 Atomic Compare & Write Unit: 1 00:15:40.276 Fused Compare & Write: Supported 00:15:40.276 Scatter-Gather List 00:15:40.276 SGL Command Set: Supported (Dword aligned) 00:15:40.276 SGL Keyed: Not Supported 00:15:40.276 SGL Bit Bucket Descriptor: Not Supported 00:15:40.276 SGL Metadata Pointer: Not Supported 00:15:40.276 Oversized SGL: Not Supported 00:15:40.276 SGL Metadata Address: Not Supported 00:15:40.276 SGL Offset: Not Supported 00:15:40.276 Transport SGL Data Block: Not Supported 00:15:40.276 Replay Protected Memory Block: Not Supported 00:15:40.276 00:15:40.276 Firmware Slot Information 00:15:40.276 ========================= 00:15:40.276 Active slot: 1 00:15:40.276 Slot 1 Firmware Revision: 24.09 00:15:40.276 00:15:40.276 00:15:40.276 Commands Supported and Effects 00:15:40.276 ============================== 00:15:40.276 Admin Commands 00:15:40.276 -------------- 00:15:40.276 Get Log Page (02h): Supported 00:15:40.276 Identify (06h): Supported 00:15:40.276 Abort (08h): Supported 00:15:40.276 Set Features (09h): Supported 00:15:40.276 Get Features (0Ah): Supported 00:15:40.276 Asynchronous Event Request (0Ch): Supported 00:15:40.276 Keep Alive (18h): Supported 00:15:40.276 I/O Commands 00:15:40.276 ------------ 00:15:40.276 Flush (00h): Supported LBA-Change 00:15:40.276 Write (01h): Supported LBA-Change 00:15:40.276 Read (02h): Supported 00:15:40.276 Compare (05h): Supported 00:15:40.276 Write Zeroes (08h): Supported LBA-Change 00:15:40.276 Dataset Management (09h): Supported LBA-Change 00:15:40.276 Copy (19h): Supported LBA-Change 00:15:40.276 00:15:40.276 Error Log 00:15:40.276 ========= 00:15:40.276 00:15:40.276 Arbitration 00:15:40.277 =========== 00:15:40.277 
Arbitration Burst: 1 00:15:40.277 00:15:40.277 Power Management 00:15:40.277 ================ 00:15:40.277 Number of Power States: 1 00:15:40.277 Current Power State: Power State #0 00:15:40.277 Power State #0: 00:15:40.277 Max Power: 0.00 W 00:15:40.277 Non-Operational State: Operational 00:15:40.277 Entry Latency: Not Reported 00:15:40.277 Exit Latency: Not Reported 00:15:40.277 Relative Read Throughput: 0 00:15:40.277 Relative Read Latency: 0 00:15:40.277 Relative Write Throughput: 0 00:15:40.277 Relative Write Latency: 0 00:15:40.277 Idle Power: Not Reported 00:15:40.277 Active Power: Not Reported 00:15:40.277 Non-Operational Permissive Mode: Not Supported 00:15:40.277 00:15:40.277 Health Information 00:15:40.277 ================== 00:15:40.277 Critical Warnings: 00:15:40.277 Available Spare Space: OK 00:15:40.277 Temperature: OK 00:15:40.277 Device Reliability: OK 00:15:40.277 Read Only: No 00:15:40.277 Volatile Memory Backup: OK 00:15:40.277 Current Temperature: 0 Kelvin (-273 Celsius) 00:15:40.277 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:40.277 Available Spare: 0% 00:15:40.277 Available Sp[2024-07-25 15:11:32.422629] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:40.277 [2024-07-25 15:11:32.422638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:40.277 [2024-07-25 15:11:32.422664] nvme_ctrlr.c:4361:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:15:40.277 [2024-07-25 15:11:32.422673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:40.277 [2024-07-25 15:11:32.422680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:40.277 [2024-07-25 15:11:32.422686] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:40.277 [2024-07-25 15:11:32.422692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:40.277 [2024-07-25 15:11:32.422740] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:15:40.277 [2024-07-25 15:11:32.422749] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:15:40.277 [2024-07-25 15:11:32.423740] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:40.277 [2024-07-25 15:11:32.423780] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:15:40.277 [2024-07-25 15:11:32.423788] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:15:40.277 [2024-07-25 15:11:32.424744] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:15:40.277 [2024-07-25 15:11:32.424756] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:15:40.277 [2024-07-25 15:11:32.424815] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:15:40.277 [2024-07-25 15:11:32.428209] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:40.538 are Threshold: 0% 00:15:40.538 Life Percentage Used: 0% 00:15:40.538 Data Units Read: 0 00:15:40.538 Data Units Written: 0 00:15:40.538 Host Read Commands: 0 00:15:40.538 Host Write Commands: 
0 00:15:40.538 Controller Busy Time: 0 minutes 00:15:40.538 Power Cycles: 0 00:15:40.539 Power On Hours: 0 hours 00:15:40.539 Unsafe Shutdowns: 0 00:15:40.539 Unrecoverable Media Errors: 0 00:15:40.539 Lifetime Error Log Entries: 0 00:15:40.539 Warning Temperature Time: 0 minutes 00:15:40.539 Critical Temperature Time: 0 minutes 00:15:40.539 00:15:40.539 Number of Queues 00:15:40.539 ================ 00:15:40.539 Number of I/O Submission Queues: 127 00:15:40.539 Number of I/O Completion Queues: 127 00:15:40.539 00:15:40.539 Active Namespaces 00:15:40.539 ================= 00:15:40.539 Namespace ID:1 00:15:40.539 Error Recovery Timeout: Unlimited 00:15:40.539 Command Set Identifier: NVM (00h) 00:15:40.539 Deallocate: Supported 00:15:40.539 Deallocated/Unwritten Error: Not Supported 00:15:40.539 Deallocated Read Value: Unknown 00:15:40.539 Deallocate in Write Zeroes: Not Supported 00:15:40.539 Deallocated Guard Field: 0xFFFF 00:15:40.539 Flush: Supported 00:15:40.539 Reservation: Supported 00:15:40.539 Namespace Sharing Capabilities: Multiple Controllers 00:15:40.539 Size (in LBAs): 131072 (0GiB) 00:15:40.539 Capacity (in LBAs): 131072 (0GiB) 00:15:40.539 Utilization (in LBAs): 131072 (0GiB) 00:15:40.539 NGUID: 11C26054480C4C25AA22C54D1B1CB937 00:15:40.539 UUID: 11c26054-480c-4c25-aa22-c54d1b1cb937 00:15:40.539 Thin Provisioning: Not Supported 00:15:40.539 Per-NS Atomic Units: Yes 00:15:40.539 Atomic Boundary Size (Normal): 0 00:15:40.539 Atomic Boundary Size (PFail): 0 00:15:40.539 Atomic Boundary Offset: 0 00:15:40.539 Maximum Single Source Range Length: 65535 00:15:40.539 Maximum Copy Length: 65535 00:15:40.539 Maximum Source Range Count: 1 00:15:40.539 NGUID/EUI64 Never Reused: No 00:15:40.539 Namespace Write Protected: No 00:15:40.539 Number of LBA Formats: 1 00:15:40.539 Current LBA Format: LBA Format #00 00:15:40.539 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:40.539 00:15:40.539 15:11:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:15:40.539 EAL: No free 2048 kB hugepages reported on node 1 00:15:40.539 [2024-07-25 15:11:32.613850] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:45.936 Initializing NVMe Controllers 00:15:45.936 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:45.936 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:45.936 Initialization complete. Launching workers. 00:15:45.936 ======================================================== 00:15:45.936 Latency(us) 00:15:45.936 Device Information : IOPS MiB/s Average min max 00:15:45.936 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 40070.87 156.53 3194.02 846.12 6804.52 00:15:45.936 ======================================================== 00:15:45.936 Total : 40070.87 156.53 3194.02 846.12 6804.52 00:15:45.936 00:15:45.936 [2024-07-25 15:11:37.631779] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:45.936 15:11:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:45.936 EAL: No free 2048 kB hugepages reported on node 1 00:15:45.936 [2024-07-25 15:11:37.813639] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:51.233 Initializing NVMe Controllers 00:15:51.233 Attached to NVMe over Fabrics controller at 
/var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:51.233 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:51.233 Initialization complete. Launching workers. 00:15:51.233 ======================================================== 00:15:51.233 Latency(us) 00:15:51.233 Device Information : IOPS MiB/s Average min max 00:15:51.233 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16052.78 62.71 7979.22 5982.13 9981.27 00:15:51.233 ======================================================== 00:15:51.233 Total : 16052.78 62.71 7979.22 5982.13 9981.27 00:15:51.233 00:15:51.233 [2024-07-25 15:11:42.855087] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:51.233 15:11:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:51.233 EAL: No free 2048 kB hugepages reported on node 1 00:15:51.233 [2024-07-25 15:11:43.047929] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:56.520 [2024-07-25 15:11:48.128478] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:56.520 Initializing NVMe Controllers 00:15:56.520 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:56.520 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:56.520 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:15:56.520 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:15:56.520 Associating VFIOUSER 
(/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:15:56.520 Initialization complete. Launching workers. 00:15:56.520 Starting thread on core 2 00:15:56.520 Starting thread on core 3 00:15:56.520 Starting thread on core 1 00:15:56.520 15:11:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:15:56.520 EAL: No free 2048 kB hugepages reported on node 1 00:15:56.520 [2024-07-25 15:11:48.385574] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:59.823 [2024-07-25 15:11:51.444694] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:59.823 Initializing NVMe Controllers 00:15:59.823 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:59.823 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:59.823 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:15:59.823 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:15:59.823 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:15:59.823 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:15:59.823 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:59.823 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:59.823 Initialization complete. Launching workers. 
00:15:59.823 Starting thread on core 1 with urgent priority queue 00:15:59.823 Starting thread on core 2 with urgent priority queue 00:15:59.823 Starting thread on core 3 with urgent priority queue 00:15:59.823 Starting thread on core 0 with urgent priority queue 00:15:59.823 SPDK bdev Controller (SPDK1 ) core 0: 10730.00 IO/s 9.32 secs/100000 ios 00:15:59.823 SPDK bdev Controller (SPDK1 ) core 1: 8515.67 IO/s 11.74 secs/100000 ios 00:15:59.823 SPDK bdev Controller (SPDK1 ) core 2: 10241.33 IO/s 9.76 secs/100000 ios 00:15:59.823 SPDK bdev Controller (SPDK1 ) core 3: 8431.67 IO/s 11.86 secs/100000 ios 00:15:59.823 ======================================================== 00:15:59.823 00:15:59.823 15:11:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:59.823 EAL: No free 2048 kB hugepages reported on node 1 00:15:59.823 [2024-07-25 15:11:51.707377] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:59.823 Initializing NVMe Controllers 00:15:59.823 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:59.823 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:59.823 Namespace ID: 1 size: 0GB 00:15:59.823 Initialization complete. 00:15:59.823 INFO: using host memory buffer for IO 00:15:59.823 Hello world! 
00:15:59.823 [2024-07-25 15:11:51.743602] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:59.823 15:11:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:59.823 EAL: No free 2048 kB hugepages reported on node 1 00:15:59.823 [2024-07-25 15:11:51.997410] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:01.210 Initializing NVMe Controllers 00:16:01.210 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:01.210 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:01.210 Initialization complete. Launching workers. 00:16:01.210 submit (in ns) avg, min, max = 8035.5, 3895.8, 4006849.2 00:16:01.210 complete (in ns) avg, min, max = 20084.8, 2373.3, 4035168.3 00:16:01.210 00:16:01.210 Submit histogram 00:16:01.210 ================ 00:16:01.210 Range in us Cumulative Count 00:16:01.210 3.893 - 3.920: 0.9905% ( 189) 00:16:01.210 3.920 - 3.947: 6.7551% ( 1100) 00:16:01.210 3.947 - 3.973: 16.8117% ( 1919) 00:16:01.210 3.973 - 4.000: 28.5347% ( 2237) 00:16:01.210 4.000 - 4.027: 39.9329% ( 2175) 00:16:01.210 4.027 - 4.053: 52.3897% ( 2377) 00:16:01.210 4.053 - 4.080: 69.1594% ( 3200) 00:16:01.210 4.080 - 4.107: 83.7072% ( 2776) 00:16:01.210 4.107 - 4.133: 93.2397% ( 1819) 00:16:01.210 4.133 - 4.160: 97.6418% ( 840) 00:16:01.210 4.160 - 4.187: 98.9571% ( 251) 00:16:01.210 4.187 - 4.213: 99.3659% ( 78) 00:16:01.210 4.213 - 4.240: 99.4340% ( 13) 00:16:01.210 4.240 - 4.267: 99.4602% ( 5) 00:16:01.210 4.267 - 4.293: 99.4707% ( 2) 00:16:01.210 4.320 - 4.347: 99.4759% ( 1) 00:16:01.210 4.453 - 4.480: 99.4812% ( 1) 00:16:01.210 4.693 - 4.720: 99.4864% ( 1) 00:16:01.210 5.387 - 5.413: 99.4917% ( 1) 00:16:01.210 5.547 - 5.573: 
99.5021% ( 2) 00:16:01.210 5.813 - 5.840: 99.5074% ( 1) 00:16:01.210 6.053 - 6.080: 99.5126% ( 1) 00:16:01.210 6.107 - 6.133: 99.5179% ( 1) 00:16:01.210 6.133 - 6.160: 99.5231% ( 1) 00:16:01.210 6.160 - 6.187: 99.5284% ( 1) 00:16:01.210 6.187 - 6.213: 99.5336% ( 1) 00:16:01.210 6.213 - 6.240: 99.5493% ( 3) 00:16:01.210 6.267 - 6.293: 99.5546% ( 1) 00:16:01.210 6.320 - 6.347: 99.5598% ( 1) 00:16:01.210 6.347 - 6.373: 99.5650% ( 1) 00:16:01.211 6.373 - 6.400: 99.5703% ( 1) 00:16:01.211 6.507 - 6.533: 99.5755% ( 1) 00:16:01.211 6.533 - 6.560: 99.5808% ( 1) 00:16:01.211 6.613 - 6.640: 99.5860% ( 1) 00:16:01.211 6.880 - 6.933: 99.5912% ( 1) 00:16:01.211 7.040 - 7.093: 99.5965% ( 1) 00:16:01.211 7.093 - 7.147: 99.6070% ( 2) 00:16:01.211 7.200 - 7.253: 99.6122% ( 1) 00:16:01.211 7.253 - 7.307: 99.6227% ( 2) 00:16:01.211 7.360 - 7.413: 99.6279% ( 1) 00:16:01.211 7.467 - 7.520: 99.6332% ( 1) 00:16:01.211 7.520 - 7.573: 99.6541% ( 4) 00:16:01.211 7.573 - 7.627: 99.6646% ( 2) 00:16:01.211 7.627 - 7.680: 99.6698% ( 1) 00:16:01.211 7.680 - 7.733: 99.6751% ( 1) 00:16:01.211 7.733 - 7.787: 99.6908% ( 3) 00:16:01.211 7.787 - 7.840: 99.7065% ( 3) 00:16:01.211 7.840 - 7.893: 99.7170% ( 2) 00:16:01.211 7.893 - 7.947: 99.7275% ( 2) 00:16:01.211 7.947 - 8.000: 99.7327% ( 1) 00:16:01.211 8.053 - 8.107: 99.7380% ( 1) 00:16:01.211 8.107 - 8.160: 99.7485% ( 2) 00:16:01.211 8.160 - 8.213: 99.7537% ( 1) 00:16:01.211 8.213 - 8.267: 99.7642% ( 2) 00:16:01.211 8.267 - 8.320: 99.7694% ( 1) 00:16:01.211 8.320 - 8.373: 99.7799% ( 2) 00:16:01.211 8.373 - 8.427: 99.7851% ( 1) 00:16:01.211 8.427 - 8.480: 99.7956% ( 2) 00:16:01.211 8.480 - 8.533: 99.8009% ( 1) 00:16:01.211 8.533 - 8.587: 99.8061% ( 1) 00:16:01.211 8.587 - 8.640: 99.8271% ( 4) 00:16:01.211 8.640 - 8.693: 99.8375% ( 2) 00:16:01.211 8.800 - 8.853: 99.8428% ( 1) 00:16:01.211 8.853 - 8.907: 99.8480% ( 1) 00:16:01.211 8.907 - 8.960: 99.8533% ( 1) 00:16:01.211 8.960 - 9.013: 99.8637% ( 2) 00:16:01.211 9.013 - 9.067: 99.8690% ( 1) 
00:16:01.211 9.067 - 9.120: 99.8742% ( 1) 00:16:01.211 9.280 - 9.333: 99.8795% ( 1) 00:16:01.211 9.493 - 9.547: 99.8952% ( 3) 00:16:01.211 16.213 - 16.320: 99.9004% ( 1) 00:16:01.211 3986.773 - 4014.080: 100.0000% ( 19) 00:16:01.211 00:16:01.211 Complete histogram 00:16:01.211 ================== 00:16:01.211 Range in us Cumulative Count 00:16:01.211 2.373 - 2.387: 0.0105% ( 2) 00:16:01.211 2.387 - 2.400: 0.0576% ( 9) 00:16:01.211 2.400 - 2.413: 1.0586% ( 191) 00:16:01.211 2.413 - 2.427: 1.1634% ( 20) 00:16:01.211 2.427 - 2.440: 1.2944% ( 25) 00:16:01.211 2.440 - [2024-07-25 15:11:53.019061] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:01.211 2.453: 30.3480% ( 5544) 00:16:01.211 2.453 - 2.467: 61.1361% ( 5875) 00:16:01.211 2.467 - 2.480: 70.8940% ( 1862) 00:16:01.211 2.480 - 2.493: 79.6981% ( 1680) 00:16:01.211 2.493 - 2.507: 81.9463% ( 429) 00:16:01.211 2.507 - 2.520: 83.9325% ( 379) 00:16:01.211 2.520 - 2.533: 90.3731% ( 1229) 00:16:01.211 2.533 - 2.547: 95.3778% ( 955) 00:16:01.211 2.547 - 2.560: 97.7046% ( 444) 00:16:01.211 2.560 - 2.573: 98.8314% ( 215) 00:16:01.211 2.573 - 2.587: 99.1982% ( 70) 00:16:01.211 2.587 - 2.600: 99.3030% ( 20) 00:16:01.211 2.600 - 2.613: 99.3240% ( 4) 00:16:01.211 2.627 - 2.640: 99.3292% ( 1) 00:16:01.211 4.480 - 4.507: 99.3345% ( 1) 00:16:01.211 4.507 - 4.533: 99.3397% ( 1) 00:16:01.211 4.587 - 4.613: 99.3449% ( 1) 00:16:01.211 4.720 - 4.747: 99.3502% ( 1) 00:16:01.211 4.747 - 4.773: 99.3816% ( 6) 00:16:01.211 4.853 - 4.880: 99.3869% ( 1) 00:16:01.211 4.933 - 4.960: 99.3921% ( 1) 00:16:01.211 4.987 - 5.013: 99.3973% ( 1) 00:16:01.211 5.040 - 5.067: 99.4026% ( 1) 00:16:01.211 5.067 - 5.093: 99.4078% ( 1) 00:16:01.211 5.200 - 5.227: 99.4131% ( 1) 00:16:01.211 5.227 - 5.253: 99.4288% ( 3) 00:16:01.211 5.360 - 5.387: 99.4340% ( 1) 00:16:01.211 5.520 - 5.547: 99.4393% ( 1) 00:16:01.211 5.760 - 5.787: 99.4445% ( 1) 00:16:01.211 5.813 - 5.840: 99.4497% ( 1) 00:16:01.211 5.867 
- 5.893: 99.4550% ( 1) 00:16:01.211 5.947 - 5.973: 99.4602% ( 1) 00:16:01.211 6.160 - 6.187: 99.4655% ( 1) 00:16:01.211 6.187 - 6.213: 99.4707% ( 1) 00:16:01.211 6.373 - 6.400: 99.4759% ( 1) 00:16:01.211 6.427 - 6.453: 99.4812% ( 1) 00:16:01.211 6.480 - 6.507: 99.4917% ( 2) 00:16:01.211 6.507 - 6.533: 99.4969% ( 1) 00:16:01.211 6.533 - 6.560: 99.5074% ( 2) 00:16:01.211 6.560 - 6.587: 99.5126% ( 1) 00:16:01.211 6.667 - 6.693: 99.5179% ( 1) 00:16:01.211 6.720 - 6.747: 99.5231% ( 1) 00:16:01.211 6.747 - 6.773: 99.5284% ( 1) 00:16:01.211 6.880 - 6.933: 99.5336% ( 1) 00:16:01.211 6.933 - 6.987: 99.5388% ( 1) 00:16:01.211 7.200 - 7.253: 99.5441% ( 1) 00:16:01.211 7.253 - 7.307: 99.5546% ( 2) 00:16:01.211 149.333 - 150.187: 99.5598% ( 1) 00:16:01.211 3986.773 - 4014.080: 99.9843% ( 81) 00:16:01.211 4014.080 - 4041.387: 100.0000% ( 3) 00:16:01.211 00:16:01.211 15:11:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:16:01.211 15:11:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:16:01.211 15:11:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:16:01.211 15:11:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:16:01.211 15:11:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:01.211 [ 00:16:01.211 { 00:16:01.211 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:01.211 "subtype": "Discovery", 00:16:01.211 "listen_addresses": [], 00:16:01.211 "allow_any_host": true, 00:16:01.211 "hosts": [] 00:16:01.211 }, 00:16:01.211 { 00:16:01.211 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:01.211 "subtype": "NVMe", 00:16:01.211 
"listen_addresses": [ 00:16:01.211 { 00:16:01.211 "trtype": "VFIOUSER", 00:16:01.211 "adrfam": "IPv4", 00:16:01.211 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:01.211 "trsvcid": "0" 00:16:01.211 } 00:16:01.211 ], 00:16:01.211 "allow_any_host": true, 00:16:01.211 "hosts": [], 00:16:01.211 "serial_number": "SPDK1", 00:16:01.211 "model_number": "SPDK bdev Controller", 00:16:01.211 "max_namespaces": 32, 00:16:01.211 "min_cntlid": 1, 00:16:01.211 "max_cntlid": 65519, 00:16:01.211 "namespaces": [ 00:16:01.211 { 00:16:01.211 "nsid": 1, 00:16:01.211 "bdev_name": "Malloc1", 00:16:01.211 "name": "Malloc1", 00:16:01.211 "nguid": "11C26054480C4C25AA22C54D1B1CB937", 00:16:01.211 "uuid": "11c26054-480c-4c25-aa22-c54d1b1cb937" 00:16:01.211 } 00:16:01.211 ] 00:16:01.211 }, 00:16:01.211 { 00:16:01.211 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:01.211 "subtype": "NVMe", 00:16:01.211 "listen_addresses": [ 00:16:01.211 { 00:16:01.211 "trtype": "VFIOUSER", 00:16:01.211 "adrfam": "IPv4", 00:16:01.211 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:01.211 "trsvcid": "0" 00:16:01.211 } 00:16:01.211 ], 00:16:01.211 "allow_any_host": true, 00:16:01.211 "hosts": [], 00:16:01.211 "serial_number": "SPDK2", 00:16:01.211 "model_number": "SPDK bdev Controller", 00:16:01.211 "max_namespaces": 32, 00:16:01.211 "min_cntlid": 1, 00:16:01.211 "max_cntlid": 65519, 00:16:01.211 "namespaces": [ 00:16:01.211 { 00:16:01.211 "nsid": 1, 00:16:01.211 "bdev_name": "Malloc2", 00:16:01.211 "name": "Malloc2", 00:16:01.211 "nguid": "C69A51830CD143FF9BBD9AEE27A1B093", 00:16:01.211 "uuid": "c69a5183-0cd1-43ff-9bbd-9aee27a1b093" 00:16:01.211 } 00:16:01.211 ] 00:16:01.211 } 00:16:01.211 ] 00:16:01.211 15:11:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:16:01.211 15:11:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=222965 00:16:01.211 15:11:53 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:16:01.211 15:11:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:16:01.211 15:11:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:16:01.211 15:11:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:01.211 15:11:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:01.211 15:11:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:16:01.211 15:11:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:16:01.212 15:11:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:16:01.212 EAL: No free 2048 kB hugepages reported on node 1 00:16:01.472 Malloc3 00:16:01.472 [2024-07-25 15:11:53.409719] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:01.472 15:11:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:16:01.472 [2024-07-25 15:11:53.579841] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:01.472 15:11:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:01.472 Asynchronous Event Request test 00:16:01.472 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:01.472 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:01.472 Registering asynchronous event callbacks... 00:16:01.472 Starting namespace attribute notice tests for all controllers... 00:16:01.472 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:16:01.472 aer_cb - Changed Namespace 00:16:01.472 Cleaning up... 00:16:01.733 [ 00:16:01.733 { 00:16:01.733 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:01.733 "subtype": "Discovery", 00:16:01.733 "listen_addresses": [], 00:16:01.733 "allow_any_host": true, 00:16:01.733 "hosts": [] 00:16:01.733 }, 00:16:01.733 { 00:16:01.733 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:01.733 "subtype": "NVMe", 00:16:01.733 "listen_addresses": [ 00:16:01.733 { 00:16:01.733 "trtype": "VFIOUSER", 00:16:01.733 "adrfam": "IPv4", 00:16:01.733 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:01.733 "trsvcid": "0" 00:16:01.733 } 00:16:01.733 ], 00:16:01.733 "allow_any_host": true, 00:16:01.733 "hosts": [], 00:16:01.733 "serial_number": "SPDK1", 00:16:01.733 "model_number": "SPDK bdev Controller", 00:16:01.733 "max_namespaces": 32, 00:16:01.733 "min_cntlid": 1, 00:16:01.733 "max_cntlid": 65519, 00:16:01.733 "namespaces": [ 00:16:01.733 { 00:16:01.733 "nsid": 1, 00:16:01.733 "bdev_name": "Malloc1", 00:16:01.733 "name": "Malloc1", 00:16:01.733 "nguid": "11C26054480C4C25AA22C54D1B1CB937", 00:16:01.733 "uuid": "11c26054-480c-4c25-aa22-c54d1b1cb937" 00:16:01.733 }, 00:16:01.733 { 00:16:01.733 "nsid": 2, 00:16:01.733 "bdev_name": "Malloc3", 00:16:01.733 "name": "Malloc3", 00:16:01.733 "nguid": "B899FB9ADABF4666A1320B7283FC970B", 00:16:01.733 "uuid": "b899fb9a-dabf-4666-a132-0b7283fc970b" 00:16:01.733 } 00:16:01.733 ] 00:16:01.733 }, 00:16:01.733 { 00:16:01.733 "nqn": 
"nqn.2019-07.io.spdk:cnode2", 00:16:01.733 "subtype": "NVMe", 00:16:01.734 "listen_addresses": [ 00:16:01.734 { 00:16:01.734 "trtype": "VFIOUSER", 00:16:01.734 "adrfam": "IPv4", 00:16:01.734 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:01.734 "trsvcid": "0" 00:16:01.734 } 00:16:01.734 ], 00:16:01.734 "allow_any_host": true, 00:16:01.734 "hosts": [], 00:16:01.734 "serial_number": "SPDK2", 00:16:01.734 "model_number": "SPDK bdev Controller", 00:16:01.734 "max_namespaces": 32, 00:16:01.734 "min_cntlid": 1, 00:16:01.734 "max_cntlid": 65519, 00:16:01.734 "namespaces": [ 00:16:01.734 { 00:16:01.734 "nsid": 1, 00:16:01.734 "bdev_name": "Malloc2", 00:16:01.734 "name": "Malloc2", 00:16:01.734 "nguid": "C69A51830CD143FF9BBD9AEE27A1B093", 00:16:01.734 "uuid": "c69a5183-0cd1-43ff-9bbd-9aee27a1b093" 00:16:01.734 } 00:16:01.734 ] 00:16:01.734 } 00:16:01.734 ] 00:16:01.734 15:11:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 222965 00:16:01.734 15:11:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:01.734 15:11:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:16:01.734 15:11:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:16:01.734 15:11:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:16:01.734 [2024-07-25 15:11:53.798293] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:16:01.734 [2024-07-25 15:11:53.798336] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid222971 ] 00:16:01.734 EAL: No free 2048 kB hugepages reported on node 1 00:16:01.734 [2024-07-25 15:11:53.831756] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:16:01.734 [2024-07-25 15:11:53.838888] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:01.734 [2024-07-25 15:11:53.838910] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f00a38f4000 00:16:01.734 [2024-07-25 15:11:53.839886] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:01.734 [2024-07-25 15:11:53.840893] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:01.734 [2024-07-25 15:11:53.841896] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:01.734 [2024-07-25 15:11:53.842898] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:01.734 [2024-07-25 15:11:53.843904] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:01.734 [2024-07-25 15:11:53.844914] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:01.734 [2024-07-25 15:11:53.845918] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, 
Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:01.734 [2024-07-25 15:11:53.846927] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:01.734 [2024-07-25 15:11:53.847933] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:01.734 [2024-07-25 15:11:53.847943] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f00a38e9000 00:16:01.734 [2024-07-25 15:11:53.849272] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:01.734 [2024-07-25 15:11:53.868362] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:16:01.734 [2024-07-25 15:11:53.868384] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:16:01.734 [2024-07-25 15:11:53.873457] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:16:01.734 [2024-07-25 15:11:53.873501] nvme_pcie_common.c: 133:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:16:01.734 [2024-07-25 15:11:53.873583] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:16:01.734 [2024-07-25 15:11:53.873595] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:16:01.734 [2024-07-25 15:11:53.873601] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:16:01.734 [2024-07-25 15:11:53.874466] 
nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:16:01.734 [2024-07-25 15:11:53.874479] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:16:01.734 [2024-07-25 15:11:53.874486] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:16:01.734 [2024-07-25 15:11:53.875469] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:16:01.734 [2024-07-25 15:11:53.875478] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:16:01.734 [2024-07-25 15:11:53.875489] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:16:01.734 [2024-07-25 15:11:53.876475] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:16:01.734 [2024-07-25 15:11:53.876484] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:16:01.734 [2024-07-25 15:11:53.877477] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:16:01.734 [2024-07-25 15:11:53.877486] nvme_ctrlr.c:3873:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:16:01.734 [2024-07-25 15:11:53.877491] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:16:01.734 [2024-07-25 15:11:53.877497] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:16:01.734 [2024-07-25 15:11:53.877603] nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:16:01.734 [2024-07-25 15:11:53.877607] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:16:01.734 [2024-07-25 15:11:53.877612] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:16:01.734 [2024-07-25 15:11:53.878482] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:16:01.734 [2024-07-25 15:11:53.879482] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:16:01.734 [2024-07-25 15:11:53.880495] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:16:01.734 [2024-07-25 15:11:53.881498] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:01.734 [2024-07-25 15:11:53.881538] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:16:01.734 [2024-07-25 15:11:53.882510] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:16:01.734 [2024-07-25 15:11:53.882518] nvme_ctrlr.c:3908:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:16:01.734 [2024-07-25 15:11:53.882523] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:16:01.734 [2024-07-25 15:11:53.882544] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:16:01.734 [2024-07-25 15:11:53.882555] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:16:01.734 [2024-07-25 15:11:53.882567] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:01.734 [2024-07-25 15:11:53.882572] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:01.734 [2024-07-25 15:11:53.882576] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:01.734 [2024-07-25 15:11:53.882589] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:01.734 [2024-07-25 15:11:53.890209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:16:01.734 [2024-07-25 15:11:53.890223] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:16:01.734 [2024-07-25 15:11:53.890228] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:16:01.734 [2024-07-25 15:11:53.890233] nvme_ctrlr.c:2064:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:16:01.734 [2024-07-25 15:11:53.890238] nvme_ctrlr.c:2075:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:16:01.735 [2024-07-25 15:11:53.890242] 
nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:16:01.735 [2024-07-25 15:11:53.890247] nvme_ctrlr.c:2103:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:16:01.735 [2024-07-25 15:11:53.890251] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:16:01.735 [2024-07-25 15:11:53.890259] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:16:01.735 [2024-07-25 15:11:53.890271] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:16:01.735 [2024-07-25 15:11:53.898211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:16:01.735 [2024-07-25 15:11:53.898226] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:01.735 [2024-07-25 15:11:53.898235] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:01.735 [2024-07-25 15:11:53.898244] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:01.735 [2024-07-25 15:11:53.898252] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:01.735 [2024-07-25 15:11:53.898257] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:16:01.735 [2024-07-25 15:11:53.898265] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:16:01.735 [2024-07-25 15:11:53.898274] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:16:01.735 [2024-07-25 15:11:53.906205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:16:01.735 [2024-07-25 15:11:53.906213] nvme_ctrlr.c:3014:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:16:01.735 [2024-07-25 15:11:53.906218] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:16:01.735 [2024-07-25 15:11:53.906227] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:16:01.735 [2024-07-25 15:11:53.906232] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:16:01.735 [2024-07-25 15:11:53.906241] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:01.735 [2024-07-25 15:11:53.914208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:16:01.735 [2024-07-25 15:11:53.914273] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:16:01.735 [2024-07-25 15:11:53.914283] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:16:01.735 [2024-07-25 15:11:53.914291] 
nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:16:01.735 [2024-07-25 15:11:53.914295] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:16:01.735 [2024-07-25 15:11:53.914299] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:01.735 [2024-07-25 15:11:53.914305] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:16:01.735 [2024-07-25 15:11:53.922206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:16:01.735 [2024-07-25 15:11:53.922217] nvme_ctrlr.c:4697:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:16:01.735 [2024-07-25 15:11:53.922226] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:16:01.735 [2024-07-25 15:11:53.922234] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:16:01.735 [2024-07-25 15:11:53.922241] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:01.735 [2024-07-25 15:11:53.922245] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:01.735 [2024-07-25 15:11:53.922248] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:01.735 [2024-07-25 15:11:53.922254] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:01.997 [2024-07-25 15:11:53.930207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 
sqhd:000a p:1 m:0 dnr:0 00:16:01.997 [2024-07-25 15:11:53.930221] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:16:01.997 [2024-07-25 15:11:53.930229] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:16:01.997 [2024-07-25 15:11:53.930236] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:01.997 [2024-07-25 15:11:53.930240] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:01.997 [2024-07-25 15:11:53.930244] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:01.997 [2024-07-25 15:11:53.930250] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:01.997 [2024-07-25 15:11:53.938206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:16:01.997 [2024-07-25 15:11:53.938216] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:16:01.997 [2024-07-25 15:11:53.938222] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:16:01.997 [2024-07-25 15:11:53.938232] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:16:01.997 [2024-07-25 15:11:53.938239] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:16:01.997 
[2024-07-25 15:11:53.938244] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:16:01.997 [2024-07-25 15:11:53.938251] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:16:01.997 [2024-07-25 15:11:53.938256] nvme_ctrlr.c:3114:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:16:01.997 [2024-07-25 15:11:53.938261] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:16:01.997 [2024-07-25 15:11:53.938266] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:16:01.997 [2024-07-25 15:11:53.938281] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:16:01.997 [2024-07-25 15:11:53.946206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:16:01.997 [2024-07-25 15:11:53.946219] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:16:01.997 [2024-07-25 15:11:53.954207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:16:01.997 [2024-07-25 15:11:53.954220] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:16:01.997 [2024-07-25 15:11:53.962206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:16:01.997 [2024-07-25 15:11:53.962219] nvme_qpair.c: 213:nvme_admin_qpair_print_command: 
*NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:01.997 [2024-07-25 15:11:53.970207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:16:01.997 [2024-07-25 15:11:53.970223] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:16:01.997 [2024-07-25 15:11:53.970228] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:16:01.997 [2024-07-25 15:11:53.970232] nvme_pcie_common.c:1239:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:16:01.997 [2024-07-25 15:11:53.970235] nvme_pcie_common.c:1255:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:16:01.997 [2024-07-25 15:11:53.970239] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:16:01.997 [2024-07-25 15:11:53.970245] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:16:01.997 [2024-07-25 15:11:53.970252] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:16:01.997 [2024-07-25 15:11:53.970257] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:16:01.997 [2024-07-25 15:11:53.970260] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:01.997 [2024-07-25 15:11:53.970266] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:16:01.997 [2024-07-25 15:11:53.970273] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:16:01.997 [2024-07-25 15:11:53.970278] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 
0x2000002fb000 00:16:01.997 [2024-07-25 15:11:53.970281] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:01.997 [2024-07-25 15:11:53.970287] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:01.997 [2024-07-25 15:11:53.970295] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:16:01.997 [2024-07-25 15:11:53.970299] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:16:01.997 [2024-07-25 15:11:53.970304] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:01.997 [2024-07-25 15:11:53.970310] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:16:01.997 [2024-07-25 15:11:53.978361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:16:01.997 [2024-07-25 15:11:53.978379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:16:01.997 [2024-07-25 15:11:53.978434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:16:01.997 [2024-07-25 15:11:53.978442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:16:01.997 ===================================================== 00:16:01.997 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:01.997 ===================================================== 00:16:01.997 Controller Capabilities/Features 00:16:01.997 ================================ 00:16:01.997 Vendor ID: 4e58 00:16:01.997 
Subsystem Vendor ID: 4e58 00:16:01.997 Serial Number: SPDK2 00:16:01.997 Model Number: SPDK bdev Controller 00:16:01.997 Firmware Version: 24.09 00:16:01.997 Recommended Arb Burst: 6 00:16:01.997 IEEE OUI Identifier: 8d 6b 50 00:16:01.997 Multi-path I/O 00:16:01.997 May have multiple subsystem ports: Yes 00:16:01.997 May have multiple controllers: Yes 00:16:01.997 Associated with SR-IOV VF: No 00:16:01.997 Max Data Transfer Size: 131072 00:16:01.997 Max Number of Namespaces: 32 00:16:01.997 Max Number of I/O Queues: 127 00:16:01.997 NVMe Specification Version (VS): 1.3 00:16:01.997 NVMe Specification Version (Identify): 1.3 00:16:01.997 Maximum Queue Entries: 256 00:16:01.997 Contiguous Queues Required: Yes 00:16:01.997 Arbitration Mechanisms Supported 00:16:01.997 Weighted Round Robin: Not Supported 00:16:01.997 Vendor Specific: Not Supported 00:16:01.997 Reset Timeout: 15000 ms 00:16:01.997 Doorbell Stride: 4 bytes 00:16:01.997 NVM Subsystem Reset: Not Supported 00:16:01.997 Command Sets Supported 00:16:01.997 NVM Command Set: Supported 00:16:01.997 Boot Partition: Not Supported 00:16:01.997 Memory Page Size Minimum: 4096 bytes 00:16:01.997 Memory Page Size Maximum: 4096 bytes 00:16:01.997 Persistent Memory Region: Not Supported 00:16:01.997 Optional Asynchronous Events Supported 00:16:01.997 Namespace Attribute Notices: Supported 00:16:01.997 Firmware Activation Notices: Not Supported 00:16:01.997 ANA Change Notices: Not Supported 00:16:01.997 PLE Aggregate Log Change Notices: Not Supported 00:16:01.997 LBA Status Info Alert Notices: Not Supported 00:16:01.997 EGE Aggregate Log Change Notices: Not Supported 00:16:01.997 Normal NVM Subsystem Shutdown event: Not Supported 00:16:01.997 Zone Descriptor Change Notices: Not Supported 00:16:01.997 Discovery Log Change Notices: Not Supported 00:16:01.997 Controller Attributes 00:16:01.997 128-bit Host Identifier: Supported 00:16:01.997 Non-Operational Permissive Mode: Not Supported 00:16:01.997 NVM Sets: Not Supported 
00:16:01.997 Read Recovery Levels: Not Supported 00:16:01.997 Endurance Groups: Not Supported 00:16:01.997 Predictable Latency Mode: Not Supported 00:16:01.997 Traffic Based Keep ALive: Not Supported 00:16:01.997 Namespace Granularity: Not Supported 00:16:01.997 SQ Associations: Not Supported 00:16:01.997 UUID List: Not Supported 00:16:01.997 Multi-Domain Subsystem: Not Supported 00:16:01.997 Fixed Capacity Management: Not Supported 00:16:01.997 Variable Capacity Management: Not Supported 00:16:01.997 Delete Endurance Group: Not Supported 00:16:01.997 Delete NVM Set: Not Supported 00:16:01.997 Extended LBA Formats Supported: Not Supported 00:16:01.997 Flexible Data Placement Supported: Not Supported 00:16:01.997 00:16:01.997 Controller Memory Buffer Support 00:16:01.998 ================================ 00:16:01.998 Supported: No 00:16:01.998 00:16:01.998 Persistent Memory Region Support 00:16:01.998 ================================ 00:16:01.998 Supported: No 00:16:01.998 00:16:01.998 Admin Command Set Attributes 00:16:01.998 ============================ 00:16:01.998 Security Send/Receive: Not Supported 00:16:01.998 Format NVM: Not Supported 00:16:01.998 Firmware Activate/Download: Not Supported 00:16:01.998 Namespace Management: Not Supported 00:16:01.998 Device Self-Test: Not Supported 00:16:01.998 Directives: Not Supported 00:16:01.998 NVMe-MI: Not Supported 00:16:01.998 Virtualization Management: Not Supported 00:16:01.998 Doorbell Buffer Config: Not Supported 00:16:01.998 Get LBA Status Capability: Not Supported 00:16:01.998 Command & Feature Lockdown Capability: Not Supported 00:16:01.998 Abort Command Limit: 4 00:16:01.998 Async Event Request Limit: 4 00:16:01.998 Number of Firmware Slots: N/A 00:16:01.998 Firmware Slot 1 Read-Only: N/A 00:16:01.998 Firmware Activation Without Reset: N/A 00:16:01.998 Multiple Update Detection Support: N/A 00:16:01.998 Firmware Update Granularity: No Information Provided 00:16:01.998 Per-Namespace SMART Log: No 00:16:01.998 
Asymmetric Namespace Access Log Page: Not Supported 00:16:01.998 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:16:01.998 Command Effects Log Page: Supported 00:16:01.998 Get Log Page Extended Data: Supported 00:16:01.998 Telemetry Log Pages: Not Supported 00:16:01.998 Persistent Event Log Pages: Not Supported 00:16:01.998 Supported Log Pages Log Page: May Support 00:16:01.998 Commands Supported & Effects Log Page: Not Supported 00:16:01.998 Feature Identifiers & Effects Log Page:May Support 00:16:01.998 NVMe-MI Commands & Effects Log Page: May Support 00:16:01.998 Data Area 4 for Telemetry Log: Not Supported 00:16:01.998 Error Log Page Entries Supported: 128 00:16:01.998 Keep Alive: Supported 00:16:01.998 Keep Alive Granularity: 10000 ms 00:16:01.998 00:16:01.998 NVM Command Set Attributes 00:16:01.998 ========================== 00:16:01.998 Submission Queue Entry Size 00:16:01.998 Max: 64 00:16:01.998 Min: 64 00:16:01.998 Completion Queue Entry Size 00:16:01.998 Max: 16 00:16:01.998 Min: 16 00:16:01.998 Number of Namespaces: 32 00:16:01.998 Compare Command: Supported 00:16:01.998 Write Uncorrectable Command: Not Supported 00:16:01.998 Dataset Management Command: Supported 00:16:01.998 Write Zeroes Command: Supported 00:16:01.998 Set Features Save Field: Not Supported 00:16:01.998 Reservations: Not Supported 00:16:01.998 Timestamp: Not Supported 00:16:01.998 Copy: Supported 00:16:01.998 Volatile Write Cache: Present 00:16:01.998 Atomic Write Unit (Normal): 1 00:16:01.998 Atomic Write Unit (PFail): 1 00:16:01.998 Atomic Compare & Write Unit: 1 00:16:01.998 Fused Compare & Write: Supported 00:16:01.998 Scatter-Gather List 00:16:01.998 SGL Command Set: Supported (Dword aligned) 00:16:01.998 SGL Keyed: Not Supported 00:16:01.998 SGL Bit Bucket Descriptor: Not Supported 00:16:01.998 SGL Metadata Pointer: Not Supported 00:16:01.998 Oversized SGL: Not Supported 00:16:01.998 SGL Metadata Address: Not Supported 00:16:01.998 SGL Offset: Not Supported 00:16:01.998 Transport 
SGL Data Block: Not Supported 00:16:01.998 Replay Protected Memory Block: Not Supported 00:16:01.998 00:16:01.998 Firmware Slot Information 00:16:01.998 ========================= 00:16:01.998 Active slot: 1 00:16:01.998 Slot 1 Firmware Revision: 24.09 00:16:01.998 00:16:01.998 00:16:01.998 Commands Supported and Effects 00:16:01.998 ============================== 00:16:01.998 Admin Commands 00:16:01.998 -------------- 00:16:01.998 Get Log Page (02h): Supported 00:16:01.998 Identify (06h): Supported 00:16:01.998 Abort (08h): Supported 00:16:01.998 Set Features (09h): Supported 00:16:01.998 Get Features (0Ah): Supported 00:16:01.998 Asynchronous Event Request (0Ch): Supported 00:16:01.998 Keep Alive (18h): Supported 00:16:01.998 I/O Commands 00:16:01.998 ------------ 00:16:01.998 Flush (00h): Supported LBA-Change 00:16:01.998 Write (01h): Supported LBA-Change 00:16:01.998 Read (02h): Supported 00:16:01.998 Compare (05h): Supported 00:16:01.998 Write Zeroes (08h): Supported LBA-Change 00:16:01.998 Dataset Management (09h): Supported LBA-Change 00:16:01.998 Copy (19h): Supported LBA-Change 00:16:01.998 00:16:01.998 Error Log 00:16:01.998 ========= 00:16:01.998 00:16:01.998 Arbitration 00:16:01.998 =========== 00:16:01.998 Arbitration Burst: 1 00:16:01.998 00:16:01.998 Power Management 00:16:01.998 ================ 00:16:01.998 Number of Power States: 1 00:16:01.998 Current Power State: Power State #0 00:16:01.998 Power State #0: 00:16:01.998 Max Power: 0.00 W 00:16:01.998 Non-Operational State: Operational 00:16:01.998 Entry Latency: Not Reported 00:16:01.998 Exit Latency: Not Reported 00:16:01.998 Relative Read Throughput: 0 00:16:01.998 Relative Read Latency: 0 00:16:01.998 Relative Write Throughput: 0 00:16:01.998 Relative Write Latency: 0 00:16:01.998 Idle Power: Not Reported 00:16:01.998 Active Power: Not Reported 00:16:01.998 Non-Operational Permissive Mode: Not Supported 00:16:01.998 00:16:01.998 Health Information 00:16:01.998 ================== 00:16:01.998 
Critical Warnings: 00:16:01.998 Available Spare Space: OK 00:16:01.998 Temperature: OK 00:16:01.998 Device Reliability: OK 00:16:01.998 Read Only: No 00:16:01.998 Volatile Memory Backup: OK 00:16:01.998 Current Temperature: 0 Kelvin (-273 Celsius) 00:16:01.998 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:16:01.998 Available Spare: 0% 00:16:01.998 Available Sp[2024-07-25 15:11:53.978541] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:16:01.998 [2024-07-25 15:11:53.986208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:16:01.998 [2024-07-25 15:11:53.986238] nvme_ctrlr.c:4361:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:16:01.998 [2024-07-25 15:11:53.986248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.998 [2024-07-25 15:11:53.986254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.998 [2024-07-25 15:11:53.986261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.998 [2024-07-25 15:11:53.986267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.998 [2024-07-25 15:11:53.986310] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:16:01.998 [2024-07-25 15:11:53.986320] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:16:01.998 [2024-07-25 15:11:53.987311] vfio_user.c:2798:disable_ctrlr: *NOTICE*: 
/var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:01.998 [2024-07-25 15:11:53.987358] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:16:01.998 [2024-07-25 15:11:53.987365] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:16:01.998 [2024-07-25 15:11:53.988316] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:16:01.998 [2024-07-25 15:11:53.988328] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:16:01.998 [2024-07-25 15:11:53.988378] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:16:01.998 [2024-07-25 15:11:53.989752] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:01.998 are Threshold: 0% 00:16:01.998 Life Percentage Used: 0% 00:16:01.998 Data Units Read: 0 00:16:01.998 Data Units Written: 0 00:16:01.998 Host Read Commands: 0 00:16:01.998 Host Write Commands: 0 00:16:01.998 Controller Busy Time: 0 minutes 00:16:01.998 Power Cycles: 0 00:16:01.998 Power On Hours: 0 hours 00:16:01.998 Unsafe Shutdowns: 0 00:16:01.998 Unrecoverable Media Errors: 0 00:16:01.998 Lifetime Error Log Entries: 0 00:16:01.998 Warning Temperature Time: 0 minutes 00:16:01.998 Critical Temperature Time: 0 minutes 00:16:01.998 00:16:01.998 Number of Queues 00:16:01.998 ================ 00:16:01.998 Number of I/O Submission Queues: 127 00:16:01.998 Number of I/O Completion Queues: 127 00:16:01.998 00:16:01.998 Active Namespaces 00:16:01.998 ================= 00:16:01.998 Namespace ID:1 00:16:01.998 Error Recovery Timeout: Unlimited 00:16:01.998 Command Set Identifier: NVM (00h) 00:16:01.998 Deallocate: 
Supported 00:16:01.998 Deallocated/Unwritten Error: Not Supported 00:16:01.999 Deallocated Read Value: Unknown 00:16:01.999 Deallocate in Write Zeroes: Not Supported 00:16:01.999 Deallocated Guard Field: 0xFFFF 00:16:01.999 Flush: Supported 00:16:01.999 Reservation: Supported 00:16:01.999 Namespace Sharing Capabilities: Multiple Controllers 00:16:01.999 Size (in LBAs): 131072 (0GiB) 00:16:01.999 Capacity (in LBAs): 131072 (0GiB) 00:16:01.999 Utilization (in LBAs): 131072 (0GiB) 00:16:01.999 NGUID: C69A51830CD143FF9BBD9AEE27A1B093 00:16:01.999 UUID: c69a5183-0cd1-43ff-9bbd-9aee27a1b093 00:16:01.999 Thin Provisioning: Not Supported 00:16:01.999 Per-NS Atomic Units: Yes 00:16:01.999 Atomic Boundary Size (Normal): 0 00:16:01.999 Atomic Boundary Size (PFail): 0 00:16:01.999 Atomic Boundary Offset: 0 00:16:01.999 Maximum Single Source Range Length: 65535 00:16:01.999 Maximum Copy Length: 65535 00:16:01.999 Maximum Source Range Count: 1 00:16:01.999 NGUID/EUI64 Never Reused: No 00:16:01.999 Namespace Write Protected: No 00:16:01.999 Number of LBA Formats: 1 00:16:01.999 Current LBA Format: LBA Format #00 00:16:01.999 LBA Format #00: Data Size: 512 Metadata Size: 0 00:16:01.999 00:16:01.999 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:16:01.999 EAL: No free 2048 kB hugepages reported on node 1 00:16:01.999 [2024-07-25 15:11:54.174235] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:07.293 Initializing NVMe Controllers 00:16:07.293 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:07.293 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:16:07.293 
Initialization complete. Launching workers. 00:16:07.293 ======================================================== 00:16:07.293 Latency(us) 00:16:07.293 Device Information : IOPS MiB/s Average min max 00:16:07.293 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39983.40 156.19 3203.71 840.15 6811.24 00:16:07.293 ======================================================== 00:16:07.293 Total : 39983.40 156.19 3203.71 840.15 6811.24 00:16:07.293 00:16:07.293 [2024-07-25 15:11:59.281391] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:07.293 15:11:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:16:07.293 EAL: No free 2048 kB hugepages reported on node 1 00:16:07.293 [2024-07-25 15:11:59.460959] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:12.585 Initializing NVMe Controllers 00:16:12.585 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:12.585 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:16:12.585 Initialization complete. Launching workers. 
00:16:12.585 ======================================================== 00:16:12.585 Latency(us) 00:16:12.585 Device Information : IOPS MiB/s Average min max 00:16:12.585 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 32935.37 128.65 3885.82 1103.84 9851.83 00:16:12.585 ======================================================== 00:16:12.585 Total : 32935.37 128.65 3885.82 1103.84 9851.83 00:16:12.585 00:16:12.585 [2024-07-25 15:12:04.477209] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:12.585 15:12:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:16:12.585 EAL: No free 2048 kB hugepages reported on node 1 00:16:12.585 [2024-07-25 15:12:04.666623] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:17.880 [2024-07-25 15:12:09.810287] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:17.880 Initializing NVMe Controllers 00:16:17.880 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:17.880 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:17.880 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:16:17.880 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:16:17.880 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:16:17.880 Initialization complete. Launching workers. 
00:16:17.880 Starting thread on core 2 00:16:17.880 Starting thread on core 3 00:16:17.880 Starting thread on core 1 00:16:17.880 15:12:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:16:17.880 EAL: No free 2048 kB hugepages reported on node 1 00:16:17.880 [2024-07-25 15:12:10.065698] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:21.182 [2024-07-25 15:12:13.118625] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:21.182 Initializing NVMe Controllers 00:16:21.182 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:21.182 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:21.182 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:16:21.182 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:16:21.182 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:16:21.182 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:16:21.182 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:16:21.182 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:16:21.182 Initialization complete. Launching workers. 
00:16:21.182 Starting thread on core 1 with urgent priority queue 00:16:21.182 Starting thread on core 2 with urgent priority queue 00:16:21.182 Starting thread on core 3 with urgent priority queue 00:16:21.182 Starting thread on core 0 with urgent priority queue 00:16:21.182 SPDK bdev Controller (SPDK2 ) core 0: 14776.67 IO/s 6.77 secs/100000 ios 00:16:21.182 SPDK bdev Controller (SPDK2 ) core 1: 7583.33 IO/s 13.19 secs/100000 ios 00:16:21.182 SPDK bdev Controller (SPDK2 ) core 2: 8865.00 IO/s 11.28 secs/100000 ios 00:16:21.182 SPDK bdev Controller (SPDK2 ) core 3: 12067.00 IO/s 8.29 secs/100000 ios 00:16:21.182 ======================================================== 00:16:21.182 00:16:21.182 15:12:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:16:21.182 EAL: No free 2048 kB hugepages reported on node 1 00:16:21.443 [2024-07-25 15:12:13.381588] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:21.443 Initializing NVMe Controllers 00:16:21.443 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:21.443 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:21.443 Namespace ID: 1 size: 0GB 00:16:21.443 Initialization complete. 00:16:21.443 INFO: using host memory buffer for IO 00:16:21.443 Hello world! 
00:16:21.443 [2024-07-25 15:12:13.391644] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:21.443 15:12:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:16:21.443 EAL: No free 2048 kB hugepages reported on node 1 00:16:21.703 [2024-07-25 15:12:13.650125] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:22.646 Initializing NVMe Controllers 00:16:22.646 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:22.646 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:22.646 Initialization complete. Launching workers. 00:16:22.646 submit (in ns) avg, min, max = 7364.8, 3899.2, 4001075.8 00:16:22.646 complete (in ns) avg, min, max = 19051.1, 2367.5, 4995455.0 00:16:22.646 00:16:22.646 Submit histogram 00:16:22.646 ================ 00:16:22.646 Range in us Cumulative Count 00:16:22.646 3.893 - 3.920: 1.2935% ( 250) 00:16:22.646 3.920 - 3.947: 7.5435% ( 1208) 00:16:22.646 3.947 - 3.973: 17.6325% ( 1950) 00:16:22.646 3.973 - 4.000: 29.0046% ( 2198) 00:16:22.646 4.000 - 4.027: 39.3264% ( 1995) 00:16:22.646 4.027 - 4.053: 49.9379% ( 2051) 00:16:22.646 4.053 - 4.080: 65.9458% ( 3094) 00:16:22.646 4.080 - 4.107: 81.3483% ( 2977) 00:16:22.646 4.107 - 4.133: 92.6014% ( 2175) 00:16:22.646 4.133 - 4.160: 97.5631% ( 959) 00:16:22.646 4.160 - 4.187: 99.0584% ( 289) 00:16:22.646 4.187 - 4.213: 99.4464% ( 75) 00:16:22.646 4.213 - 4.240: 99.4930% ( 9) 00:16:22.646 4.640 - 4.667: 99.4981% ( 1) 00:16:22.646 4.773 - 4.800: 99.5033% ( 1) 00:16:22.646 4.880 - 4.907: 99.5085% ( 1) 00:16:22.646 4.933 - 4.960: 99.5137% ( 1) 00:16:22.646 5.067 - 5.093: 99.5188% ( 1) 00:16:22.646 5.520 - 5.547: 99.5240% ( 1) 00:16:22.646 5.547 - 5.573: 
99.5292% ( 1) 00:16:22.646 5.573 - 5.600: 99.5344% ( 1) 00:16:22.646 5.680 - 5.707: 99.5395% ( 1) 00:16:22.646 5.760 - 5.787: 99.5447% ( 1) 00:16:22.646 5.893 - 5.920: 99.5499% ( 1) 00:16:22.646 5.920 - 5.947: 99.5550% ( 1) 00:16:22.646 6.053 - 6.080: 99.5654% ( 2) 00:16:22.646 6.080 - 6.107: 99.5757% ( 2) 00:16:22.646 6.133 - 6.160: 99.5809% ( 1) 00:16:22.646 6.160 - 6.187: 99.5913% ( 2) 00:16:22.646 6.267 - 6.293: 99.5964% ( 1) 00:16:22.646 6.320 - 6.347: 99.6016% ( 1) 00:16:22.646 6.347 - 6.373: 99.6171% ( 3) 00:16:22.646 6.427 - 6.453: 99.6275% ( 2) 00:16:22.646 6.453 - 6.480: 99.6378% ( 2) 00:16:22.646 6.480 - 6.507: 99.6482% ( 2) 00:16:22.646 6.507 - 6.533: 99.6534% ( 1) 00:16:22.646 6.587 - 6.613: 99.6689% ( 3) 00:16:22.646 6.613 - 6.640: 99.6740% ( 1) 00:16:22.646 6.640 - 6.667: 99.6792% ( 1) 00:16:22.646 6.667 - 6.693: 99.6896% ( 2) 00:16:22.646 6.720 - 6.747: 99.6947% ( 1) 00:16:22.646 6.773 - 6.800: 99.6999% ( 1) 00:16:22.646 6.827 - 6.880: 99.7103% ( 2) 00:16:22.646 6.933 - 6.987: 99.7206% ( 2) 00:16:22.646 6.987 - 7.040: 99.7258% ( 1) 00:16:22.646 7.093 - 7.147: 99.7310% ( 1) 00:16:22.646 7.147 - 7.200: 99.7361% ( 1) 00:16:22.646 7.200 - 7.253: 99.7413% ( 1) 00:16:22.646 7.307 - 7.360: 99.7568% ( 3) 00:16:22.646 7.360 - 7.413: 99.7620% ( 1) 00:16:22.646 7.467 - 7.520: 99.7724% ( 2) 00:16:22.646 7.627 - 7.680: 99.7775% ( 1) 00:16:22.646 7.680 - 7.733: 99.7827% ( 1) 00:16:22.646 7.733 - 7.787: 99.7879% ( 1) 00:16:22.646 7.787 - 7.840: 99.7930% ( 1) 00:16:22.646 7.840 - 7.893: 99.7982% ( 1) 00:16:22.646 7.947 - 8.000: 99.8086% ( 2) 00:16:22.646 8.000 - 8.053: 99.8137% ( 1) 00:16:22.646 8.107 - 8.160: 99.8189% ( 1) 00:16:22.646 8.213 - 8.267: 99.8241% ( 1) 00:16:22.646 8.320 - 8.373: 99.8293% ( 1) 00:16:22.646 8.427 - 8.480: 99.8344% ( 1) 00:16:22.646 8.480 - 8.533: 99.8396% ( 1) 00:16:22.646 8.533 - 8.587: 99.8448% ( 1) 00:16:22.646 8.587 - 8.640: 99.8500% ( 1) 00:16:22.646 8.640 - 8.693: 99.8551% ( 1) 00:16:22.646 8.800 - 8.853: 99.8603% ( 1) 
00:16:22.646 8.907 - 8.960: 99.8655% ( 1) 00:16:22.646 9.067 - 9.120: 99.8707% ( 1) 00:16:22.646 9.120 - 9.173: 99.8758% ( 1) 00:16:22.646 9.173 - 9.227: 99.8810% ( 1) 00:16:22.646 9.333 - 9.387: 99.8862% ( 1) 00:16:22.646 9.440 - 9.493: 99.8913% ( 1) 00:16:22.646 10.347 - 10.400: 99.8965% ( 1) 00:16:22.646 11.520 - 11.573: 99.9017% ( 1) 00:16:22.646 11.733 - 11.787: 99.9069% ( 1) 00:16:22.646 11.787 - 11.840: 99.9120% ( 1) 00:16:22.646 13.493 - 13.547: 99.9172% ( 1) 00:16:22.646 3986.773 - 4014.080: 100.0000% ( 16) 00:16:22.646 00:16:22.646 [2024-07-25 15:12:14.746908] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:22.646 Complete histogram 00:16:22.646 ================== 00:16:22.646 Range in us Cumulative Count 00:16:22.646 2.360 - 2.373: 0.0052% ( 1) 00:16:22.646 2.373 - 2.387: 0.0828% ( 15) 00:16:22.646 2.387 - 2.400: 0.9416% ( 166) 00:16:22.646 2.400 - 2.413: 1.0451% ( 20) 00:16:22.646 2.413 - 2.427: 5.2721% ( 817) 00:16:22.646 2.427 - 2.440: 50.8433% ( 8808) 00:16:22.646 2.440 - 2.453: 57.4607% ( 1279) 00:16:22.646 2.453 - 2.467: 76.3142% ( 3644) 00:16:22.646 2.467 - 2.480: 80.4325% ( 796) 00:16:22.646 2.480 - 2.493: 82.4762% ( 395) 00:16:22.646 2.493 - 2.507: 86.5221% ( 782) 00:16:22.646 2.507 - 2.520: 92.2185% ( 1101) 00:16:22.646 2.520 - 2.533: 95.7212% ( 677) 00:16:22.646 2.533 - 2.547: 97.9615% ( 433) 00:16:22.646 2.547 - 2.560: 99.0170% ( 204) 00:16:22.646 2.560 - 2.573: 99.2705% ( 49) 00:16:22.646 2.573 - 2.587: 99.3015% ( 6) 00:16:22.646 2.600 - 2.613: 99.3067% ( 1) 00:16:22.646 2.627 - 2.640: 99.3119% ( 1) 00:16:22.646 4.453 - 4.480: 99.3171% ( 1) 00:16:22.646 4.507 - 4.533: 99.3222% ( 1) 00:16:22.646 4.533 - 4.560: 99.3326% ( 2) 00:16:22.646 4.640 - 4.667: 99.3377% ( 1) 00:16:22.646 4.667 - 4.693: 99.3429% ( 1) 00:16:22.646 4.693 - 4.720: 99.3533% ( 2) 00:16:22.646 4.747 - 4.773: 99.3584% ( 1) 00:16:22.646 4.800 - 4.827: 99.3636% ( 1) 00:16:22.646 4.827 - 4.853: 99.3688% ( 1) 
00:16:22.646 4.853 - 4.880: 99.3740% ( 1) 00:16:22.646 4.880 - 4.907: 99.3791% ( 1) 00:16:22.646 4.987 - 5.013: 99.3895% ( 2) 00:16:22.646 5.013 - 5.040: 99.3947% ( 1) 00:16:22.646 5.040 - 5.067: 99.3998% ( 1) 00:16:22.646 5.120 - 5.147: 99.4050% ( 1) 00:16:22.646 5.173 - 5.200: 99.4102% ( 1) 00:16:22.646 5.227 - 5.253: 99.4154% ( 1) 00:16:22.646 5.280 - 5.307: 99.4205% ( 1) 00:16:22.646 5.360 - 5.387: 99.4257% ( 1) 00:16:22.646 5.493 - 5.520: 99.4309% ( 1) 00:16:22.646 5.520 - 5.547: 99.4361% ( 1) 00:16:22.646 5.707 - 5.733: 99.4412% ( 1) 00:16:22.646 5.733 - 5.760: 99.4464% ( 1) 00:16:22.646 5.760 - 5.787: 99.4567% ( 2) 00:16:22.646 5.787 - 5.813: 99.4671% ( 2) 00:16:22.646 5.813 - 5.840: 99.4723% ( 1) 00:16:22.646 5.920 - 5.947: 99.4774% ( 1) 00:16:22.646 5.947 - 5.973: 99.4826% ( 1) 00:16:22.646 6.000 - 6.027: 99.4878% ( 1) 00:16:22.646 6.080 - 6.107: 99.4981% ( 2) 00:16:22.646 6.107 - 6.133: 99.5033% ( 1) 00:16:22.646 6.400 - 6.427: 99.5085% ( 1) 00:16:22.646 6.507 - 6.533: 99.5137% ( 1) 00:16:22.646 6.533 - 6.560: 99.5188% ( 1) 00:16:22.646 6.587 - 6.613: 99.5240% ( 1) 00:16:22.646 6.987 - 7.040: 99.5395% ( 3) 00:16:22.646 7.093 - 7.147: 99.5447% ( 1) 00:16:22.646 7.200 - 7.253: 99.5499% ( 1) 00:16:22.646 7.307 - 7.360: 99.5602% ( 2) 00:16:22.646 7.573 - 7.627: 99.5654% ( 1) 00:16:22.646 7.680 - 7.733: 99.5706% ( 1) 00:16:22.646 7.787 - 7.840: 99.5757% ( 1) 00:16:22.646 8.320 - 8.373: 99.5809% ( 1) 00:16:22.646 16.107 - 16.213: 99.5861% ( 1) 00:16:22.646 3986.773 - 4014.080: 99.9948% ( 79) 00:16:22.646 4969.813 - 4997.120: 100.0000% ( 1) 00:16:22.646 00:16:22.646 15:12:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:16:22.647 15:12:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:16:22.647 15:12:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:16:22.647 15:12:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:16:22.647 15:12:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:22.908 [ 00:16:22.908 { 00:16:22.908 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:22.908 "subtype": "Discovery", 00:16:22.908 "listen_addresses": [], 00:16:22.908 "allow_any_host": true, 00:16:22.908 "hosts": [] 00:16:22.908 }, 00:16:22.908 { 00:16:22.908 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:22.908 "subtype": "NVMe", 00:16:22.908 "listen_addresses": [ 00:16:22.908 { 00:16:22.908 "trtype": "VFIOUSER", 00:16:22.908 "adrfam": "IPv4", 00:16:22.908 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:22.908 "trsvcid": "0" 00:16:22.908 } 00:16:22.908 ], 00:16:22.908 "allow_any_host": true, 00:16:22.908 "hosts": [], 00:16:22.908 "serial_number": "SPDK1", 00:16:22.908 "model_number": "SPDK bdev Controller", 00:16:22.908 "max_namespaces": 32, 00:16:22.908 "min_cntlid": 1, 00:16:22.908 "max_cntlid": 65519, 00:16:22.908 "namespaces": [ 00:16:22.908 { 00:16:22.908 "nsid": 1, 00:16:22.908 "bdev_name": "Malloc1", 00:16:22.908 "name": "Malloc1", 00:16:22.908 "nguid": "11C26054480C4C25AA22C54D1B1CB937", 00:16:22.908 "uuid": "11c26054-480c-4c25-aa22-c54d1b1cb937" 00:16:22.908 }, 00:16:22.908 { 00:16:22.908 "nsid": 2, 00:16:22.908 "bdev_name": "Malloc3", 00:16:22.908 "name": "Malloc3", 00:16:22.908 "nguid": "B899FB9ADABF4666A1320B7283FC970B", 00:16:22.908 "uuid": "b899fb9a-dabf-4666-a132-0b7283fc970b" 00:16:22.908 } 00:16:22.908 ] 00:16:22.908 }, 00:16:22.908 { 00:16:22.908 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:22.908 "subtype": "NVMe", 00:16:22.908 "listen_addresses": [ 00:16:22.908 { 00:16:22.908 "trtype": "VFIOUSER", 00:16:22.908 "adrfam": "IPv4", 00:16:22.908 "traddr": 
"/var/run/vfio-user/domain/vfio-user2/2", 00:16:22.908 "trsvcid": "0" 00:16:22.908 } 00:16:22.908 ], 00:16:22.908 "allow_any_host": true, 00:16:22.908 "hosts": [], 00:16:22.908 "serial_number": "SPDK2", 00:16:22.908 "model_number": "SPDK bdev Controller", 00:16:22.908 "max_namespaces": 32, 00:16:22.908 "min_cntlid": 1, 00:16:22.908 "max_cntlid": 65519, 00:16:22.908 "namespaces": [ 00:16:22.908 { 00:16:22.908 "nsid": 1, 00:16:22.908 "bdev_name": "Malloc2", 00:16:22.908 "name": "Malloc2", 00:16:22.908 "nguid": "C69A51830CD143FF9BBD9AEE27A1B093", 00:16:22.908 "uuid": "c69a5183-0cd1-43ff-9bbd-9aee27a1b093" 00:16:22.908 } 00:16:22.908 ] 00:16:22.908 } 00:16:22.908 ] 00:16:22.908 15:12:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:16:22.908 15:12:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=227593 00:16:22.908 15:12:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:16:22.908 15:12:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:16:22.908 15:12:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:16:22.908 15:12:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:22.908 15:12:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:16:22.908 15:12:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:16:22.908 15:12:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:16:22.908 15:12:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:16:22.908 EAL: No free 2048 kB hugepages reported on node 1 00:16:23.170 Malloc4 00:16:23.170 [2024-07-25 15:12:15.129617] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:23.170 15:12:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:16:23.170 [2024-07-25 15:12:15.299761] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:23.170 15:12:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:23.170 Asynchronous Event Request test 00:16:23.170 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:23.170 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:23.170 Registering asynchronous event callbacks... 00:16:23.170 Starting namespace attribute notice tests for all controllers... 00:16:23.170 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:16:23.170 aer_cb - Changed Namespace 00:16:23.170 Cleaning up... 
00:16:23.431 [ 00:16:23.431 { 00:16:23.431 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:23.431 "subtype": "Discovery", 00:16:23.431 "listen_addresses": [], 00:16:23.431 "allow_any_host": true, 00:16:23.431 "hosts": [] 00:16:23.431 }, 00:16:23.431 { 00:16:23.431 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:23.431 "subtype": "NVMe", 00:16:23.431 "listen_addresses": [ 00:16:23.431 { 00:16:23.431 "trtype": "VFIOUSER", 00:16:23.431 "adrfam": "IPv4", 00:16:23.431 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:23.431 "trsvcid": "0" 00:16:23.431 } 00:16:23.431 ], 00:16:23.431 "allow_any_host": true, 00:16:23.431 "hosts": [], 00:16:23.431 "serial_number": "SPDK1", 00:16:23.431 "model_number": "SPDK bdev Controller", 00:16:23.431 "max_namespaces": 32, 00:16:23.431 "min_cntlid": 1, 00:16:23.431 "max_cntlid": 65519, 00:16:23.431 "namespaces": [ 00:16:23.431 { 00:16:23.431 "nsid": 1, 00:16:23.431 "bdev_name": "Malloc1", 00:16:23.431 "name": "Malloc1", 00:16:23.431 "nguid": "11C26054480C4C25AA22C54D1B1CB937", 00:16:23.431 "uuid": "11c26054-480c-4c25-aa22-c54d1b1cb937" 00:16:23.431 }, 00:16:23.431 { 00:16:23.431 "nsid": 2, 00:16:23.431 "bdev_name": "Malloc3", 00:16:23.431 "name": "Malloc3", 00:16:23.431 "nguid": "B899FB9ADABF4666A1320B7283FC970B", 00:16:23.431 "uuid": "b899fb9a-dabf-4666-a132-0b7283fc970b" 00:16:23.431 } 00:16:23.431 ] 00:16:23.431 }, 00:16:23.431 { 00:16:23.431 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:23.431 "subtype": "NVMe", 00:16:23.431 "listen_addresses": [ 00:16:23.431 { 00:16:23.431 "trtype": "VFIOUSER", 00:16:23.431 "adrfam": "IPv4", 00:16:23.431 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:23.431 "trsvcid": "0" 00:16:23.431 } 00:16:23.431 ], 00:16:23.431 "allow_any_host": true, 00:16:23.431 "hosts": [], 00:16:23.431 "serial_number": "SPDK2", 00:16:23.431 "model_number": "SPDK bdev Controller", 00:16:23.431 "max_namespaces": 32, 00:16:23.431 "min_cntlid": 1, 00:16:23.431 "max_cntlid": 65519, 00:16:23.431 "namespaces": [ 
00:16:23.431 { 00:16:23.431 "nsid": 1, 00:16:23.431 "bdev_name": "Malloc2", 00:16:23.431 "name": "Malloc2", 00:16:23.431 "nguid": "C69A51830CD143FF9BBD9AEE27A1B093", 00:16:23.431 "uuid": "c69a5183-0cd1-43ff-9bbd-9aee27a1b093" 00:16:23.431 }, 00:16:23.431 { 00:16:23.431 "nsid": 2, 00:16:23.431 "bdev_name": "Malloc4", 00:16:23.431 "name": "Malloc4", 00:16:23.431 "nguid": "574F9E78436C443BBF2E3B730BC10FD6", 00:16:23.431 "uuid": "574f9e78-436c-443b-bf2e-3b730bc10fd6" 00:16:23.431 } 00:16:23.431 ] 00:16:23.431 } 00:16:23.431 ] 00:16:23.431 15:12:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 227593 00:16:23.431 15:12:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:16:23.431 15:12:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 218212 00:16:23.431 15:12:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@950 -- # '[' -z 218212 ']' 00:16:23.431 15:12:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 218212 00:16:23.431 15:12:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:16:23.431 15:12:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:23.431 15:12:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 218212 00:16:23.431 15:12:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:23.431 15:12:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:23.431 15:12:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 218212' 00:16:23.431 killing process with pid 218212 00:16:23.431 15:12:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@969 -- # kill 218212 00:16:23.431 15:12:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 218212 00:16:23.692 15:12:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:16:23.692 15:12:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:16:23.692 15:12:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:16:23.692 15:12:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:16:23.692 15:12:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:16:23.692 15:12:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=227905 00:16:23.692 15:12:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 227905' 00:16:23.692 Process pid: 227905 00:16:23.692 15:12:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:23.692 15:12:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:16:23.692 15:12:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 227905 00:16:23.693 15:12:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 227905 ']' 00:16:23.693 15:12:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:23.693 15:12:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:23.693 15:12:15 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:23.693 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:23.693 15:12:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:23.693 15:12:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:16:23.693 [2024-07-25 15:12:15.785439] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:16:23.693 [2024-07-25 15:12:15.786366] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:16:23.693 [2024-07-25 15:12:15.786413] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:23.693 EAL: No free 2048 kB hugepages reported on node 1 00:16:23.693 [2024-07-25 15:12:15.846568] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:23.953 [2024-07-25 15:12:15.912282] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:23.953 [2024-07-25 15:12:15.912319] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:23.953 [2024-07-25 15:12:15.912326] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:23.953 [2024-07-25 15:12:15.912332] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:23.953 [2024-07-25 15:12:15.912338] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:23.953 [2024-07-25 15:12:15.912473] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:23.953 [2024-07-25 15:12:15.912589] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:23.953 [2024-07-25 15:12:15.912745] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:23.953 [2024-07-25 15:12:15.912745] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:23.954 [2024-07-25 15:12:15.973797] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:16:23.954 [2024-07-25 15:12:15.973808] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:16:23.954 [2024-07-25 15:12:15.974847] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:16:23.954 [2024-07-25 15:12:15.975435] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:16:23.954 [2024-07-25 15:12:15.975510] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:16:24.525 15:12:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:24.525 15:12:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:16:24.525 15:12:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:16:25.468 15:12:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:16:25.728 15:12:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:16:25.728 15:12:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:16:25.728 15:12:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:25.728 15:12:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:16:25.728 15:12:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:16:25.728 Malloc1 00:16:25.989 15:12:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:16:25.989 15:12:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:16:26.250 15:12:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 
-s 0 00:16:26.250 15:12:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:26.250 15:12:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:16:26.250 15:12:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:16:26.511 Malloc2 00:16:26.511 15:12:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:16:26.772 15:12:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:16:26.772 15:12:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:16:27.033 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:16:27.033 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 227905 00:16:27.033 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@950 -- # '[' -z 227905 ']' 00:16:27.033 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 227905 00:16:27.033 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:16:27.033 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:27.033 15:12:19 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 227905 00:16:27.033 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:27.033 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:27.033 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 227905' 00:16:27.033 killing process with pid 227905 00:16:27.033 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@969 -- # kill 227905 00:16:27.033 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 227905 00:16:27.295 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:16:27.295 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:16:27.295 00:16:27.295 real 0m50.567s 00:16:27.295 user 3m20.469s 00:16:27.295 sys 0m2.977s 00:16:27.295 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:27.295 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:16:27.295 ************************************ 00:16:27.295 END TEST nvmf_vfio_user 00:16:27.295 ************************************ 00:16:27.295 15:12:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:16:27.295 15:12:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:27.295 15:12:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:27.295 15:12:19 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:16:27.295 ************************************ 00:16:27.295 START TEST nvmf_vfio_user_nvme_compliance 00:16:27.295 ************************************ 00:16:27.295 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:16:27.557 * Looking for test storage... 00:16:27.557 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:16:27.557 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:27.557 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:16:27.557 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:27.557 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:27.557 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:27.557 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:27.557 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:27.557 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:27.557 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:27.557 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:27.557 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 
-- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:27.557 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:27.557 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:27.557 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:27.557 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:27.557 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:27.557 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:27.557 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:27.557 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:27.557 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:27.557 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:27.557 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:27.557 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:27.557 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:27.557 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:27.557 15:12:19 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:16:27.557 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:27.557 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:16:27.557 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:27.557 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:27.557 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:27.557 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:27.557 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:27.557 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:27.557 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:27.557 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:27.557 15:12:19 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:27.557 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:27.557 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:16:27.557 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:16:27.557 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:16:27.557 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:16:27.557 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=228650 00:16:27.557 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 228650' 00:16:27.557 Process pid: 228650 00:16:27.557 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:27.557 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 228650 00:16:27.557 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@831 -- # '[' -z 228650 ']' 00:16:27.557 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:27.557 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:27.557 15:12:19 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:27.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:27.557 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:27.557 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:27.557 [2024-07-25 15:12:19.575669] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:16:27.557 [2024-07-25 15:12:19.575728] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:27.557 EAL: No free 2048 kB hugepages reported on node 1 00:16:27.557 [2024-07-25 15:12:19.636792] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:27.557 [2024-07-25 15:12:19.702497] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:27.557 [2024-07-25 15:12:19.702531] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:27.557 [2024-07-25 15:12:19.702538] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:27.557 [2024-07-25 15:12:19.702545] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:27.557 [2024-07-25 15:12:19.702550] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:27.557 [2024-07-25 15:12:19.702793] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:27.557 [2024-07-25 15:12:19.702927] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:27.557 [2024-07-25 15:12:19.702930] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:28.225 15:12:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:28.225 15:12:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # return 0 00:16:28.225 15:12:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:16:29.612 15:12:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:16:29.612 15:12:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:16:29.612 15:12:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:16:29.612 15:12:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.612 15:12:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:29.612 15:12:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.613 15:12:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:16:29.613 15:12:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:16:29.613 15:12:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.613 15:12:21 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:29.613 malloc0 00:16:29.613 15:12:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.613 15:12:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:16:29.613 15:12:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.613 15:12:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:29.613 15:12:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.613 15:12:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:16:29.613 15:12:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.613 15:12:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:29.613 15:12:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.613 15:12:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:16:29.613 15:12:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.613 15:12:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:29.613 15:12:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:16:29.613 15:12:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:16:29.613 EAL: No free 2048 kB hugepages reported on node 1 00:16:29.613 00:16:29.613 00:16:29.613 CUnit - A unit testing framework for C - Version 2.1-3 00:16:29.613 http://cunit.sourceforge.net/ 00:16:29.613 00:16:29.613 00:16:29.613 Suite: nvme_compliance 00:16:29.613 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-25 15:12:21.619629] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:29.613 [2024-07-25 15:12:21.620984] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:16:29.613 [2024-07-25 15:12:21.620995] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:16:29.613 [2024-07-25 15:12:21.620999] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:16:29.613 [2024-07-25 15:12:21.622643] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:29.613 passed 00:16:29.613 Test: admin_identify_ctrlr_verify_fused ...[2024-07-25 15:12:21.718251] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:29.613 [2024-07-25 15:12:21.721270] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:29.613 passed 00:16:29.874 Test: admin_identify_ns ...[2024-07-25 15:12:21.816461] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:29.874 [2024-07-25 15:12:21.880211] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:16:29.874 [2024-07-25 15:12:21.888214] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:16:29.874 [2024-07-25 
15:12:21.909326] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:29.874 passed 00:16:29.875 Test: admin_get_features_mandatory_features ...[2024-07-25 15:12:22.001941] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:29.875 [2024-07-25 15:12:22.004967] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:29.875 passed 00:16:30.136 Test: admin_get_features_optional_features ...[2024-07-25 15:12:22.098501] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:30.136 [2024-07-25 15:12:22.101527] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:30.136 passed 00:16:30.136 Test: admin_set_features_number_of_queues ...[2024-07-25 15:12:22.193432] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:30.136 [2024-07-25 15:12:22.298313] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:30.397 passed 00:16:30.397 Test: admin_get_log_page_mandatory_logs ...[2024-07-25 15:12:22.392347] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:30.397 [2024-07-25 15:12:22.395370] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:30.397 passed 00:16:30.397 Test: admin_get_log_page_with_lpo ...[2024-07-25 15:12:22.488447] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:30.398 [2024-07-25 15:12:22.556213] ctrlr.c:2688:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:16:30.398 [2024-07-25 15:12:22.569252] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:30.659 passed 00:16:30.659 Test: fabric_property_get ...[2024-07-25 15:12:22.663285] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:30.659 [2024-07-25 15:12:22.664531] 
vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:16:30.659 [2024-07-25 15:12:22.666309] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:30.659 passed 00:16:30.659 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-25 15:12:22.759892] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:30.659 [2024-07-25 15:12:22.761136] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:16:30.659 [2024-07-25 15:12:22.762914] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:30.659 passed 00:16:30.920 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-25 15:12:22.857458] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:30.920 [2024-07-25 15:12:22.941208] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:30.920 [2024-07-25 15:12:22.957209] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:30.920 [2024-07-25 15:12:22.962297] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:30.920 passed 00:16:30.920 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-25 15:12:23.055422] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:30.920 [2024-07-25 15:12:23.056665] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:16:30.920 [2024-07-25 15:12:23.058436] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:30.920 passed 00:16:31.181 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-25 15:12:23.151554] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:31.181 [2024-07-25 15:12:23.227206] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be 
deleted first 00:16:31.181 [2024-07-25 15:12:23.251206] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:31.181 [2024-07-25 15:12:23.256294] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:31.181 passed 00:16:31.181 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-25 15:12:23.347894] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:31.181 [2024-07-25 15:12:23.349133] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:16:31.181 [2024-07-25 15:12:23.349153] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:16:31.181 [2024-07-25 15:12:23.350917] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:31.441 passed 00:16:31.441 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-25 15:12:23.441988] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:31.441 [2024-07-25 15:12:23.537210] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:16:31.441 [2024-07-25 15:12:23.545216] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:16:31.442 [2024-07-25 15:12:23.553217] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:16:31.442 [2024-07-25 15:12:23.561211] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:16:31.442 [2024-07-25 15:12:23.590288] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:31.442 passed 00:16:31.702 Test: admin_create_io_sq_verify_pc ...[2024-07-25 15:12:23.681866] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:31.702 [2024-07-25 15:12:23.698216] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:16:31.702 
[2024-07-25 15:12:23.716030] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:31.702 passed 00:16:31.702 Test: admin_create_io_qp_max_qps ...[2024-07-25 15:12:23.809592] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:33.086 [2024-07-25 15:12:24.921210] nvme_ctrlr.c:5469:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:16:33.348 [2024-07-25 15:12:25.325976] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:33.348 passed 00:16:33.348 Test: admin_create_io_sq_shared_cq ...[2024-07-25 15:12:25.418122] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:33.609 [2024-07-25 15:12:25.549210] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:16:33.609 [2024-07-25 15:12:25.586263] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:33.609 passed 00:16:33.609 00:16:33.609 Run Summary: Type Total Ran Passed Failed Inactive 00:16:33.609 suites 1 1 n/a 0 0 00:16:33.609 tests 18 18 18 0 0 00:16:33.609 asserts 360 360 360 0 n/a 00:16:33.609 00:16:33.609 Elapsed time = 1.665 seconds 00:16:33.609 15:12:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 228650 00:16:33.609 15:12:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@950 -- # '[' -z 228650 ']' 00:16:33.609 15:12:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # kill -0 228650 00:16:33.609 15:12:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # uname 00:16:33.609 15:12:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:33.609 15:12:25 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 228650 00:16:33.609 15:12:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:33.609 15:12:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:33.609 15:12:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@968 -- # echo 'killing process with pid 228650' 00:16:33.609 killing process with pid 228650 00:16:33.609 15:12:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@969 -- # kill 228650 00:16:33.609 15:12:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@974 -- # wait 228650 00:16:33.872 15:12:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:16:33.872 15:12:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:16:33.872 00:16:33.872 real 0m6.425s 00:16:33.872 user 0m18.482s 00:16:33.872 sys 0m0.431s 00:16:33.872 15:12:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:33.872 15:12:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:33.872 ************************************ 00:16:33.872 END TEST nvmf_vfio_user_nvme_compliance 00:16:33.872 ************************************ 00:16:33.872 15:12:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:16:33.872 15:12:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:33.872 
15:12:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:33.872 15:12:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:33.872 ************************************ 00:16:33.872 START TEST nvmf_vfio_user_fuzz 00:16:33.872 ************************************ 00:16:33.872 15:12:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:16:33.872 * Looking for test storage... 00:16:33.872 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:33.872 15:12:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:33.872 15:12:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:16:33.872 15:12:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:33.872 15:12:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:33.872 15:12:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:33.872 15:12:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:33.872 15:12:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:33.872 15:12:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:33.872 15:12:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:33.872 15:12:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:33.872 15:12:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- 
# NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:33.872 15:12:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:33.872 15:12:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:33.872 15:12:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:33.872 15:12:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:33.872 15:12:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:33.872 15:12:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:33.872 15:12:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:33.872 15:12:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:33.872 15:12:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:33.872 15:12:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:33.872 15:12:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:33.872 15:12:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:33.872 15:12:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:33.872 15:12:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:33.872 15:12:26 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:16:33.872 15:12:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:33.872 15:12:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:16:33.872 15:12:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:33.872 15:12:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:33.872 15:12:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:33.872 15:12:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:33.872 15:12:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:33.872 15:12:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:33.872 15:12:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:33.872 15:12:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:33.872 15:12:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:16:33.872 15:12:26 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:16:33.872 15:12:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:16:33.872 15:12:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:16:33.872 15:12:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:16:33.872 15:12:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:16:33.872 15:12:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:16:33.872 15:12:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=230052 00:16:33.872 15:12:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 230052' 00:16:33.872 Process pid: 230052 00:16:33.872 15:12:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:33.872 15:12:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:33.872 15:12:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 230052 00:16:33.872 15:12:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@831 -- # '[' -z 230052 ']' 00:16:33.872 15:12:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:33.872 15:12:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:33.872 15:12:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz 
-- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:33.872 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:33.872 15:12:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:33.872 15:12:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:34.815 15:12:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:34.815 15:12:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # return 0 00:16:34.815 15:12:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:16:35.758 15:12:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:16:35.758 15:12:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.758 15:12:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:35.758 15:12:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.759 15:12:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:16:35.759 15:12:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:16:35.759 15:12:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.759 15:12:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:35.759 malloc0 00:16:35.759 15:12:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.759 15:12:27 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:16:35.759 15:12:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.759 15:12:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:35.759 15:12:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.759 15:12:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:16:35.759 15:12:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.759 15:12:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:36.020 15:12:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.020 15:12:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:16:36.020 15:12:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.020 15:12:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:36.020 15:12:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.020 15:12:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:16:36.020 15:12:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 
'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:17:08.139 Fuzzing completed. Shutting down the fuzz application 00:17:08.139 00:17:08.139 Dumping successful admin opcodes: 00:17:08.139 8, 9, 10, 24, 00:17:08.139 Dumping successful io opcodes: 00:17:08.139 0, 00:17:08.139 NS: 0x200003a1ef00 I/O qp, Total commands completed: 1136241, total successful commands: 4475, random_seed: 3210617984 00:17:08.139 NS: 0x200003a1ef00 admin qp, Total commands completed: 142968, total successful commands: 1162, random_seed: 625561344 00:17:08.139 15:12:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:17:08.139 15:12:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.139 15:12:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:08.139 15:12:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.139 15:12:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 230052 00:17:08.139 15:12:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@950 -- # '[' -z 230052 ']' 00:17:08.139 15:12:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # kill -0 230052 00:17:08.139 15:12:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # uname 00:17:08.139 15:12:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:08.139 15:12:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 230052 00:17:08.139 15:12:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:08.139 15:12:59 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:08.139 15:12:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@968 -- # echo 'killing process with pid 230052' 00:17:08.139 killing process with pid 230052 00:17:08.139 15:12:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@969 -- # kill 230052 00:17:08.139 15:12:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@974 -- # wait 230052 00:17:08.139 15:12:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:17:08.139 15:12:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:17:08.139 00:17:08.139 real 0m33.706s 00:17:08.139 user 0m38.122s 00:17:08.139 sys 0m25.577s 00:17:08.139 15:12:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:08.139 15:12:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:08.139 ************************************ 00:17:08.139 END TEST nvmf_vfio_user_fuzz 00:17:08.139 ************************************ 00:17:08.139 15:12:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:17:08.139 15:12:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:08.139 15:12:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:08.139 15:12:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:08.139 
************************************ 00:17:08.139 START TEST nvmf_auth_target 00:17:08.139 ************************************ 00:17:08.139 15:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:17:08.139 * Looking for test storage... 00:17:08.139 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:08.139 15:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:08.139 15:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:17:08.139 15:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:08.139 15:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:08.139 15:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:08.139 15:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:08.139 15:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:08.139 15:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:08.139 15:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:08.139 15:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:08.139 15:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:08.139 15:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:08.139 15:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:08.139 15:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:08.139 15:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:08.139 15:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:08.139 15:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:08.139 15:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:08.139 15:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:08.139 15:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:08.139 15:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:08.139 15:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:08.139 15:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:08.139 15:12:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:08.139 15:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:08.139 15:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:17:08.139 15:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:08.139 15:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:17:08.139 15:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:08.139 15:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:08.139 15:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:08.139 15:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:08.140 15:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:08.140 15:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:08.140 15:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:08.140 15:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:08.140 15:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:17:08.140 15:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:17:08.140 15:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- 
# subnqn=nqn.2024-03.io.spdk:cnode0 00:17:08.140 15:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:08.140 15:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:17:08.140 15:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:17:08.140 15:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:17:08.140 15:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@59 -- # nvmftestinit 00:17:08.140 15:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:08.140 15:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:08.140 15:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:08.140 15:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:08.140 15:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:08.140 15:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:08.140 15:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:08.140 15:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:08.140 15:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:08.140 15:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:08.140 15:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:17:08.140 15:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:17:14.770 15:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:14.770 15:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:17:14.770 15:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:14.770 15:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:14.770 15:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:14.770 15:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:14.770 15:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:14.770 15:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:17:14.770 15:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:14.770 15:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:17:14.770 15:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:17:14.770 15:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:17:14.770 15:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:17:14.770 15:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:17:14.770 15:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:17:14.770 15:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:14.770 15:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:14.770 15:13:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:14.770 15:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:14.770 15:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:14.770 15:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:14.770 15:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:14.770 15:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:14.770 15:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:14.770 15:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:14.770 15:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:14.770 15:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:14.770 15:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:14.770 15:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:14.770 15:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:14.770 15:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:14.770 15:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:14.770 15:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in 
"${pci_devs[@]}" 00:17:14.770 15:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:17:14.770 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:17:14.770 15:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:14.770 15:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:14.770 15:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:14.770 15:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:14.770 15:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:14.770 15:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:14.770 15:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:17:14.770 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:17:14.770 15:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:14.770 15:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:14.770 15:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:14.770 15:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:14.770 15:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:14.770 15:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:14.770 15:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:14.770 15:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == 
rdma ]] 00:17:14.770 15:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:14.770 15:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:14.770 15:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:14.770 15:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:14.770 15:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:14.770 15:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:14.770 15:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:14.770 15:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:17:14.770 Found net devices under 0000:4b:00.0: cvl_0_0 00:17:14.770 15:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:14.770 15:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:14.770 15:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:14.770 15:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:14.770 15:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:14.770 15:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:14.770 15:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:14.770 15:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:14.770 15:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:17:14.770 Found net devices under 0000:4b:00.1: cvl_0_1 00:17:14.770 15:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:14.770 15:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:14.770 15:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:17:14.770 15:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:14.770 15:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:14.770 15:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:14.770 15:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:14.770 15:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:14.770 15:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:14.770 15:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:14.771 15:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:14.771 15:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:14.771 15:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:14.771 15:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:14.771 15:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:17:14.771 15:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:14.771 15:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:14.771 15:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:14.771 15:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:14.771 15:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:14.771 15:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:14.771 15:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:14.771 15:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:14.771 15:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:14.771 15:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:14.771 15:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:14.771 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:14.771 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.698 ms 00:17:14.771 00:17:14.771 --- 10.0.0.2 ping statistics --- 00:17:14.771 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:14.771 rtt min/avg/max/mdev = 0.698/0.698/0.698/0.000 ms 00:17:14.771 15:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:14.771 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:14.771 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.383 ms 00:17:14.771 00:17:14.771 --- 10.0.0.1 ping statistics --- 00:17:14.771 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:14.771 rtt min/avg/max/mdev = 0.383/0.383/0.383/0.000 ms 00:17:14.771 15:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:14.771 15:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:17:14.771 15:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:14.771 15:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:14.771 15:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:14.771 15:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:14.771 15:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:14.771 15:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:14.771 15:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:14.771 15:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:17:14.771 15:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # 
timing_enter start_nvmf_tgt 00:17:14.771 15:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:14.771 15:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.771 15:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=240038 00:17:14.771 15:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 240038 00:17:14.771 15:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:17:14.771 15:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 240038 ']' 00:17:14.771 15:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:14.771 15:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:14.771 15:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
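The nvmftestinit plumbing traced above (namespace creation, moving one port into it, addressing, the iptables accept rule for port 4420, and the two-way ping check) condenses to the sketch below. The interface names cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addresses are taken from this log; the commands are echoed rather than executed so the sketch can be inspected without root privileges.

```shell
# Dry-run sketch of the target-namespace topology set up by nvmftestinit
# above. One port (cvl_0_0) is moved into a network namespace to act as the
# NVMe-oF target; its sibling (cvl_0_1) stays in the root namespace as the
# initiator side. Replace the echo in run() with "$@" to apply for real
# (requires root).
NS=cvl_0_0_ns_spdk
run() { echo "$@"; }                                     # dry-run wrapper

run ip netns add "$NS"
run ip link set cvl_0_0 netns "$NS"                      # target port into ns
run ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator address
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                                   # initiator -> target
run ip netns exec "$NS" ping -c 1 10.0.0.1               # target -> initiator
```

Because the target's listener lives inside the namespace, the nvmf_tgt process is later launched with `ip netns exec cvl_0_0_ns_spdk`, which is why the trace prepends NVMF_TARGET_NS_CMD to NVMF_APP.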
00:17:14.771 15:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:14.771 15:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.716 15:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:15.716 15:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:17:15.716 15:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:15.716 15:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:15.716 15:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.716 15:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:15.716 15:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=240379 00:17:15.716 15:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:17:15.716 15:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:17:15.716 15:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:17:15.716 15:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:15.716 15:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:15.716 15:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:15.716 15:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@726 -- # digest=null 00:17:15.716 15:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:17:15.716 15:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:15.716 15:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=19abd92289b131de5f749efeb6352395bddf5cca5f302cc0 00:17:15.716 15:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:17:15.716 15:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.UEd 00:17:15.716 15:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 19abd92289b131de5f749efeb6352395bddf5cca5f302cc0 0 00:17:15.716 15:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 19abd92289b131de5f749efeb6352395bddf5cca5f302cc0 0 00:17:15.716 15:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:15.716 15:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:15.716 15:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=19abd92289b131de5f749efeb6352395bddf5cca5f302cc0 00:17:15.716 15:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:17:15.716 15:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:15.716 15:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.UEd 00:17:15.716 15:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.UEd 00:17:15.716 15:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.UEd 00:17:15.716 15:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:17:15.716 15:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:15.716 15:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:15.716 15:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:15.716 15:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:17:15.716 15:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:17:15.716 15:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:15.716 15:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=93e7ff4b34df923775dece1b96c3156d8ac8c419ec738e5635adb02cc16e2e11 00:17:15.716 15:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:17:15.716 15:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.EGb 00:17:15.716 15:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 93e7ff4b34df923775dece1b96c3156d8ac8c419ec738e5635adb02cc16e2e11 3 00:17:15.716 15:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 93e7ff4b34df923775dece1b96c3156d8ac8c419ec738e5635adb02cc16e2e11 3 00:17:15.716 15:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:15.716 15:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:15.716 15:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=93e7ff4b34df923775dece1b96c3156d8ac8c419ec738e5635adb02cc16e2e11 00:17:15.716 15:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@704 -- # digest=3 00:17:15.716 15:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:15.716 15:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.EGb 00:17:15.716 15:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.EGb 00:17:15.716 15:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.EGb 00:17:15.716 15:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:17:15.716 15:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:15.716 15:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:15.717 15:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:15.717 15:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:17:15.717 15:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:17:15.717 15:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:15.717 15:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=d2ed94ed19f1b3377d88cd01ca9ac42c 00:17:15.717 15:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:17:15.717 15:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.8od 00:17:15.717 15:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key d2ed94ed19f1b3377d88cd01ca9ac42c 1 00:17:15.717 15:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 
d2ed94ed19f1b3377d88cd01ca9ac42c 1 00:17:15.717 15:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:15.717 15:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:15.717 15:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=d2ed94ed19f1b3377d88cd01ca9ac42c 00:17:15.717 15:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:17:15.717 15:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:15.717 15:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.8od 00:17:15.979 15:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.8od 00:17:15.979 15:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.8od 00:17:15.979 15:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:17:15.979 15:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:15.979 15:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:15.979 15:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:15.979 15:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:17:15.979 15:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:17:15.979 15:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:15.979 15:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=dba51d633918eda5981518186df8a863b5fa1d4bfdfe4826 00:17:15.979 15:13:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:17:15.979 15:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.PlJ 00:17:15.979 15:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key dba51d633918eda5981518186df8a863b5fa1d4bfdfe4826 2 00:17:15.979 15:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 dba51d633918eda5981518186df8a863b5fa1d4bfdfe4826 2 00:17:15.979 15:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:15.979 15:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:15.979 15:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=dba51d633918eda5981518186df8a863b5fa1d4bfdfe4826 00:17:15.979 15:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:17:15.979 15:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:15.979 15:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.PlJ 00:17:15.979 15:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.PlJ 00:17:15.979 15:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.PlJ 00:17:15.979 15:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:17:15.979 15:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:15.979 15:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:15.979 15:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A 
digests 00:17:15.979 15:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:17:15.979 15:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:17:15.979 15:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:15.979 15:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=75f2565c59b3acf212ea648967c2b2ce82c052d31dc309ad 00:17:15.979 15:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:17:15.979 15:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.Tqb 00:17:15.979 15:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 75f2565c59b3acf212ea648967c2b2ce82c052d31dc309ad 2 00:17:15.979 15:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 75f2565c59b3acf212ea648967c2b2ce82c052d31dc309ad 2 00:17:15.979 15:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:15.979 15:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:15.979 15:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=75f2565c59b3acf212ea648967c2b2ce82c052d31dc309ad 00:17:15.979 15:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:17:15.979 15:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:15.979 15:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.Tqb 00:17:15.979 15:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.Tqb 00:17:15.979 15:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # 
keys[2]=/tmp/spdk.key-sha384.Tqb 00:17:15.979 15:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:17:15.980 15:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:15.980 15:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:15.980 15:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:15.980 15:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:17:15.980 15:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:17:15.980 15:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:15.980 15:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=c4cc92839451033e641506bab4c7a286 00:17:15.980 15:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:17:15.980 15:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.I0F 00:17:15.980 15:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key c4cc92839451033e641506bab4c7a286 1 00:17:15.980 15:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 c4cc92839451033e641506bab4c7a286 1 00:17:15.980 15:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:15.980 15:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:15.980 15:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=c4cc92839451033e641506bab4c7a286 00:17:15.980 15:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 
00:17:15.980 15:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:15.980 15:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.I0F 00:17:15.980 15:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.I0F 00:17:15.980 15:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.I0F 00:17:15.980 15:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:17:15.980 15:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:15.980 15:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:15.980 15:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:15.980 15:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:17:15.980 15:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:17:15.980 15:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:15.980 15:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=7d2b8c3cd9de2201aab16a28b5ca452b04a4e3c366f07d807beab185ef035a27 00:17:15.980 15:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:17:15.980 15:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.wld 00:17:15.980 15:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 7d2b8c3cd9de2201aab16a28b5ca452b04a4e3c366f07d807beab185ef035a27 3 00:17:15.980 15:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # 
format_key DHHC-1 7d2b8c3cd9de2201aab16a28b5ca452b04a4e3c366f07d807beab185ef035a27 3 00:17:15.980 15:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:15.980 15:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:15.980 15:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=7d2b8c3cd9de2201aab16a28b5ca452b04a4e3c366f07d807beab185ef035a27 00:17:15.980 15:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:17:15.980 15:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:15.980 15:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.wld 00:17:15.980 15:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.wld 00:17:15.980 15:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.wld 00:17:15.980 15:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:17:15.980 15:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 240038 00:17:15.980 15:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 240038 ']' 00:17:15.980 15:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:15.980 15:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:15.980 15:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:15.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:15.980 15:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:15.980 15:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.242 15:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:16.242 15:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:17:16.242 15:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 240379 /var/tmp/host.sock 00:17:16.242 15:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 240379 ']' 00:17:16.242 15:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:17:16.242 15:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:16.242 15:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:17:16.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
00:17:16.242 15:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:16.242 15:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.503 15:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:16.503 15:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:17:16.503 15:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:17:16.503 15:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.503 15:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.503 15:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.503 15:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:17:16.503 15:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.UEd 00:17:16.503 15:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.503 15:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.503 15:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.503 15:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.UEd 00:17:16.503 15:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.UEd 00:17:16.503 15:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n 
/tmp/spdk.key-sha512.EGb ]] 00:17:16.503 15:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.EGb 00:17:16.503 15:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.503 15:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.503 15:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.503 15:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.EGb 00:17:16.503 15:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.EGb 00:17:16.764 15:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:17:16.764 15:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.8od 00:17:16.764 15:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.764 15:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.764 15:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.764 15:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.8od 00:17:16.764 15:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.8od 00:17:17.026 15:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n 
/tmp/spdk.key-sha384.PlJ ]] 00:17:17.026 15:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.PlJ 00:17:17.026 15:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.026 15:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.026 15:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.026 15:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.PlJ 00:17:17.026 15:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.PlJ 00:17:17.026 15:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:17:17.026 15:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.Tqb 00:17:17.026 15:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.026 15:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.026 15:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.026 15:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.Tqb 00:17:17.026 15:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.Tqb 00:17:17.288 15:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n 
/tmp/spdk.key-sha256.I0F ]] 00:17:17.288 15:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.I0F 00:17:17.288 15:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.288 15:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.288 15:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.288 15:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.I0F 00:17:17.288 15:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.I0F 00:17:17.549 15:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:17:17.549 15:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.wld 00:17:17.549 15:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.549 15:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.549 15:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.549 15:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.wld 00:17:17.549 15:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.wld 00:17:17.549 15:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n 
'' ]] 00:17:17.549 15:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:17:17.549 15:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:17.549 15:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:17.549 15:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:17.549 15:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:17.811 15:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:17:17.811 15:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:17.811 15:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:17.811 15:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:17.811 15:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:17.811 15:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:17.811 15:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:17.811 15:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.811 15:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:17:17.811 15:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.811 15:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:17.811 15:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:18.072 00:17:18.072 15:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:18.072 15:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:18.072 15:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:18.072 15:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.333 15:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:18.333 15:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.333 15:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.333 15:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.333 15:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # 
qpairs='[ 00:17:18.333 { 00:17:18.333 "cntlid": 1, 00:17:18.333 "qid": 0, 00:17:18.333 "state": "enabled", 00:17:18.333 "thread": "nvmf_tgt_poll_group_000", 00:17:18.334 "listen_address": { 00:17:18.334 "trtype": "TCP", 00:17:18.334 "adrfam": "IPv4", 00:17:18.334 "traddr": "10.0.0.2", 00:17:18.334 "trsvcid": "4420" 00:17:18.334 }, 00:17:18.334 "peer_address": { 00:17:18.334 "trtype": "TCP", 00:17:18.334 "adrfam": "IPv4", 00:17:18.334 "traddr": "10.0.0.1", 00:17:18.334 "trsvcid": "41388" 00:17:18.334 }, 00:17:18.334 "auth": { 00:17:18.334 "state": "completed", 00:17:18.334 "digest": "sha256", 00:17:18.334 "dhgroup": "null" 00:17:18.334 } 00:17:18.334 } 00:17:18.334 ]' 00:17:18.334 15:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:18.334 15:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:18.334 15:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:18.334 15:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:18.334 15:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:18.334 15:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:18.334 15:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:18.334 15:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:18.595 15:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 
00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:MTlhYmQ5MjI4OWIxMzFkZTVmNzQ5ZWZlYjYzNTIzOTViZGRmNWNjYTVmMzAyY2MwDN7QAA==: --dhchap-ctrl-secret DHHC-1:03:OTNlN2ZmNGIzNGRmOTIzNzc1ZGVjZTFiOTZjMzE1NmQ4YWM4YzQxOWVjNzM4ZTU2MzVhZGIwMmNjMTZlMmUxMSm6T2Y=: 00:17:19.168 15:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:19.168 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:19.168 15:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:19.168 15:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.168 15:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.168 15:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.168 15:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:19.168 15:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:19.168 15:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:19.430 15:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:17:19.430 15:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:19.430 15:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:19.430 15:13:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:19.430 15:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:19.430 15:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:19.430 15:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:19.430 15:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.430 15:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.430 15:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.430 15:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:19.430 15:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:19.691 00:17:19.691 15:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:19.691 15:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:19.692 15:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:19.953 15:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:19.953 15:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:19.953 15:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.953 15:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.953 15:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.953 15:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:19.953 { 00:17:19.953 "cntlid": 3, 00:17:19.953 "qid": 0, 00:17:19.953 "state": "enabled", 00:17:19.953 "thread": "nvmf_tgt_poll_group_000", 00:17:19.953 "listen_address": { 00:17:19.953 "trtype": "TCP", 00:17:19.953 "adrfam": "IPv4", 00:17:19.953 "traddr": "10.0.0.2", 00:17:19.953 "trsvcid": "4420" 00:17:19.953 }, 00:17:19.953 "peer_address": { 00:17:19.953 "trtype": "TCP", 00:17:19.953 "adrfam": "IPv4", 00:17:19.953 "traddr": "10.0.0.1", 00:17:19.953 "trsvcid": "41410" 00:17:19.953 }, 00:17:19.953 "auth": { 00:17:19.953 "state": "completed", 00:17:19.953 "digest": "sha256", 00:17:19.953 "dhgroup": "null" 00:17:19.953 } 00:17:19.953 } 00:17:19.953 ]' 00:17:19.953 15:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:19.953 15:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:19.953 15:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:19.953 15:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:19.953 15:13:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:19.953 15:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:19.953 15:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:19.953 15:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:20.215 15:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:ZDJlZDk0ZWQxOWYxYjMzNzdkODhjZDAxY2E5YWM0MmNmF230: --dhchap-ctrl-secret DHHC-1:02:ZGJhNTFkNjMzOTE4ZWRhNTk4MTUxODE4NmRmOGE4NjNiNWZhMWQ0YmZkZmU0ODI24u6JVg==: 00:17:21.159 15:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:21.159 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:21.159 15:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:21.159 15:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.159 15:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.159 15:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.159 15:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:21.159 15:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:21.159 15:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:21.159 15:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:17:21.159 15:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:21.159 15:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:21.159 15:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:21.159 15:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:21.159 15:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:21.159 15:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:21.159 15:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.159 15:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.159 15:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.159 15:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:21.159 
15:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:21.419 00:17:21.419 15:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:21.419 15:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:21.419 15:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:21.419 15:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:21.419 15:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:21.419 15:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.419 15:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.419 15:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.419 15:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:21.419 { 00:17:21.419 "cntlid": 5, 00:17:21.419 "qid": 0, 00:17:21.419 "state": "enabled", 00:17:21.419 "thread": "nvmf_tgt_poll_group_000", 00:17:21.419 "listen_address": { 00:17:21.419 "trtype": "TCP", 00:17:21.419 "adrfam": "IPv4", 00:17:21.419 "traddr": "10.0.0.2", 00:17:21.419 "trsvcid": "4420" 00:17:21.419 }, 00:17:21.419 "peer_address": { 00:17:21.419 "trtype": "TCP", 00:17:21.419 "adrfam": "IPv4", 00:17:21.419 "traddr": 
"10.0.0.1", 00:17:21.419 "trsvcid": "41432" 00:17:21.419 }, 00:17:21.419 "auth": { 00:17:21.419 "state": "completed", 00:17:21.419 "digest": "sha256", 00:17:21.419 "dhgroup": "null" 00:17:21.419 } 00:17:21.419 } 00:17:21.419 ]' 00:17:21.419 15:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:21.680 15:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:21.680 15:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:21.680 15:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:21.680 15:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:21.680 15:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:21.680 15:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:21.680 15:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:21.941 15:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:NzVmMjU2NWM1OWIzYWNmMjEyZWE2NDg5NjdjMmIyY2U4MmMwNTJkMzFkYzMwOWFkdr2ubA==: --dhchap-ctrl-secret DHHC-1:01:YzRjYzkyODM5NDUxMDMzZTY0MTUwNmJhYjRjN2EyODYd1nJh: 00:17:22.514 15:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:22.514 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:22.514 15:13:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:22.514 15:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.514 15:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.514 15:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.514 15:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:22.514 15:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:22.514 15:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:22.775 15:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:17:22.775 15:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:22.775 15:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:22.775 15:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:22.775 15:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:22.775 15:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:22.775 15:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:22.775 15:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.775 15:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.775 15:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.775 15:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:22.775 15:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:23.037 00:17:23.037 15:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:23.037 15:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:23.037 15:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:23.037 15:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:23.037 15:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:23.037 15:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.037 15:13:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.298 15:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.298 15:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:23.298 { 00:17:23.298 "cntlid": 7, 00:17:23.298 "qid": 0, 00:17:23.298 "state": "enabled", 00:17:23.298 "thread": "nvmf_tgt_poll_group_000", 00:17:23.298 "listen_address": { 00:17:23.298 "trtype": "TCP", 00:17:23.298 "adrfam": "IPv4", 00:17:23.298 "traddr": "10.0.0.2", 00:17:23.298 "trsvcid": "4420" 00:17:23.298 }, 00:17:23.298 "peer_address": { 00:17:23.298 "trtype": "TCP", 00:17:23.298 "adrfam": "IPv4", 00:17:23.298 "traddr": "10.0.0.1", 00:17:23.298 "trsvcid": "50674" 00:17:23.298 }, 00:17:23.298 "auth": { 00:17:23.298 "state": "completed", 00:17:23.298 "digest": "sha256", 00:17:23.298 "dhgroup": "null" 00:17:23.298 } 00:17:23.298 } 00:17:23.298 ]' 00:17:23.298 15:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:23.298 15:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:23.298 15:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:23.298 15:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:23.298 15:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:23.298 15:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:23.298 15:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:23.298 15:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:23.560 15:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:N2QyYjhjM2NkOWRlMjIwMWFhYjE2YTI4YjVjYTQ1MmIwNGE0ZTNjMzY2ZjA3ZDgwN2JlYWIxODVlZjAzNWEyNwauYnc=: 00:17:24.131 15:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:24.131 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:24.132 15:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:24.132 15:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.132 15:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.132 15:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.132 15:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:24.132 15:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:24.132 15:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:24.132 15:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:24.392 15:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha256 ffdhe2048 0 00:17:24.392 15:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:24.392 15:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:24.392 15:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:24.392 15:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:24.392 15:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:24.392 15:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:24.392 15:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.392 15:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.392 15:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.392 15:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:24.392 15:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:24.652 00:17:24.652 15:13:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:24.652 15:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:24.652 15:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:24.912 15:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:24.912 15:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:24.912 15:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.912 15:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.912 15:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.912 15:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:24.912 { 00:17:24.912 "cntlid": 9, 00:17:24.912 "qid": 0, 00:17:24.912 "state": "enabled", 00:17:24.912 "thread": "nvmf_tgt_poll_group_000", 00:17:24.912 "listen_address": { 00:17:24.912 "trtype": "TCP", 00:17:24.912 "adrfam": "IPv4", 00:17:24.912 "traddr": "10.0.0.2", 00:17:24.912 "trsvcid": "4420" 00:17:24.912 }, 00:17:24.912 "peer_address": { 00:17:24.912 "trtype": "TCP", 00:17:24.912 "adrfam": "IPv4", 00:17:24.912 "traddr": "10.0.0.1", 00:17:24.912 "trsvcid": "50704" 00:17:24.912 }, 00:17:24.912 "auth": { 00:17:24.912 "state": "completed", 00:17:24.912 "digest": "sha256", 00:17:24.912 "dhgroup": "ffdhe2048" 00:17:24.912 } 00:17:24.912 } 00:17:24.912 ]' 00:17:24.912 15:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:24.912 15:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:24.912 15:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:24.912 15:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:24.912 15:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:24.912 15:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:24.912 15:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:24.912 15:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:25.172 15:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:MTlhYmQ5MjI4OWIxMzFkZTVmNzQ5ZWZlYjYzNTIzOTViZGRmNWNjYTVmMzAyY2MwDN7QAA==: --dhchap-ctrl-secret DHHC-1:03:OTNlN2ZmNGIzNGRmOTIzNzc1ZGVjZTFiOTZjMzE1NmQ4YWM4YzQxOWVjNzM4ZTU2MzVhZGIwMmNjMTZlMmUxMSm6T2Y=: 00:17:25.744 15:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:25.744 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:25.744 15:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:25.744 15:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.744 15:13:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.744 15:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.744 15:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:25.744 15:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:25.744 15:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:26.005 15:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:17:26.005 15:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:26.005 15:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:26.005 15:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:26.005 15:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:26.005 15:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:26.005 15:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:26.005 15:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.005 15:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.005 15:13:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.005 15:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:26.005 15:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:26.266 00:17:26.266 15:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:26.266 15:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:26.266 15:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:26.527 15:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:26.527 15:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:26.527 15:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.527 15:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.527 15:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.527 15:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:26.527 { 
00:17:26.527 "cntlid": 11, 00:17:26.527 "qid": 0, 00:17:26.527 "state": "enabled", 00:17:26.527 "thread": "nvmf_tgt_poll_group_000", 00:17:26.527 "listen_address": { 00:17:26.527 "trtype": "TCP", 00:17:26.527 "adrfam": "IPv4", 00:17:26.527 "traddr": "10.0.0.2", 00:17:26.527 "trsvcid": "4420" 00:17:26.527 }, 00:17:26.527 "peer_address": { 00:17:26.527 "trtype": "TCP", 00:17:26.527 "adrfam": "IPv4", 00:17:26.527 "traddr": "10.0.0.1", 00:17:26.527 "trsvcid": "50746" 00:17:26.527 }, 00:17:26.527 "auth": { 00:17:26.527 "state": "completed", 00:17:26.527 "digest": "sha256", 00:17:26.527 "dhgroup": "ffdhe2048" 00:17:26.527 } 00:17:26.527 } 00:17:26.528 ]' 00:17:26.528 15:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:26.528 15:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:26.528 15:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:26.528 15:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:26.528 15:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:26.528 15:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:26.528 15:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:26.528 15:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:26.789 15:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 
00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:ZDJlZDk0ZWQxOWYxYjMzNzdkODhjZDAxY2E5YWM0MmNmF230: --dhchap-ctrl-secret DHHC-1:02:ZGJhNTFkNjMzOTE4ZWRhNTk4MTUxODE4NmRmOGE4NjNiNWZhMWQ0YmZkZmU0ODI24u6JVg==: 00:17:27.731 15:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:27.731 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:27.731 15:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:27.731 15:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.731 15:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.731 15:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.731 15:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:27.731 15:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:27.731 15:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:27.731 15:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:17:27.731 15:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:27.731 15:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:27.731 15:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 
-- # dhgroup=ffdhe2048 00:17:27.732 15:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:27.732 15:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:27.732 15:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:27.732 15:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.732 15:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.732 15:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.732 15:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:27.732 15:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:27.993 00:17:27.993 15:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:27.993 15:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:27.993 15:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:17:27.993 15:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:27.993 15:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:27.993 15:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.993 15:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.993 15:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.993 15:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:27.993 { 00:17:27.993 "cntlid": 13, 00:17:27.993 "qid": 0, 00:17:27.993 "state": "enabled", 00:17:27.993 "thread": "nvmf_tgt_poll_group_000", 00:17:27.993 "listen_address": { 00:17:27.993 "trtype": "TCP", 00:17:27.993 "adrfam": "IPv4", 00:17:27.993 "traddr": "10.0.0.2", 00:17:27.993 "trsvcid": "4420" 00:17:27.993 }, 00:17:27.993 "peer_address": { 00:17:27.993 "trtype": "TCP", 00:17:27.993 "adrfam": "IPv4", 00:17:27.993 "traddr": "10.0.0.1", 00:17:27.993 "trsvcid": "50776" 00:17:27.993 }, 00:17:27.993 "auth": { 00:17:27.993 "state": "completed", 00:17:27.993 "digest": "sha256", 00:17:27.993 "dhgroup": "ffdhe2048" 00:17:27.993 } 00:17:27.993 } 00:17:27.993 ]' 00:17:27.993 15:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:28.254 15:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:28.254 15:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:28.254 15:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:28.254 15:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:28.254 15:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:28.254 15:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:28.254 15:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:28.515 15:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:NzVmMjU2NWM1OWIzYWNmMjEyZWE2NDg5NjdjMmIyY2U4MmMwNTJkMzFkYzMwOWFkdr2ubA==: --dhchap-ctrl-secret DHHC-1:01:YzRjYzkyODM5NDUxMDMzZTY0MTUwNmJhYjRjN2EyODYd1nJh: 00:17:29.086 15:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:29.086 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:29.086 15:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:29.086 15:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.086 15:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.086 15:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.086 15:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:29.086 15:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:29.086 15:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:29.347 15:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:17:29.347 15:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:29.348 15:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:29.348 15:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:29.348 15:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:29.348 15:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:29.348 15:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:29.348 15:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.348 15:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.348 15:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.348 15:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:29.348 15:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:29.609 00:17:29.609 15:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:29.609 15:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:29.609 15:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:29.609 15:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:29.870 15:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:29.870 15:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.870 15:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.870 15:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.870 15:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:29.870 { 00:17:29.870 "cntlid": 15, 00:17:29.870 "qid": 0, 00:17:29.870 "state": "enabled", 00:17:29.870 "thread": "nvmf_tgt_poll_group_000", 00:17:29.870 "listen_address": { 00:17:29.870 "trtype": "TCP", 00:17:29.870 "adrfam": "IPv4", 00:17:29.870 "traddr": "10.0.0.2", 00:17:29.870 "trsvcid": "4420" 00:17:29.870 }, 00:17:29.870 "peer_address": { 00:17:29.870 "trtype": "TCP", 00:17:29.870 "adrfam": "IPv4", 00:17:29.870 "traddr": "10.0.0.1", 00:17:29.870 "trsvcid": "50810" 00:17:29.870 }, 00:17:29.870 "auth": { 
00:17:29.870 "state": "completed", 00:17:29.870 "digest": "sha256", 00:17:29.870 "dhgroup": "ffdhe2048" 00:17:29.870 } 00:17:29.870 } 00:17:29.870 ]' 00:17:29.870 15:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:29.870 15:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:29.870 15:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:29.870 15:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:29.870 15:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:29.870 15:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:29.870 15:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:29.870 15:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:30.130 15:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:N2QyYjhjM2NkOWRlMjIwMWFhYjE2YTI4YjVjYTQ1MmIwNGE0ZTNjMzY2ZjA3ZDgwN2JlYWIxODVlZjAzNWEyNwauYnc=: 00:17:30.703 15:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:30.703 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:30.703 15:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:30.703 15:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.703 15:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.703 15:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.703 15:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:30.703 15:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:30.703 15:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:30.703 15:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:30.964 15:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:17:30.964 15:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:30.964 15:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:30.964 15:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:30.964 15:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:30.964 15:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:30.964 15:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:30.964 15:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.964 15:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.964 15:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.964 15:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:30.964 15:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:31.242 00:17:31.242 15:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:31.242 15:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:31.242 15:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:31.526 15:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:31.526 15:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:31.526 15:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:17:31.526 15:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.526 15:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.526 15:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:31.526 { 00:17:31.526 "cntlid": 17, 00:17:31.526 "qid": 0, 00:17:31.526 "state": "enabled", 00:17:31.526 "thread": "nvmf_tgt_poll_group_000", 00:17:31.526 "listen_address": { 00:17:31.526 "trtype": "TCP", 00:17:31.526 "adrfam": "IPv4", 00:17:31.526 "traddr": "10.0.0.2", 00:17:31.526 "trsvcid": "4420" 00:17:31.526 }, 00:17:31.526 "peer_address": { 00:17:31.526 "trtype": "TCP", 00:17:31.526 "adrfam": "IPv4", 00:17:31.526 "traddr": "10.0.0.1", 00:17:31.526 "trsvcid": "50834" 00:17:31.526 }, 00:17:31.527 "auth": { 00:17:31.527 "state": "completed", 00:17:31.527 "digest": "sha256", 00:17:31.527 "dhgroup": "ffdhe3072" 00:17:31.527 } 00:17:31.527 } 00:17:31.527 ]' 00:17:31.527 15:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:31.527 15:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:31.527 15:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:31.527 15:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:31.527 15:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:31.527 15:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:31.527 15:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:31.527 15:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:31.787 15:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:MTlhYmQ5MjI4OWIxMzFkZTVmNzQ5ZWZlYjYzNTIzOTViZGRmNWNjYTVmMzAyY2MwDN7QAA==: --dhchap-ctrl-secret DHHC-1:03:OTNlN2ZmNGIzNGRmOTIzNzc1ZGVjZTFiOTZjMzE1NmQ4YWM4YzQxOWVjNzM4ZTU2MzVhZGIwMmNjMTZlMmUxMSm6T2Y=: 00:17:32.360 15:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:32.360 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:32.360 15:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:32.360 15:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.360 15:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.360 15:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.360 15:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:32.360 15:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:32.360 15:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:32.621 15:13:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:17:32.621 15:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:32.621 15:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:32.621 15:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:32.621 15:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:32.621 15:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:32.621 15:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:32.621 15:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.622 15:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.622 15:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.622 15:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:32.622 15:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:17:32.882 00:17:32.882 15:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:32.882 15:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:32.882 15:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:33.144 15:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:33.144 15:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:33.144 15:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.144 15:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.144 15:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.144 15:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:33.144 { 00:17:33.144 "cntlid": 19, 00:17:33.144 "qid": 0, 00:17:33.144 "state": "enabled", 00:17:33.144 "thread": "nvmf_tgt_poll_group_000", 00:17:33.144 "listen_address": { 00:17:33.144 "trtype": "TCP", 00:17:33.144 "adrfam": "IPv4", 00:17:33.144 "traddr": "10.0.0.2", 00:17:33.144 "trsvcid": "4420" 00:17:33.144 }, 00:17:33.144 "peer_address": { 00:17:33.144 "trtype": "TCP", 00:17:33.144 "adrfam": "IPv4", 00:17:33.144 "traddr": "10.0.0.1", 00:17:33.144 "trsvcid": "33094" 00:17:33.144 }, 00:17:33.144 "auth": { 00:17:33.144 "state": "completed", 00:17:33.144 "digest": "sha256", 00:17:33.144 "dhgroup": "ffdhe3072" 00:17:33.144 } 00:17:33.144 } 00:17:33.144 ]' 00:17:33.144 15:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:33.144 
15:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:33.144 15:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:33.144 15:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:33.144 15:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:33.144 15:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:33.144 15:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:33.144 15:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:33.405 15:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:ZDJlZDk0ZWQxOWYxYjMzNzdkODhjZDAxY2E5YWM0MmNmF230: --dhchap-ctrl-secret DHHC-1:02:ZGJhNTFkNjMzOTE4ZWRhNTk4MTUxODE4NmRmOGE4NjNiNWZhMWQ0YmZkZmU0ODI24u6JVg==: 00:17:33.977 15:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:34.239 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:34.239 15:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:34.239 15:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.239 15:13:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.239 15:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.239 15:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:34.239 15:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:34.239 15:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:34.239 15:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:17:34.239 15:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:34.239 15:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:34.239 15:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:34.239 15:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:34.239 15:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:34.239 15:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:34.239 15:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.239 15:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.239 15:13:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.239 15:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:34.239 15:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:34.500 00:17:34.500 15:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:34.500 15:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:34.500 15:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:34.761 15:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:34.761 15:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:34.761 15:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.761 15:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.761 15:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.761 15:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:34.761 { 
00:17:34.761 "cntlid": 21, 00:17:34.761 "qid": 0, 00:17:34.761 "state": "enabled", 00:17:34.761 "thread": "nvmf_tgt_poll_group_000", 00:17:34.761 "listen_address": { 00:17:34.761 "trtype": "TCP", 00:17:34.761 "adrfam": "IPv4", 00:17:34.761 "traddr": "10.0.0.2", 00:17:34.761 "trsvcid": "4420" 00:17:34.761 }, 00:17:34.761 "peer_address": { 00:17:34.761 "trtype": "TCP", 00:17:34.761 "adrfam": "IPv4", 00:17:34.761 "traddr": "10.0.0.1", 00:17:34.761 "trsvcid": "33126" 00:17:34.761 }, 00:17:34.761 "auth": { 00:17:34.761 "state": "completed", 00:17:34.761 "digest": "sha256", 00:17:34.761 "dhgroup": "ffdhe3072" 00:17:34.761 } 00:17:34.761 } 00:17:34.761 ]' 00:17:34.761 15:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:34.761 15:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:34.761 15:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:34.761 15:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:34.761 15:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:34.761 15:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:34.761 15:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:34.761 15:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:35.022 15:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 
00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:NzVmMjU2NWM1OWIzYWNmMjEyZWE2NDg5NjdjMmIyY2U4MmMwNTJkMzFkYzMwOWFkdr2ubA==: --dhchap-ctrl-secret DHHC-1:01:YzRjYzkyODM5NDUxMDMzZTY0MTUwNmJhYjRjN2EyODYd1nJh: 00:17:35.967 15:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:35.967 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:35.967 15:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:35.967 15:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.967 15:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.967 15:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.968 15:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:35.968 15:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:35.968 15:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:35.968 15:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:17:35.968 15:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:35.968 15:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:35.968 15:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 
-- # dhgroup=ffdhe3072 00:17:35.968 15:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:35.968 15:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:35.968 15:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:35.968 15:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.968 15:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.968 15:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.968 15:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:35.968 15:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:36.229 00:17:36.229 15:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:36.229 15:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:36.229 15:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:36.490 15:13:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:36.490 15:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:36.490 15:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.490 15:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.490 15:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.490 15:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:36.490 { 00:17:36.490 "cntlid": 23, 00:17:36.490 "qid": 0, 00:17:36.490 "state": "enabled", 00:17:36.490 "thread": "nvmf_tgt_poll_group_000", 00:17:36.490 "listen_address": { 00:17:36.490 "trtype": "TCP", 00:17:36.490 "adrfam": "IPv4", 00:17:36.490 "traddr": "10.0.0.2", 00:17:36.490 "trsvcid": "4420" 00:17:36.490 }, 00:17:36.490 "peer_address": { 00:17:36.490 "trtype": "TCP", 00:17:36.490 "adrfam": "IPv4", 00:17:36.490 "traddr": "10.0.0.1", 00:17:36.490 "trsvcid": "33146" 00:17:36.490 }, 00:17:36.490 "auth": { 00:17:36.490 "state": "completed", 00:17:36.490 "digest": "sha256", 00:17:36.490 "dhgroup": "ffdhe3072" 00:17:36.490 } 00:17:36.490 } 00:17:36.490 ]' 00:17:36.490 15:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:36.490 15:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:36.490 15:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:36.490 15:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:36.490 15:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:36.490 15:13:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:36.490 15:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:36.490 15:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:36.751 15:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:N2QyYjhjM2NkOWRlMjIwMWFhYjE2YTI4YjVjYTQ1MmIwNGE0ZTNjMzY2ZjA3ZDgwN2JlYWIxODVlZjAzNWEyNwauYnc=: 00:17:37.322 15:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:37.322 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:37.322 15:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:37.322 15:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.322 15:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.322 15:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.582 15:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:37.582 15:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:37.582 15:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:37.582 15:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:37.582 15:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:17:37.582 15:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:37.582 15:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:37.582 15:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:37.582 15:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:37.582 15:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:37.582 15:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:37.582 15:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.582 15:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.582 15:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.582 15:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:37.583 15:13:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:37.842 00:17:37.842 15:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:37.842 15:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:37.842 15:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:38.104 15:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:38.104 15:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:38.104 15:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.104 15:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.104 15:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.104 15:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:38.104 { 00:17:38.104 "cntlid": 25, 00:17:38.104 "qid": 0, 00:17:38.104 "state": "enabled", 00:17:38.104 "thread": "nvmf_tgt_poll_group_000", 00:17:38.104 "listen_address": { 00:17:38.104 "trtype": "TCP", 00:17:38.104 "adrfam": "IPv4", 00:17:38.104 "traddr": "10.0.0.2", 00:17:38.104 "trsvcid": "4420" 00:17:38.104 }, 00:17:38.104 "peer_address": { 00:17:38.104 "trtype": "TCP", 00:17:38.104 "adrfam": "IPv4", 00:17:38.104 "traddr": "10.0.0.1", 
00:17:38.104 "trsvcid": "33168" 00:17:38.104 }, 00:17:38.104 "auth": { 00:17:38.104 "state": "completed", 00:17:38.104 "digest": "sha256", 00:17:38.104 "dhgroup": "ffdhe4096" 00:17:38.104 } 00:17:38.104 } 00:17:38.104 ]' 00:17:38.104 15:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:38.104 15:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:38.104 15:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:38.104 15:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:38.104 15:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:38.104 15:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:38.104 15:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:38.104 15:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:38.365 15:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:MTlhYmQ5MjI4OWIxMzFkZTVmNzQ5ZWZlYjYzNTIzOTViZGRmNWNjYTVmMzAyY2MwDN7QAA==: --dhchap-ctrl-secret DHHC-1:03:OTNlN2ZmNGIzNGRmOTIzNzc1ZGVjZTFiOTZjMzE1NmQ4YWM4YzQxOWVjNzM4ZTU2MzVhZGIwMmNjMTZlMmUxMSm6T2Y=: 00:17:39.307 15:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:39.307 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:17:39.307 15:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:39.307 15:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.307 15:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.307 15:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.307 15:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:39.307 15:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:39.307 15:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:39.307 15:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:17:39.307 15:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:39.307 15:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:39.307 15:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:39.307 15:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:39.307 15:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:39.307 15:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:39.307 15:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.307 15:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.307 15:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.307 15:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:39.307 15:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:39.567 00:17:39.567 15:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:39.567 15:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:39.567 15:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:39.828 15:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:39.828 15:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:39.828 15:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:17:39.828 15:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.828 15:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.828 15:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:39.828 { 00:17:39.828 "cntlid": 27, 00:17:39.828 "qid": 0, 00:17:39.828 "state": "enabled", 00:17:39.828 "thread": "nvmf_tgt_poll_group_000", 00:17:39.828 "listen_address": { 00:17:39.828 "trtype": "TCP", 00:17:39.828 "adrfam": "IPv4", 00:17:39.828 "traddr": "10.0.0.2", 00:17:39.828 "trsvcid": "4420" 00:17:39.828 }, 00:17:39.828 "peer_address": { 00:17:39.828 "trtype": "TCP", 00:17:39.828 "adrfam": "IPv4", 00:17:39.828 "traddr": "10.0.0.1", 00:17:39.828 "trsvcid": "33204" 00:17:39.828 }, 00:17:39.828 "auth": { 00:17:39.828 "state": "completed", 00:17:39.828 "digest": "sha256", 00:17:39.828 "dhgroup": "ffdhe4096" 00:17:39.828 } 00:17:39.828 } 00:17:39.828 ]' 00:17:39.828 15:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:39.828 15:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:39.828 15:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:39.828 15:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:39.828 15:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:39.828 15:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:39.828 15:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:39.828 15:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:40.089 15:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:ZDJlZDk0ZWQxOWYxYjMzNzdkODhjZDAxY2E5YWM0MmNmF230: --dhchap-ctrl-secret DHHC-1:02:ZGJhNTFkNjMzOTE4ZWRhNTk4MTUxODE4NmRmOGE4NjNiNWZhMWQ0YmZkZmU0ODI24u6JVg==: 00:17:41.033 15:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:41.033 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:41.033 15:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:41.033 15:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.033 15:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.033 15:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.033 15:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:41.033 15:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:41.033 15:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:41.033 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha256 ffdhe4096 2 00:17:41.033 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:41.033 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:41.033 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:41.033 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:41.033 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:41.033 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:41.033 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.033 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.033 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.033 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:41.033 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:41.293 00:17:41.293 15:13:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:41.293 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:41.293 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:41.293 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:41.293 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:41.293 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.293 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.554 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.554 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:41.554 { 00:17:41.554 "cntlid": 29, 00:17:41.554 "qid": 0, 00:17:41.554 "state": "enabled", 00:17:41.554 "thread": "nvmf_tgt_poll_group_000", 00:17:41.554 "listen_address": { 00:17:41.554 "trtype": "TCP", 00:17:41.554 "adrfam": "IPv4", 00:17:41.554 "traddr": "10.0.0.2", 00:17:41.554 "trsvcid": "4420" 00:17:41.554 }, 00:17:41.554 "peer_address": { 00:17:41.554 "trtype": "TCP", 00:17:41.554 "adrfam": "IPv4", 00:17:41.554 "traddr": "10.0.0.1", 00:17:41.554 "trsvcid": "33228" 00:17:41.554 }, 00:17:41.554 "auth": { 00:17:41.554 "state": "completed", 00:17:41.554 "digest": "sha256", 00:17:41.554 "dhgroup": "ffdhe4096" 00:17:41.554 } 00:17:41.554 } 00:17:41.554 ]' 00:17:41.554 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:41.554 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:41.554 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:41.554 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:41.554 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:41.554 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:41.554 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:41.554 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:41.815 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:NzVmMjU2NWM1OWIzYWNmMjEyZWE2NDg5NjdjMmIyY2U4MmMwNTJkMzFkYzMwOWFkdr2ubA==: --dhchap-ctrl-secret DHHC-1:01:YzRjYzkyODM5NDUxMDMzZTY0MTUwNmJhYjRjN2EyODYd1nJh: 00:17:42.387 15:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:42.387 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:42.387 15:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:42.387 15:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.387 15:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:42.387 15:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.387 15:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:42.387 15:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:42.387 15:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:42.647 15:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:17:42.647 15:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:42.647 15:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:42.647 15:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:42.647 15:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:42.647 15:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:42.647 15:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:42.647 15:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.647 15:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.647 15:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:17:42.647 15:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:42.647 15:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:42.907 00:17:42.907 15:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:42.907 15:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:42.907 15:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:43.168 15:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:43.168 15:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:43.168 15:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.168 15:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.168 15:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.168 15:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:43.168 { 00:17:43.168 "cntlid": 31, 00:17:43.168 "qid": 0, 00:17:43.168 "state": "enabled", 00:17:43.168 "thread": "nvmf_tgt_poll_group_000", 
00:17:43.168 "listen_address": { 00:17:43.168 "trtype": "TCP", 00:17:43.168 "adrfam": "IPv4", 00:17:43.168 "traddr": "10.0.0.2", 00:17:43.168 "trsvcid": "4420" 00:17:43.168 }, 00:17:43.168 "peer_address": { 00:17:43.168 "trtype": "TCP", 00:17:43.168 "adrfam": "IPv4", 00:17:43.168 "traddr": "10.0.0.1", 00:17:43.168 "trsvcid": "57418" 00:17:43.168 }, 00:17:43.168 "auth": { 00:17:43.168 "state": "completed", 00:17:43.168 "digest": "sha256", 00:17:43.168 "dhgroup": "ffdhe4096" 00:17:43.168 } 00:17:43.168 } 00:17:43.168 ]' 00:17:43.168 15:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:43.168 15:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:43.168 15:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:43.168 15:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:43.168 15:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:43.168 15:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:43.168 15:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:43.168 15:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:43.429 15:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:N2QyYjhjM2NkOWRlMjIwMWFhYjE2YTI4YjVjYTQ1MmIwNGE0ZTNjMzY2ZjA3ZDgwN2JlYWIxODVlZjAzNWEyNwauYnc=: 
00:17:44.371 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:44.371 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:44.371 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:44.371 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.371 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.371 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.371 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:44.371 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:44.371 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:44.371 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:44.371 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:17:44.371 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:44.371 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:44.371 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:44.371 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key0 00:17:44.371 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:44.372 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:44.372 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.372 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.372 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.372 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:44.372 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:44.633 00:17:44.633 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:44.633 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:44.633 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:44.895 15:13:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:44.895 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:44.895 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.895 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.895 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.895 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:44.895 { 00:17:44.895 "cntlid": 33, 00:17:44.895 "qid": 0, 00:17:44.895 "state": "enabled", 00:17:44.895 "thread": "nvmf_tgt_poll_group_000", 00:17:44.895 "listen_address": { 00:17:44.895 "trtype": "TCP", 00:17:44.895 "adrfam": "IPv4", 00:17:44.895 "traddr": "10.0.0.2", 00:17:44.895 "trsvcid": "4420" 00:17:44.895 }, 00:17:44.895 "peer_address": { 00:17:44.895 "trtype": "TCP", 00:17:44.895 "adrfam": "IPv4", 00:17:44.895 "traddr": "10.0.0.1", 00:17:44.895 "trsvcid": "57436" 00:17:44.895 }, 00:17:44.895 "auth": { 00:17:44.895 "state": "completed", 00:17:44.895 "digest": "sha256", 00:17:44.895 "dhgroup": "ffdhe6144" 00:17:44.895 } 00:17:44.895 } 00:17:44.895 ]' 00:17:44.895 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:44.895 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:44.895 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:44.895 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:44.895 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:44.895 15:13:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:44.895 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:44.895 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:45.156 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:MTlhYmQ5MjI4OWIxMzFkZTVmNzQ5ZWZlYjYzNTIzOTViZGRmNWNjYTVmMzAyY2MwDN7QAA==: --dhchap-ctrl-secret DHHC-1:03:OTNlN2ZmNGIzNGRmOTIzNzc1ZGVjZTFiOTZjMzE1NmQ4YWM4YzQxOWVjNzM4ZTU2MzVhZGIwMmNjMTZlMmUxMSm6T2Y=: 00:17:46.100 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:46.100 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:46.100 15:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:46.100 15:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.100 15:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.100 15:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.100 15:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:46.100 15:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups ffdhe6144 00:17:46.100 15:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:46.100 15:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:17:46.100 15:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:46.100 15:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:46.100 15:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:46.100 15:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:46.100 15:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:46.100 15:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:46.100 15:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.100 15:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.100 15:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.100 15:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:46.100 15:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:46.361 00:17:46.361 15:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:46.361 15:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:46.361 15:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:46.622 15:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:46.622 15:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:46.622 15:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.622 15:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.622 15:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.622 15:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:46.622 { 00:17:46.622 "cntlid": 35, 00:17:46.622 "qid": 0, 00:17:46.622 "state": "enabled", 00:17:46.622 "thread": "nvmf_tgt_poll_group_000", 00:17:46.622 "listen_address": { 00:17:46.622 "trtype": "TCP", 00:17:46.622 "adrfam": "IPv4", 00:17:46.622 "traddr": "10.0.0.2", 00:17:46.622 "trsvcid": "4420" 00:17:46.622 }, 00:17:46.622 "peer_address": { 00:17:46.622 "trtype": "TCP", 00:17:46.622 "adrfam": "IPv4", 00:17:46.622 "traddr": "10.0.0.1", 00:17:46.622 "trsvcid": "57466" 00:17:46.622 
}, 00:17:46.622 "auth": { 00:17:46.622 "state": "completed", 00:17:46.622 "digest": "sha256", 00:17:46.622 "dhgroup": "ffdhe6144" 00:17:46.622 } 00:17:46.622 } 00:17:46.622 ]' 00:17:46.622 15:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:46.622 15:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:46.622 15:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:46.883 15:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:46.883 15:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:46.883 15:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:46.883 15:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:46.883 15:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:46.883 15:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:ZDJlZDk0ZWQxOWYxYjMzNzdkODhjZDAxY2E5YWM0MmNmF230: --dhchap-ctrl-secret DHHC-1:02:ZGJhNTFkNjMzOTE4ZWRhNTk4MTUxODE4NmRmOGE4NjNiNWZhMWQ0YmZkZmU0ODI24u6JVg==: 00:17:47.828 15:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:47.828 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:47.828 15:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:47.828 15:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.828 15:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.828 15:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.828 15:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:47.828 15:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:47.828 15:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:47.828 15:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:17:47.828 15:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:47.828 15:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:47.828 15:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:47.828 15:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:47.828 15:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:47.828 15:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key 
key2 --dhchap-ctrlr-key ckey2 00:17:47.828 15:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.828 15:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.828 15:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.828 15:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:47.828 15:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:48.444 00:17:48.445 15:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:48.445 15:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:48.445 15:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:48.445 15:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:48.445 15:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:48.445 15:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.445 15:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:17:48.445 15:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.445 15:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:48.445 { 00:17:48.445 "cntlid": 37, 00:17:48.445 "qid": 0, 00:17:48.445 "state": "enabled", 00:17:48.445 "thread": "nvmf_tgt_poll_group_000", 00:17:48.445 "listen_address": { 00:17:48.445 "trtype": "TCP", 00:17:48.445 "adrfam": "IPv4", 00:17:48.445 "traddr": "10.0.0.2", 00:17:48.445 "trsvcid": "4420" 00:17:48.445 }, 00:17:48.445 "peer_address": { 00:17:48.445 "trtype": "TCP", 00:17:48.445 "adrfam": "IPv4", 00:17:48.445 "traddr": "10.0.0.1", 00:17:48.445 "trsvcid": "57490" 00:17:48.445 }, 00:17:48.445 "auth": { 00:17:48.445 "state": "completed", 00:17:48.445 "digest": "sha256", 00:17:48.445 "dhgroup": "ffdhe6144" 00:17:48.445 } 00:17:48.445 } 00:17:48.445 ]' 00:17:48.445 15:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:48.445 15:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:48.445 15:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:48.445 15:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:48.445 15:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:48.705 15:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:48.705 15:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:48.705 15:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:17:48.705 15:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:NzVmMjU2NWM1OWIzYWNmMjEyZWE2NDg5NjdjMmIyY2U4MmMwNTJkMzFkYzMwOWFkdr2ubA==: --dhchap-ctrl-secret DHHC-1:01:YzRjYzkyODM5NDUxMDMzZTY0MTUwNmJhYjRjN2EyODYd1nJh: 00:17:49.648 15:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:49.648 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:49.648 15:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:49.648 15:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.648 15:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.648 15:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.648 15:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:49.648 15:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:49.648 15:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:49.648 15:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:17:49.648 15:13:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:49.648 15:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:49.648 15:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:49.648 15:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:49.648 15:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:49.648 15:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:49.648 15:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.648 15:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.648 15:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.648 15:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:49.648 15:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:49.909 00:17:49.909 15:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:49.909 15:13:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:49.909 15:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:50.171 15:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:50.171 15:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:50.171 15:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.171 15:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.171 15:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.171 15:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:50.171 { 00:17:50.171 "cntlid": 39, 00:17:50.171 "qid": 0, 00:17:50.171 "state": "enabled", 00:17:50.171 "thread": "nvmf_tgt_poll_group_000", 00:17:50.171 "listen_address": { 00:17:50.171 "trtype": "TCP", 00:17:50.171 "adrfam": "IPv4", 00:17:50.171 "traddr": "10.0.0.2", 00:17:50.171 "trsvcid": "4420" 00:17:50.171 }, 00:17:50.171 "peer_address": { 00:17:50.171 "trtype": "TCP", 00:17:50.171 "adrfam": "IPv4", 00:17:50.171 "traddr": "10.0.0.1", 00:17:50.171 "trsvcid": "57512" 00:17:50.171 }, 00:17:50.171 "auth": { 00:17:50.171 "state": "completed", 00:17:50.171 "digest": "sha256", 00:17:50.171 "dhgroup": "ffdhe6144" 00:17:50.171 } 00:17:50.171 } 00:17:50.171 ]' 00:17:50.171 15:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:50.171 15:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:50.171 15:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:50.171 15:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:50.171 15:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:50.432 15:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:50.432 15:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:50.432 15:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:50.432 15:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:N2QyYjhjM2NkOWRlMjIwMWFhYjE2YTI4YjVjYTQ1MmIwNGE0ZTNjMzY2ZjA3ZDgwN2JlYWIxODVlZjAzNWEyNwauYnc=: 00:17:51.400 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:51.400 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:51.400 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:51.400 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.400 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.400 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.400 15:13:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:51.400 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:51.400 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:51.400 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:51.400 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:17:51.400 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:51.400 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:51.400 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:51.400 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:51.400 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:51.400 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:51.400 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.400 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.400 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.400 15:13:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:51.400 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:51.973 00:17:51.973 15:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:51.973 15:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:51.973 15:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:52.235 15:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:52.235 15:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:52.235 15:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.235 15:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.235 15:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.235 15:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:52.235 { 00:17:52.235 "cntlid": 41, 00:17:52.235 "qid": 0, 00:17:52.235 "state": "enabled", 00:17:52.235 "thread": 
"nvmf_tgt_poll_group_000", 00:17:52.235 "listen_address": { 00:17:52.235 "trtype": "TCP", 00:17:52.235 "adrfam": "IPv4", 00:17:52.235 "traddr": "10.0.0.2", 00:17:52.235 "trsvcid": "4420" 00:17:52.235 }, 00:17:52.235 "peer_address": { 00:17:52.235 "trtype": "TCP", 00:17:52.235 "adrfam": "IPv4", 00:17:52.235 "traddr": "10.0.0.1", 00:17:52.235 "trsvcid": "57534" 00:17:52.235 }, 00:17:52.235 "auth": { 00:17:52.235 "state": "completed", 00:17:52.235 "digest": "sha256", 00:17:52.235 "dhgroup": "ffdhe8192" 00:17:52.235 } 00:17:52.235 } 00:17:52.235 ]' 00:17:52.235 15:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:52.235 15:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:52.235 15:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:52.235 15:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:52.235 15:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:52.235 15:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:52.235 15:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:52.235 15:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:52.495 15:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret 
DHHC-1:00:MTlhYmQ5MjI4OWIxMzFkZTVmNzQ5ZWZlYjYzNTIzOTViZGRmNWNjYTVmMzAyY2MwDN7QAA==: --dhchap-ctrl-secret DHHC-1:03:OTNlN2ZmNGIzNGRmOTIzNzc1ZGVjZTFiOTZjMzE1NmQ4YWM4YzQxOWVjNzM4ZTU2MzVhZGIwMmNjMTZlMmUxMSm6T2Y=: 00:17:53.068 15:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:53.068 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:53.068 15:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:53.068 15:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.068 15:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.068 15:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.068 15:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:53.068 15:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:53.068 15:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:53.330 15:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:17:53.330 15:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:53.330 15:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:53.330 15:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # 
dhgroup=ffdhe8192 00:17:53.330 15:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:53.330 15:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:53.330 15:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:53.330 15:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.330 15:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.330 15:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.330 15:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:53.330 15:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:53.902 00:17:53.902 15:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:53.902 15:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:53.902 15:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:17:54.164 15:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:54.164 15:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:54.164 15:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.164 15:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.164 15:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.164 15:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:54.164 { 00:17:54.164 "cntlid": 43, 00:17:54.164 "qid": 0, 00:17:54.164 "state": "enabled", 00:17:54.164 "thread": "nvmf_tgt_poll_group_000", 00:17:54.164 "listen_address": { 00:17:54.164 "trtype": "TCP", 00:17:54.164 "adrfam": "IPv4", 00:17:54.164 "traddr": "10.0.0.2", 00:17:54.164 "trsvcid": "4420" 00:17:54.164 }, 00:17:54.164 "peer_address": { 00:17:54.164 "trtype": "TCP", 00:17:54.164 "adrfam": "IPv4", 00:17:54.164 "traddr": "10.0.0.1", 00:17:54.164 "trsvcid": "39982" 00:17:54.164 }, 00:17:54.164 "auth": { 00:17:54.164 "state": "completed", 00:17:54.164 "digest": "sha256", 00:17:54.164 "dhgroup": "ffdhe8192" 00:17:54.164 } 00:17:54.164 } 00:17:54.164 ]' 00:17:54.164 15:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:54.164 15:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:54.164 15:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:54.164 15:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:54.164 15:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:54.164 15:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:54.164 15:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:54.164 15:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:54.425 15:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:ZDJlZDk0ZWQxOWYxYjMzNzdkODhjZDAxY2E5YWM0MmNmF230: --dhchap-ctrl-secret DHHC-1:02:ZGJhNTFkNjMzOTE4ZWRhNTk4MTUxODE4NmRmOGE4NjNiNWZhMWQ0YmZkZmU0ODI24u6JVg==: 00:17:54.997 15:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:54.997 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:54.997 15:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:54.997 15:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.997 15:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.258 15:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.258 15:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:55.258 15:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:55.258 15:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:55.258 15:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:17:55.258 15:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:55.258 15:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:55.258 15:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:55.258 15:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:55.258 15:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:55.258 15:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:55.258 15:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.258 15:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.258 15:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.258 15:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:55.258 15:13:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:55.831 00:17:55.831 15:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:55.831 15:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:55.831 15:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:56.092 15:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:56.092 15:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:56.092 15:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.092 15:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.092 15:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.092 15:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:56.092 { 00:17:56.092 "cntlid": 45, 00:17:56.092 "qid": 0, 00:17:56.092 "state": "enabled", 00:17:56.092 "thread": "nvmf_tgt_poll_group_000", 00:17:56.092 "listen_address": { 00:17:56.092 "trtype": "TCP", 00:17:56.092 "adrfam": "IPv4", 00:17:56.092 "traddr": "10.0.0.2", 00:17:56.092 "trsvcid": "4420" 00:17:56.092 }, 00:17:56.092 "peer_address": { 00:17:56.092 "trtype": "TCP", 00:17:56.092 "adrfam": "IPv4", 00:17:56.092 "traddr": "10.0.0.1", 
00:17:56.092 "trsvcid": "40020" 00:17:56.092 }, 00:17:56.092 "auth": { 00:17:56.092 "state": "completed", 00:17:56.092 "digest": "sha256", 00:17:56.092 "dhgroup": "ffdhe8192" 00:17:56.092 } 00:17:56.092 } 00:17:56.092 ]' 00:17:56.092 15:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:56.092 15:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:56.092 15:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:56.092 15:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:56.092 15:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:56.092 15:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:56.092 15:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:56.092 15:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:56.353 15:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:NzVmMjU2NWM1OWIzYWNmMjEyZWE2NDg5NjdjMmIyY2U4MmMwNTJkMzFkYzMwOWFkdr2ubA==: --dhchap-ctrl-secret DHHC-1:01:YzRjYzkyODM5NDUxMDMzZTY0MTUwNmJhYjRjN2EyODYd1nJh: 00:17:56.924 15:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:57.185 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:57.185 15:13:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:57.185 15:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.185 15:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.185 15:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.185 15:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:57.185 15:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:57.185 15:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:57.185 15:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:17:57.185 15:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:57.185 15:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:57.185 15:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:57.185 15:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:57.185 15:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:57.185 15:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:57.185 15:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.185 15:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.186 15:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.186 15:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:57.186 15:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:57.757 00:17:57.757 15:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:57.757 15:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:57.757 15:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:58.018 15:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:58.018 15:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:58.018 15:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.018 15:13:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.018 15:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.018 15:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:58.018 { 00:17:58.018 "cntlid": 47, 00:17:58.018 "qid": 0, 00:17:58.018 "state": "enabled", 00:17:58.018 "thread": "nvmf_tgt_poll_group_000", 00:17:58.018 "listen_address": { 00:17:58.018 "trtype": "TCP", 00:17:58.018 "adrfam": "IPv4", 00:17:58.018 "traddr": "10.0.0.2", 00:17:58.018 "trsvcid": "4420" 00:17:58.018 }, 00:17:58.018 "peer_address": { 00:17:58.018 "trtype": "TCP", 00:17:58.018 "adrfam": "IPv4", 00:17:58.018 "traddr": "10.0.0.1", 00:17:58.018 "trsvcid": "40060" 00:17:58.018 }, 00:17:58.018 "auth": { 00:17:58.018 "state": "completed", 00:17:58.018 "digest": "sha256", 00:17:58.018 "dhgroup": "ffdhe8192" 00:17:58.018 } 00:17:58.018 } 00:17:58.018 ]' 00:17:58.018 15:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:58.018 15:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:58.018 15:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:58.018 15:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:58.018 15:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:58.018 15:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:58.018 15:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:58.018 15:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:58.279 15:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:N2QyYjhjM2NkOWRlMjIwMWFhYjE2YTI4YjVjYTQ1MmIwNGE0ZTNjMzY2ZjA3ZDgwN2JlYWIxODVlZjAzNWEyNwauYnc=: 00:17:58.851 15:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:58.851 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:58.851 15:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:58.852 15:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.852 15:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.112 15:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.112 15:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:17:59.112 15:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:59.112 15:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:59.112 15:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:59.112 15:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:59.112 15:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:17:59.112 15:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:59.112 15:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:59.112 15:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:59.112 15:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:59.112 15:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:59.112 15:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:59.112 15:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.112 15:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.112 15:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.112 15:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:59.112 15:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:59.373 00:17:59.373 15:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:59.373 15:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:59.373 15:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:59.634 15:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:59.634 15:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:59.634 15:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.634 15:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.634 15:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.634 15:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:59.634 { 00:17:59.634 "cntlid": 49, 00:17:59.634 "qid": 0, 00:17:59.634 "state": "enabled", 00:17:59.634 "thread": "nvmf_tgt_poll_group_000", 00:17:59.634 "listen_address": { 00:17:59.634 "trtype": "TCP", 00:17:59.634 "adrfam": "IPv4", 00:17:59.634 "traddr": "10.0.0.2", 00:17:59.634 "trsvcid": "4420" 00:17:59.634 }, 00:17:59.634 "peer_address": { 00:17:59.634 "trtype": "TCP", 00:17:59.634 "adrfam": "IPv4", 00:17:59.634 "traddr": "10.0.0.1", 00:17:59.634 "trsvcid": "40076" 00:17:59.634 }, 00:17:59.634 "auth": { 00:17:59.634 "state": "completed", 00:17:59.634 "digest": "sha384", 00:17:59.634 "dhgroup": "null" 00:17:59.634 } 00:17:59.634 } 00:17:59.634 ]' 00:17:59.634 
15:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:59.634 15:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:59.634 15:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:59.634 15:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:59.634 15:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:59.634 15:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:59.634 15:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:59.634 15:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:59.895 15:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:MTlhYmQ5MjI4OWIxMzFkZTVmNzQ5ZWZlYjYzNTIzOTViZGRmNWNjYTVmMzAyY2MwDN7QAA==: --dhchap-ctrl-secret DHHC-1:03:OTNlN2ZmNGIzNGRmOTIzNzc1ZGVjZTFiOTZjMzE1NmQ4YWM4YzQxOWVjNzM4ZTU2MzVhZGIwMmNjMTZlMmUxMSm6T2Y=: 00:18:00.839 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:00.839 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:00.839 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:00.839 
15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.839 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.839 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.839 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:00.839 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:00.839 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:00.839 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:18:00.839 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:00.839 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:00.839 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:00.839 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:00.839 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:00.839 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:00.839 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.839 15:13:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.839 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.839 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:00.840 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:01.100 00:18:01.100 15:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:01.100 15:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:01.100 15:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:01.100 15:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:01.100 15:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:01.100 15:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.100 15:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.100 15:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:18:01.100 15:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:01.100 { 00:18:01.100 "cntlid": 51, 00:18:01.100 "qid": 0, 00:18:01.100 "state": "enabled", 00:18:01.100 "thread": "nvmf_tgt_poll_group_000", 00:18:01.100 "listen_address": { 00:18:01.100 "trtype": "TCP", 00:18:01.100 "adrfam": "IPv4", 00:18:01.100 "traddr": "10.0.0.2", 00:18:01.100 "trsvcid": "4420" 00:18:01.100 }, 00:18:01.100 "peer_address": { 00:18:01.100 "trtype": "TCP", 00:18:01.100 "adrfam": "IPv4", 00:18:01.100 "traddr": "10.0.0.1", 00:18:01.100 "trsvcid": "40108" 00:18:01.100 }, 00:18:01.100 "auth": { 00:18:01.100 "state": "completed", 00:18:01.100 "digest": "sha384", 00:18:01.100 "dhgroup": "null" 00:18:01.100 } 00:18:01.100 } 00:18:01.100 ]' 00:18:01.100 15:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:01.360 15:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:01.361 15:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:01.361 15:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:01.361 15:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:01.361 15:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:01.361 15:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:01.361 15:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:01.620 15:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:ZDJlZDk0ZWQxOWYxYjMzNzdkODhjZDAxY2E5YWM0MmNmF230: --dhchap-ctrl-secret DHHC-1:02:ZGJhNTFkNjMzOTE4ZWRhNTk4MTUxODE4NmRmOGE4NjNiNWZhMWQ0YmZkZmU0ODI24u6JVg==: 00:18:02.193 15:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:02.193 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:02.193 15:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:02.193 15:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.193 15:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.193 15:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.193 15:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:02.193 15:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:02.193 15:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:02.454 15:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:18:02.454 15:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:02.454 15:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:02.454 15:13:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:02.454 15:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:02.454 15:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:02.454 15:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:02.454 15:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.454 15:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.454 15:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.454 15:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:02.454 15:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:02.715 00:18:02.715 15:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:02.715 15:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:02.715 15:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:02.715 15:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:02.715 15:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:02.715 15:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.715 15:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.977 15:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.977 15:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:02.977 { 00:18:02.977 "cntlid": 53, 00:18:02.977 "qid": 0, 00:18:02.977 "state": "enabled", 00:18:02.977 "thread": "nvmf_tgt_poll_group_000", 00:18:02.977 "listen_address": { 00:18:02.977 "trtype": "TCP", 00:18:02.977 "adrfam": "IPv4", 00:18:02.977 "traddr": "10.0.0.2", 00:18:02.977 "trsvcid": "4420" 00:18:02.977 }, 00:18:02.977 "peer_address": { 00:18:02.977 "trtype": "TCP", 00:18:02.977 "adrfam": "IPv4", 00:18:02.977 "traddr": "10.0.0.1", 00:18:02.977 "trsvcid": "56220" 00:18:02.977 }, 00:18:02.977 "auth": { 00:18:02.977 "state": "completed", 00:18:02.977 "digest": "sha384", 00:18:02.977 "dhgroup": "null" 00:18:02.977 } 00:18:02.977 } 00:18:02.977 ]' 00:18:02.977 15:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:02.977 15:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:02.977 15:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:02.977 15:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:02.977 15:13:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:02.977 15:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:02.977 15:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:02.977 15:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:03.237 15:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:NzVmMjU2NWM1OWIzYWNmMjEyZWE2NDg5NjdjMmIyY2U4MmMwNTJkMzFkYzMwOWFkdr2ubA==: --dhchap-ctrl-secret DHHC-1:01:YzRjYzkyODM5NDUxMDMzZTY0MTUwNmJhYjRjN2EyODYd1nJh: 00:18:03.810 15:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:03.810 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:03.810 15:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:03.810 15:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.810 15:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.810 15:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.810 15:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:03.810 15:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:03.810 15:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:04.071 15:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:18:04.071 15:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:04.071 15:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:04.071 15:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:04.071 15:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:04.071 15:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:04.071 15:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:04.071 15:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.071 15:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.071 15:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.071 15:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:04.071 15:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:04.332 00:18:04.332 15:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:04.332 15:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:04.332 15:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:04.332 15:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:04.332 15:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:04.332 15:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.332 15:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.332 15:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.332 15:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:04.332 { 00:18:04.332 "cntlid": 55, 00:18:04.332 "qid": 0, 00:18:04.332 "state": "enabled", 00:18:04.332 "thread": "nvmf_tgt_poll_group_000", 00:18:04.332 "listen_address": { 00:18:04.332 "trtype": "TCP", 00:18:04.332 "adrfam": "IPv4", 00:18:04.332 "traddr": "10.0.0.2", 00:18:04.332 "trsvcid": "4420" 00:18:04.332 }, 00:18:04.332 "peer_address": { 00:18:04.332 "trtype": "TCP", 00:18:04.333 "adrfam": "IPv4", 00:18:04.333 "traddr": "10.0.0.1", 00:18:04.333 "trsvcid": "56246" 00:18:04.333 }, 00:18:04.333 "auth": { 
00:18:04.333 "state": "completed", 00:18:04.333 "digest": "sha384", 00:18:04.333 "dhgroup": "null" 00:18:04.333 } 00:18:04.333 } 00:18:04.333 ]' 00:18:04.333 15:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:04.594 15:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:04.594 15:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:04.594 15:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:04.594 15:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:04.594 15:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:04.594 15:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:04.594 15:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:04.855 15:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:N2QyYjhjM2NkOWRlMjIwMWFhYjE2YTI4YjVjYTQ1MmIwNGE0ZTNjMzY2ZjA3ZDgwN2JlYWIxODVlZjAzNWEyNwauYnc=: 00:18:05.428 15:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:05.428 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:05.428 15:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:05.428 15:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.428 15:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.428 15:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.428 15:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:05.428 15:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:05.428 15:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:05.428 15:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:05.733 15:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:18:05.734 15:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:05.734 15:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:05.734 15:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:05.734 15:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:05.734 15:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:05.734 15:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:05.734 15:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.734 15:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.734 15:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.734 15:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:05.734 15:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:06.019 00:18:06.019 15:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:06.019 15:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:06.019 15:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:06.019 15:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:06.019 15:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:06.019 15:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:18:06.019 15:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.019 15:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.019 15:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:06.019 { 00:18:06.019 "cntlid": 57, 00:18:06.019 "qid": 0, 00:18:06.019 "state": "enabled", 00:18:06.019 "thread": "nvmf_tgt_poll_group_000", 00:18:06.019 "listen_address": { 00:18:06.019 "trtype": "TCP", 00:18:06.019 "adrfam": "IPv4", 00:18:06.019 "traddr": "10.0.0.2", 00:18:06.019 "trsvcid": "4420" 00:18:06.019 }, 00:18:06.019 "peer_address": { 00:18:06.019 "trtype": "TCP", 00:18:06.019 "adrfam": "IPv4", 00:18:06.019 "traddr": "10.0.0.1", 00:18:06.019 "trsvcid": "56262" 00:18:06.019 }, 00:18:06.019 "auth": { 00:18:06.019 "state": "completed", 00:18:06.019 "digest": "sha384", 00:18:06.019 "dhgroup": "ffdhe2048" 00:18:06.019 } 00:18:06.019 } 00:18:06.019 ]' 00:18:06.019 15:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:06.019 15:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:06.019 15:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:06.280 15:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:06.280 15:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:06.280 15:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:06.280 15:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:06.280 15:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:06.280 15:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:MTlhYmQ5MjI4OWIxMzFkZTVmNzQ5ZWZlYjYzNTIzOTViZGRmNWNjYTVmMzAyY2MwDN7QAA==: --dhchap-ctrl-secret DHHC-1:03:OTNlN2ZmNGIzNGRmOTIzNzc1ZGVjZTFiOTZjMzE1NmQ4YWM4YzQxOWVjNzM4ZTU2MzVhZGIwMmNjMTZlMmUxMSm6T2Y=: 00:18:07.222 15:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:07.222 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:07.222 15:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:07.222 15:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.222 15:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.222 15:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.222 15:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:07.222 15:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:07.222 15:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:07.222 15:13:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:18:07.222 15:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:07.222 15:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:07.222 15:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:07.222 15:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:07.222 15:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:07.222 15:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:07.222 15:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.222 15:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.222 15:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.222 15:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:07.222 15:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:18:07.483 00:18:07.483 15:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:07.483 15:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:07.483 15:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:07.743 15:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:07.743 15:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:07.743 15:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.743 15:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.743 15:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.743 15:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:07.743 { 00:18:07.743 "cntlid": 59, 00:18:07.743 "qid": 0, 00:18:07.743 "state": "enabled", 00:18:07.743 "thread": "nvmf_tgt_poll_group_000", 00:18:07.743 "listen_address": { 00:18:07.743 "trtype": "TCP", 00:18:07.743 "adrfam": "IPv4", 00:18:07.743 "traddr": "10.0.0.2", 00:18:07.743 "trsvcid": "4420" 00:18:07.743 }, 00:18:07.743 "peer_address": { 00:18:07.743 "trtype": "TCP", 00:18:07.743 "adrfam": "IPv4", 00:18:07.743 "traddr": "10.0.0.1", 00:18:07.743 "trsvcid": "56292" 00:18:07.743 }, 00:18:07.743 "auth": { 00:18:07.743 "state": "completed", 00:18:07.743 "digest": "sha384", 00:18:07.743 "dhgroup": "ffdhe2048" 00:18:07.743 } 00:18:07.743 } 00:18:07.743 ]' 00:18:07.743 15:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:07.743 
15:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:07.743 15:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:07.743 15:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:07.743 15:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:07.743 15:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:07.743 15:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:07.743 15:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:08.004 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:ZDJlZDk0ZWQxOWYxYjMzNzdkODhjZDAxY2E5YWM0MmNmF230: --dhchap-ctrl-secret DHHC-1:02:ZGJhNTFkNjMzOTE4ZWRhNTk4MTUxODE4NmRmOGE4NjNiNWZhMWQ0YmZkZmU0ODI24u6JVg==: 00:18:08.576 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:08.576 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:08.576 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:08.576 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.576 15:14:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.576 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.576 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:08.576 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:08.576 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:08.838 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:18:08.838 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:08.838 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:08.838 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:08.838 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:08.838 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:08.838 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:08.838 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.838 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.838 15:14:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.838 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:08.838 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:09.098 00:18:09.098 15:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:09.098 15:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:09.098 15:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:09.359 15:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:09.359 15:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:09.359 15:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.359 15:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.359 15:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.359 15:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:09.359 { 
00:18:09.359 "cntlid": 61, 00:18:09.359 "qid": 0, 00:18:09.359 "state": "enabled", 00:18:09.359 "thread": "nvmf_tgt_poll_group_000", 00:18:09.359 "listen_address": { 00:18:09.359 "trtype": "TCP", 00:18:09.359 "adrfam": "IPv4", 00:18:09.359 "traddr": "10.0.0.2", 00:18:09.359 "trsvcid": "4420" 00:18:09.359 }, 00:18:09.359 "peer_address": { 00:18:09.359 "trtype": "TCP", 00:18:09.359 "adrfam": "IPv4", 00:18:09.359 "traddr": "10.0.0.1", 00:18:09.359 "trsvcid": "56318" 00:18:09.359 }, 00:18:09.359 "auth": { 00:18:09.359 "state": "completed", 00:18:09.359 "digest": "sha384", 00:18:09.359 "dhgroup": "ffdhe2048" 00:18:09.359 } 00:18:09.359 } 00:18:09.359 ]' 00:18:09.359 15:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:09.359 15:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:09.359 15:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:09.359 15:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:09.359 15:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:09.359 15:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:09.359 15:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:09.359 15:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:09.620 15:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 
00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:NzVmMjU2NWM1OWIzYWNmMjEyZWE2NDg5NjdjMmIyY2U4MmMwNTJkMzFkYzMwOWFkdr2ubA==: --dhchap-ctrl-secret DHHC-1:01:YzRjYzkyODM5NDUxMDMzZTY0MTUwNmJhYjRjN2EyODYd1nJh: 00:18:10.563 15:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:10.563 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:10.563 15:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:10.563 15:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.563 15:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.563 15:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.563 15:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:10.563 15:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:10.563 15:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:10.563 15:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:18:10.563 15:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:10.563 15:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:10.563 15:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 
-- # dhgroup=ffdhe2048 00:18:10.563 15:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:10.563 15:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:10.563 15:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:10.563 15:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.563 15:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.563 15:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.563 15:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:10.563 15:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:10.825 00:18:10.825 15:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:10.825 15:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:10.825 15:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:10.825 15:14:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:10.825 15:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:10.825 15:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.825 15:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.825 15:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.825 15:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:10.825 { 00:18:10.825 "cntlid": 63, 00:18:10.825 "qid": 0, 00:18:10.825 "state": "enabled", 00:18:10.825 "thread": "nvmf_tgt_poll_group_000", 00:18:10.825 "listen_address": { 00:18:10.825 "trtype": "TCP", 00:18:10.825 "adrfam": "IPv4", 00:18:10.825 "traddr": "10.0.0.2", 00:18:10.825 "trsvcid": "4420" 00:18:10.825 }, 00:18:10.825 "peer_address": { 00:18:10.825 "trtype": "TCP", 00:18:10.825 "adrfam": "IPv4", 00:18:10.825 "traddr": "10.0.0.1", 00:18:10.825 "trsvcid": "56348" 00:18:10.825 }, 00:18:10.825 "auth": { 00:18:10.825 "state": "completed", 00:18:10.825 "digest": "sha384", 00:18:10.825 "dhgroup": "ffdhe2048" 00:18:10.825 } 00:18:10.825 } 00:18:10.825 ]' 00:18:11.086 15:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:11.086 15:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:11.086 15:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:11.086 15:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:11.086 15:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:11.086 15:14:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:11.086 15:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:11.086 15:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:11.347 15:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:N2QyYjhjM2NkOWRlMjIwMWFhYjE2YTI4YjVjYTQ1MmIwNGE0ZTNjMzY2ZjA3ZDgwN2JlYWIxODVlZjAzNWEyNwauYnc=: 00:18:11.917 15:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:11.917 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:11.917 15:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:11.917 15:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.917 15:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.917 15:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.917 15:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:11.917 15:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:11.917 15:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:11.917 15:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:12.178 15:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:18:12.178 15:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:12.178 15:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:12.178 15:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:12.178 15:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:12.178 15:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:12.178 15:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:12.178 15:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.178 15:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.178 15:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.178 15:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:12.178 15:14:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:12.438 00:18:12.438 15:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:12.438 15:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:12.439 15:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:12.700 15:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:12.700 15:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:12.700 15:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.700 15:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.700 15:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.700 15:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:12.700 { 00:18:12.700 "cntlid": 65, 00:18:12.700 "qid": 0, 00:18:12.700 "state": "enabled", 00:18:12.700 "thread": "nvmf_tgt_poll_group_000", 00:18:12.700 "listen_address": { 00:18:12.700 "trtype": "TCP", 00:18:12.700 "adrfam": "IPv4", 00:18:12.700 "traddr": "10.0.0.2", 00:18:12.700 "trsvcid": "4420" 00:18:12.700 }, 00:18:12.700 "peer_address": { 00:18:12.700 "trtype": "TCP", 00:18:12.700 "adrfam": "IPv4", 00:18:12.700 "traddr": "10.0.0.1", 
00:18:12.700 "trsvcid": "52656" 00:18:12.700 }, 00:18:12.700 "auth": { 00:18:12.700 "state": "completed", 00:18:12.700 "digest": "sha384", 00:18:12.700 "dhgroup": "ffdhe3072" 00:18:12.700 } 00:18:12.700 } 00:18:12.700 ]' 00:18:12.700 15:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:12.700 15:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:12.700 15:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:12.700 15:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:12.700 15:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:12.700 15:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:12.700 15:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:12.700 15:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:12.971 15:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:MTlhYmQ5MjI4OWIxMzFkZTVmNzQ5ZWZlYjYzNTIzOTViZGRmNWNjYTVmMzAyY2MwDN7QAA==: --dhchap-ctrl-secret DHHC-1:03:OTNlN2ZmNGIzNGRmOTIzNzc1ZGVjZTFiOTZjMzE1NmQ4YWM4YzQxOWVjNzM4ZTU2MzVhZGIwMmNjMTZlMmUxMSm6T2Y=: 00:18:13.548 15:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:13.548 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:18:13.548 15:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:13.548 15:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.548 15:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.548 15:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.548 15:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:13.548 15:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:13.548 15:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:13.809 15:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:18:13.809 15:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:13.809 15:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:13.809 15:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:13.809 15:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:13.809 15:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:13.809 15:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:13.809 15:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.809 15:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.809 15:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.809 15:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:13.809 15:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:14.070 00:18:14.070 15:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:14.070 15:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:14.070 15:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:14.332 15:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:14.332 15:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:14.332 15:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:18:14.332 15:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.332 15:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.332 15:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:14.332 { 00:18:14.332 "cntlid": 67, 00:18:14.332 "qid": 0, 00:18:14.332 "state": "enabled", 00:18:14.332 "thread": "nvmf_tgt_poll_group_000", 00:18:14.332 "listen_address": { 00:18:14.332 "trtype": "TCP", 00:18:14.332 "adrfam": "IPv4", 00:18:14.332 "traddr": "10.0.0.2", 00:18:14.332 "trsvcid": "4420" 00:18:14.332 }, 00:18:14.332 "peer_address": { 00:18:14.332 "trtype": "TCP", 00:18:14.332 "adrfam": "IPv4", 00:18:14.332 "traddr": "10.0.0.1", 00:18:14.332 "trsvcid": "52674" 00:18:14.332 }, 00:18:14.332 "auth": { 00:18:14.332 "state": "completed", 00:18:14.332 "digest": "sha384", 00:18:14.332 "dhgroup": "ffdhe3072" 00:18:14.332 } 00:18:14.332 } 00:18:14.332 ]' 00:18:14.332 15:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:14.332 15:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:14.332 15:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:14.332 15:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:14.332 15:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:14.332 15:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:14.332 15:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:14.332 15:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:14.593 15:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:ZDJlZDk0ZWQxOWYxYjMzNzdkODhjZDAxY2E5YWM0MmNmF230: --dhchap-ctrl-secret DHHC-1:02:ZGJhNTFkNjMzOTE4ZWRhNTk4MTUxODE4NmRmOGE4NjNiNWZhMWQ0YmZkZmU0ODI24u6JVg==: 00:18:15.165 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:15.425 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:15.425 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:15.425 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.425 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.425 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.425 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:15.426 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:15.426 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:15.426 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha384 ffdhe3072 2 00:18:15.426 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:15.426 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:15.426 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:15.426 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:15.426 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:15.426 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:15.426 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.426 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.426 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.426 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:15.426 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:15.686 00:18:15.686 15:14:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:15.686 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:15.686 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:15.946 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:15.946 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:15.946 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.946 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.946 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.946 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:15.946 { 00:18:15.946 "cntlid": 69, 00:18:15.946 "qid": 0, 00:18:15.946 "state": "enabled", 00:18:15.947 "thread": "nvmf_tgt_poll_group_000", 00:18:15.947 "listen_address": { 00:18:15.947 "trtype": "TCP", 00:18:15.947 "adrfam": "IPv4", 00:18:15.947 "traddr": "10.0.0.2", 00:18:15.947 "trsvcid": "4420" 00:18:15.947 }, 00:18:15.947 "peer_address": { 00:18:15.947 "trtype": "TCP", 00:18:15.947 "adrfam": "IPv4", 00:18:15.947 "traddr": "10.0.0.1", 00:18:15.947 "trsvcid": "52692" 00:18:15.947 }, 00:18:15.947 "auth": { 00:18:15.947 "state": "completed", 00:18:15.947 "digest": "sha384", 00:18:15.947 "dhgroup": "ffdhe3072" 00:18:15.947 } 00:18:15.947 } 00:18:15.947 ]' 00:18:15.947 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:15.947 15:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:15.947 15:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:15.947 15:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:15.947 15:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:15.947 15:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:15.947 15:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:15.947 15:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:16.207 15:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:NzVmMjU2NWM1OWIzYWNmMjEyZWE2NDg5NjdjMmIyY2U4MmMwNTJkMzFkYzMwOWFkdr2ubA==: --dhchap-ctrl-secret DHHC-1:01:YzRjYzkyODM5NDUxMDMzZTY0MTUwNmJhYjRjN2EyODYd1nJh: 00:18:17.148 15:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:17.148 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:17.148 15:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:17.148 15:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.148 15:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:17.148 15:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.148 15:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:17.148 15:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:17.148 15:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:17.148 15:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:18:17.148 15:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:17.148 15:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:17.148 15:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:17.148 15:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:17.148 15:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:17.148 15:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:17.148 15:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.148 15:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.148 15:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:18:17.148 15:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:17.148 15:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:17.409 00:18:17.409 15:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:17.409 15:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:17.409 15:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:17.671 15:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:17.671 15:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:17.671 15:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.671 15:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.671 15:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.671 15:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:17.671 { 00:18:17.671 "cntlid": 71, 00:18:17.671 "qid": 0, 00:18:17.671 "state": "enabled", 00:18:17.671 "thread": "nvmf_tgt_poll_group_000", 
00:18:17.671 "listen_address": { 00:18:17.671 "trtype": "TCP", 00:18:17.671 "adrfam": "IPv4", 00:18:17.671 "traddr": "10.0.0.2", 00:18:17.671 "trsvcid": "4420" 00:18:17.671 }, 00:18:17.671 "peer_address": { 00:18:17.671 "trtype": "TCP", 00:18:17.671 "adrfam": "IPv4", 00:18:17.671 "traddr": "10.0.0.1", 00:18:17.671 "trsvcid": "52712" 00:18:17.671 }, 00:18:17.671 "auth": { 00:18:17.671 "state": "completed", 00:18:17.671 "digest": "sha384", 00:18:17.671 "dhgroup": "ffdhe3072" 00:18:17.671 } 00:18:17.671 } 00:18:17.671 ]' 00:18:17.671 15:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:17.671 15:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:17.671 15:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:17.671 15:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:17.671 15:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:17.671 15:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:17.671 15:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:17.671 15:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:17.931 15:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:N2QyYjhjM2NkOWRlMjIwMWFhYjE2YTI4YjVjYTQ1MmIwNGE0ZTNjMzY2ZjA3ZDgwN2JlYWIxODVlZjAzNWEyNwauYnc=: 
00:18:18.500 15:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:18.761 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:18.761 15:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:18.761 15:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.761 15:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.761 15:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.761 15:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:18.761 15:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:18.761 15:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:18.761 15:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:18.761 15:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:18:18.761 15:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:18.761 15:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:18.761 15:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:18.761 15:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key0 00:18:18.761 15:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:18.761 15:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:18.761 15:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.761 15:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.761 15:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.761 15:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:18.761 15:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:19.021 00:18:19.021 15:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:19.021 15:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:19.021 15:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:19.281 15:14:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:19.281 15:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:19.281 15:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.281 15:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.281 15:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.281 15:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:19.281 { 00:18:19.281 "cntlid": 73, 00:18:19.281 "qid": 0, 00:18:19.281 "state": "enabled", 00:18:19.281 "thread": "nvmf_tgt_poll_group_000", 00:18:19.281 "listen_address": { 00:18:19.281 "trtype": "TCP", 00:18:19.281 "adrfam": "IPv4", 00:18:19.281 "traddr": "10.0.0.2", 00:18:19.281 "trsvcid": "4420" 00:18:19.281 }, 00:18:19.281 "peer_address": { 00:18:19.281 "trtype": "TCP", 00:18:19.281 "adrfam": "IPv4", 00:18:19.281 "traddr": "10.0.0.1", 00:18:19.281 "trsvcid": "52742" 00:18:19.281 }, 00:18:19.281 "auth": { 00:18:19.281 "state": "completed", 00:18:19.281 "digest": "sha384", 00:18:19.281 "dhgroup": "ffdhe4096" 00:18:19.281 } 00:18:19.281 } 00:18:19.281 ]' 00:18:19.281 15:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:19.281 15:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:19.281 15:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:19.281 15:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:19.281 15:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:19.281 15:14:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:19.281 15:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:19.281 15:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:19.542 15:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:MTlhYmQ5MjI4OWIxMzFkZTVmNzQ5ZWZlYjYzNTIzOTViZGRmNWNjYTVmMzAyY2MwDN7QAA==: --dhchap-ctrl-secret DHHC-1:03:OTNlN2ZmNGIzNGRmOTIzNzc1ZGVjZTFiOTZjMzE1NmQ4YWM4YzQxOWVjNzM4ZTU2MzVhZGIwMmNjMTZlMmUxMSm6T2Y=: 00:18:20.481 15:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:20.481 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:20.481 15:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:20.481 15:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.481 15:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.481 15:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.481 15:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:20.481 15:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe4096 00:18:20.481 15:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:20.481 15:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:18:20.481 15:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:20.481 15:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:20.481 15:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:20.481 15:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:20.481 15:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:20.481 15:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:20.481 15:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.481 15:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.481 15:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.482 15:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:20.482 15:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:20.742 00:18:20.742 15:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:20.742 15:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:20.742 15:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:21.002 15:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:21.002 15:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:21.002 15:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.002 15:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.002 15:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.002 15:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:21.002 { 00:18:21.002 "cntlid": 75, 00:18:21.002 "qid": 0, 00:18:21.002 "state": "enabled", 00:18:21.002 "thread": "nvmf_tgt_poll_group_000", 00:18:21.002 "listen_address": { 00:18:21.002 "trtype": "TCP", 00:18:21.002 "adrfam": "IPv4", 00:18:21.002 "traddr": "10.0.0.2", 00:18:21.002 "trsvcid": "4420" 00:18:21.002 }, 00:18:21.002 "peer_address": { 00:18:21.002 "trtype": "TCP", 00:18:21.002 "adrfam": "IPv4", 00:18:21.002 "traddr": "10.0.0.1", 00:18:21.002 "trsvcid": "52770" 00:18:21.002 
}, 00:18:21.002 "auth": { 00:18:21.002 "state": "completed", 00:18:21.002 "digest": "sha384", 00:18:21.002 "dhgroup": "ffdhe4096" 00:18:21.002 } 00:18:21.002 } 00:18:21.002 ]' 00:18:21.002 15:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:21.002 15:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:21.002 15:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:21.002 15:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:21.002 15:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:21.002 15:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:21.002 15:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:21.002 15:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:21.263 15:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:ZDJlZDk0ZWQxOWYxYjMzNzdkODhjZDAxY2E5YWM0MmNmF230: --dhchap-ctrl-secret DHHC-1:02:ZGJhNTFkNjMzOTE4ZWRhNTk4MTUxODE4NmRmOGE4NjNiNWZhMWQ0YmZkZmU0ODI24u6JVg==: 00:18:22.204 15:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:22.204 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:22.204 15:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:22.204 15:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.204 15:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.204 15:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.204 15:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:22.204 15:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:22.204 15:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:22.204 15:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:18:22.204 15:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:22.204 15:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:22.204 15:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:22.204 15:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:22.204 15:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:22.204 15:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key 
key2 --dhchap-ctrlr-key ckey2 00:18:22.204 15:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.204 15:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.204 15:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.204 15:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:22.204 15:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:22.465 00:18:22.465 15:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:22.465 15:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:22.465 15:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:22.725 15:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:22.725 15:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:22.725 15:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.725 15:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:18:22.725 15:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.725 15:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:22.725 { 00:18:22.725 "cntlid": 77, 00:18:22.725 "qid": 0, 00:18:22.725 "state": "enabled", 00:18:22.725 "thread": "nvmf_tgt_poll_group_000", 00:18:22.725 "listen_address": { 00:18:22.725 "trtype": "TCP", 00:18:22.725 "adrfam": "IPv4", 00:18:22.725 "traddr": "10.0.0.2", 00:18:22.725 "trsvcid": "4420" 00:18:22.725 }, 00:18:22.725 "peer_address": { 00:18:22.725 "trtype": "TCP", 00:18:22.725 "adrfam": "IPv4", 00:18:22.725 "traddr": "10.0.0.1", 00:18:22.725 "trsvcid": "54192" 00:18:22.725 }, 00:18:22.725 "auth": { 00:18:22.725 "state": "completed", 00:18:22.725 "digest": "sha384", 00:18:22.725 "dhgroup": "ffdhe4096" 00:18:22.725 } 00:18:22.725 } 00:18:22.725 ]' 00:18:22.725 15:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:22.725 15:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:22.725 15:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:22.725 15:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:22.725 15:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:22.725 15:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:22.725 15:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:22.725 15:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:18:23.022 15:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:NzVmMjU2NWM1OWIzYWNmMjEyZWE2NDg5NjdjMmIyY2U4MmMwNTJkMzFkYzMwOWFkdr2ubA==: --dhchap-ctrl-secret DHHC-1:01:YzRjYzkyODM5NDUxMDMzZTY0MTUwNmJhYjRjN2EyODYd1nJh: 00:18:23.593 15:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:23.593 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:23.593 15:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:23.593 15:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.593 15:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.593 15:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.593 15:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:23.593 15:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:23.593 15:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:23.854 15:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:18:23.854 15:14:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:23.854 15:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:23.854 15:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:23.854 15:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:23.854 15:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:23.854 15:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:23.854 15:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.854 15:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.854 15:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.854 15:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:23.854 15:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:24.114 00:18:24.114 15:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:24.114 15:14:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:24.114 15:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:24.374 15:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:24.374 15:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:24.374 15:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.374 15:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.374 15:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.374 15:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:24.374 { 00:18:24.374 "cntlid": 79, 00:18:24.374 "qid": 0, 00:18:24.374 "state": "enabled", 00:18:24.374 "thread": "nvmf_tgt_poll_group_000", 00:18:24.374 "listen_address": { 00:18:24.374 "trtype": "TCP", 00:18:24.374 "adrfam": "IPv4", 00:18:24.374 "traddr": "10.0.0.2", 00:18:24.374 "trsvcid": "4420" 00:18:24.374 }, 00:18:24.374 "peer_address": { 00:18:24.374 "trtype": "TCP", 00:18:24.374 "adrfam": "IPv4", 00:18:24.374 "traddr": "10.0.0.1", 00:18:24.374 "trsvcid": "54228" 00:18:24.374 }, 00:18:24.374 "auth": { 00:18:24.374 "state": "completed", 00:18:24.374 "digest": "sha384", 00:18:24.374 "dhgroup": "ffdhe4096" 00:18:24.374 } 00:18:24.374 } 00:18:24.374 ]' 00:18:24.374 15:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:24.374 15:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:24.374 15:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:24.374 15:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:24.374 15:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:24.374 15:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:24.374 15:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:24.374 15:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:24.634 15:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:N2QyYjhjM2NkOWRlMjIwMWFhYjE2YTI4YjVjYTQ1MmIwNGE0ZTNjMzY2ZjA3ZDgwN2JlYWIxODVlZjAzNWEyNwauYnc=: 00:18:25.575 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:25.575 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:25.575 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:25.575 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.575 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.575 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.575 15:14:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:25.575 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:25.575 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:25.575 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:25.575 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:18:25.575 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:25.575 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:25.575 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:25.575 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:25.575 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:25.575 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:25.575 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.575 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.575 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.575 15:14:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:25.576 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:25.836 00:18:25.836 15:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:25.836 15:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:25.836 15:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:26.098 15:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:26.098 15:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:26.098 15:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.098 15:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.098 15:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.098 15:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:26.098 { 00:18:26.098 "cntlid": 81, 00:18:26.098 "qid": 0, 00:18:26.098 "state": "enabled", 00:18:26.098 "thread": 
"nvmf_tgt_poll_group_000", 00:18:26.098 "listen_address": { 00:18:26.098 "trtype": "TCP", 00:18:26.098 "adrfam": "IPv4", 00:18:26.098 "traddr": "10.0.0.2", 00:18:26.098 "trsvcid": "4420" 00:18:26.098 }, 00:18:26.098 "peer_address": { 00:18:26.098 "trtype": "TCP", 00:18:26.098 "adrfam": "IPv4", 00:18:26.098 "traddr": "10.0.0.1", 00:18:26.098 "trsvcid": "54262" 00:18:26.098 }, 00:18:26.098 "auth": { 00:18:26.098 "state": "completed", 00:18:26.098 "digest": "sha384", 00:18:26.098 "dhgroup": "ffdhe6144" 00:18:26.098 } 00:18:26.098 } 00:18:26.098 ]' 00:18:26.098 15:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:26.098 15:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:26.098 15:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:26.098 15:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:26.098 15:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:26.358 15:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:26.358 15:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:26.358 15:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:26.358 15:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret 
DHHC-1:00:MTlhYmQ5MjI4OWIxMzFkZTVmNzQ5ZWZlYjYzNTIzOTViZGRmNWNjYTVmMzAyY2MwDN7QAA==: --dhchap-ctrl-secret DHHC-1:03:OTNlN2ZmNGIzNGRmOTIzNzc1ZGVjZTFiOTZjMzE1NmQ4YWM4YzQxOWVjNzM4ZTU2MzVhZGIwMmNjMTZlMmUxMSm6T2Y=: 00:18:27.300 15:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:27.300 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:27.300 15:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:27.300 15:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.300 15:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.300 15:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.300 15:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:27.300 15:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:27.300 15:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:27.300 15:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:18:27.300 15:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:27.300 15:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:27.300 15:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # 
dhgroup=ffdhe6144 00:18:27.300 15:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:27.300 15:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:27.300 15:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:27.300 15:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.300 15:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.300 15:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.300 15:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:27.300 15:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:27.876 00:18:27.876 15:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:27.876 15:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:27.876 15:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:18:27.876 15:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:27.876 15:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:27.876 15:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.876 15:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.876 15:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.876 15:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:27.876 { 00:18:27.876 "cntlid": 83, 00:18:27.876 "qid": 0, 00:18:27.876 "state": "enabled", 00:18:27.876 "thread": "nvmf_tgt_poll_group_000", 00:18:27.876 "listen_address": { 00:18:27.876 "trtype": "TCP", 00:18:27.876 "adrfam": "IPv4", 00:18:27.876 "traddr": "10.0.0.2", 00:18:27.876 "trsvcid": "4420" 00:18:27.876 }, 00:18:27.876 "peer_address": { 00:18:27.876 "trtype": "TCP", 00:18:27.876 "adrfam": "IPv4", 00:18:27.876 "traddr": "10.0.0.1", 00:18:27.876 "trsvcid": "54290" 00:18:27.876 }, 00:18:27.876 "auth": { 00:18:27.876 "state": "completed", 00:18:27.876 "digest": "sha384", 00:18:27.876 "dhgroup": "ffdhe6144" 00:18:27.876 } 00:18:27.876 } 00:18:27.876 ]' 00:18:27.876 15:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:27.876 15:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:27.876 15:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:27.876 15:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:27.876 15:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:28.135 15:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:28.135 15:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:28.135 15:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:28.136 15:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:ZDJlZDk0ZWQxOWYxYjMzNzdkODhjZDAxY2E5YWM0MmNmF230: --dhchap-ctrl-secret DHHC-1:02:ZGJhNTFkNjMzOTE4ZWRhNTk4MTUxODE4NmRmOGE4NjNiNWZhMWQ0YmZkZmU0ODI24u6JVg==: 00:18:29.073 15:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:29.073 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:29.073 15:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:29.073 15:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.073 15:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.073 15:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.073 15:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:29.073 15:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:29.073 15:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:29.073 15:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:18:29.073 15:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:29.073 15:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:29.074 15:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:29.074 15:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:29.074 15:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:29.074 15:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:29.074 15:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.074 15:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.074 15:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.074 15:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:29.074 15:14:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:29.642 00:18:29.642 15:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:29.642 15:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:29.642 15:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:29.643 15:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:29.643 15:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:29.643 15:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.643 15:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.643 15:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.643 15:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:29.643 { 00:18:29.643 "cntlid": 85, 00:18:29.643 "qid": 0, 00:18:29.643 "state": "enabled", 00:18:29.643 "thread": "nvmf_tgt_poll_group_000", 00:18:29.643 "listen_address": { 00:18:29.643 "trtype": "TCP", 00:18:29.643 "adrfam": "IPv4", 00:18:29.643 "traddr": "10.0.0.2", 00:18:29.643 "trsvcid": "4420" 00:18:29.643 }, 00:18:29.643 "peer_address": { 00:18:29.643 "trtype": "TCP", 00:18:29.643 "adrfam": "IPv4", 00:18:29.643 "traddr": "10.0.0.1", 
00:18:29.643 "trsvcid": "54330"
00:18:29.643 },
00:18:29.643 "auth": {
00:18:29.643 "state": "completed",
00:18:29.643 "digest": "sha384",
00:18:29.643 "dhgroup": "ffdhe6144"
00:18:29.643 }
00:18:29.643 }
00:18:29.643 ]'
00:18:29.643 15:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:18:29.643 15:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:18:29.643 15:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:18:29.643 15:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:18:29.643 15:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:18:29.903 15:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:29.903 15:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:29.903 15:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:29.903 15:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:NzVmMjU2NWM1OWIzYWNmMjEyZWE2NDg5NjdjMmIyY2U4MmMwNTJkMzFkYzMwOWFkdr2ubA==: --dhchap-ctrl-secret DHHC-1:01:YzRjYzkyODM5NDUxMDMzZTY0MTUwNmJhYjRjN2EyODYd1nJh:
00:18:30.843 15:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:30.843 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:30.843 15:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:18:30.843 15:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:30.843 15:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:30.843 15:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:30.843 15:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:18:30.843 15:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:18:30.843 15:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:18:30.843 15:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3
00:18:30.843 15:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:18:30.843 15:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:18:30.843 15:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144
00:18:30.843 15:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:18:30.843 15:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:30.843 15:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3
00:18:30.843 15:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:30.843 15:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:30.843 15:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:30.843 15:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:18:30.843 15:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:18:31.412
00:18:31.412 15:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:18:31.412 15:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:18:31.412 15:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:31.412 15:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:31.412 15:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:31.412 15:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:31.412 15:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:31.412 15:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:18:31.412 {
00:18:31.412 "cntlid": 87,
00:18:31.412 "qid": 0,
00:18:31.412 "state": "enabled",
00:18:31.412 "thread": "nvmf_tgt_poll_group_000",
00:18:31.412 "listen_address": {
00:18:31.412 "trtype": "TCP",
00:18:31.412 "adrfam": "IPv4",
00:18:31.412 "traddr": "10.0.0.2",
00:18:31.412 "trsvcid": "4420"
00:18:31.412 },
00:18:31.412 "peer_address": {
00:18:31.412 "trtype": "TCP",
00:18:31.412 "adrfam": "IPv4",
00:18:31.412 "traddr": "10.0.0.1",
00:18:31.412 "trsvcid": "54366"
00:18:31.412 },
00:18:31.412 "auth": {
00:18:31.412 "state": "completed",
00:18:31.412 "digest": "sha384",
00:18:31.412 "dhgroup": "ffdhe6144"
00:18:31.412 }
00:18:31.412 }
00:18:31.412 ]'
00:18:31.412 15:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:18:31.412 15:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:18:31.412 15:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:18:31.412 15:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:18:31.412 15:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:18:31.671 15:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:31.671 15:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:31.671 15:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:31.671 15:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:N2QyYjhjM2NkOWRlMjIwMWFhYjE2YTI4YjVjYTQ1MmIwNGE0ZTNjMzY2ZjA3ZDgwN2JlYWIxODVlZjAzNWEyNwauYnc=:
00:18:32.611 15:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:32.612 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:32.612 15:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:18:32.612 15:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:32.612 15:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:32.612 15:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:32.612 15:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:18:32.612 15:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:18:32.612 15:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:18:32.612 15:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:18:32.612 15:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0
00:18:32.612 15:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:18:32.612 15:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:18:32.612 15:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192
00:18:32.612 15:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:18:32.612 15:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:32.612 15:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:32.612 15:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:32.612 15:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:32.612 15:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:32.612 15:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:32.612 15:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:33.181
00:18:33.181 15:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:18:33.181 15:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:18:33.181 15:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:33.442 15:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:33.442 15:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:33.442 15:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:33.442 15:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:33.442 15:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:33.442 15:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:18:33.442 {
00:18:33.442 "cntlid": 89,
00:18:33.442 "qid": 0,
00:18:33.442 "state": "enabled",
00:18:33.442 "thread": "nvmf_tgt_poll_group_000",
00:18:33.442 "listen_address": {
00:18:33.442 "trtype": "TCP",
00:18:33.442 "adrfam": "IPv4",
00:18:33.442 "traddr": "10.0.0.2",
00:18:33.442 "trsvcid": "4420"
00:18:33.442 },
00:18:33.442 "peer_address": {
00:18:33.442 "trtype": "TCP",
00:18:33.442 "adrfam": "IPv4",
00:18:33.442 "traddr": "10.0.0.1",
00:18:33.442 "trsvcid": "59416"
00:18:33.442 },
00:18:33.442 "auth": {
00:18:33.442 "state": "completed",
00:18:33.442 "digest": "sha384",
00:18:33.442 "dhgroup": "ffdhe8192"
00:18:33.442 }
00:18:33.442 }
00:18:33.442 ]'
00:18:33.442 15:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:18:33.442 15:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:18:33.442 15:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:18:33.442 15:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:18:33.442 15:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:18:33.442 15:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:33.442 15:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:33.442 15:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:33.702 15:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:MTlhYmQ5MjI4OWIxMzFkZTVmNzQ5ZWZlYjYzNTIzOTViZGRmNWNjYTVmMzAyY2MwDN7QAA==: --dhchap-ctrl-secret DHHC-1:03:OTNlN2ZmNGIzNGRmOTIzNzc1ZGVjZTFiOTZjMzE1NmQ4YWM4YzQxOWVjNzM4ZTU2MzVhZGIwMmNjMTZlMmUxMSm6T2Y=:
00:18:34.271 15:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:34.531 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:34.531 15:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:18:34.531 15:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:34.531 15:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:34.531 15:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:34.531 15:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:18:34.531 15:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:18:34.531 15:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:18:34.531 15:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1
00:18:34.532 15:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:18:34.532 15:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:18:34.532 15:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192
00:18:34.532 15:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:18:34.532 15:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:34.532 15:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:34.532 15:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:34.532 15:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:34.532 15:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:34.532 15:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:34.532 15:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:35.102
00:18:35.102 15:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:18:35.102 15:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:35.102 15:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:18:35.362 15:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:35.362 15:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:35.362 15:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:35.362 15:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:35.362 15:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:35.362 15:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:18:35.362 {
00:18:35.362 "cntlid": 91,
00:18:35.362 "qid": 0,
00:18:35.362 "state": "enabled",
00:18:35.362 "thread": "nvmf_tgt_poll_group_000",
00:18:35.362 "listen_address": {
00:18:35.362 "trtype": "TCP",
00:18:35.362 "adrfam": "IPv4",
00:18:35.362 "traddr": "10.0.0.2",
00:18:35.362 "trsvcid": "4420"
00:18:35.362 },
00:18:35.362 "peer_address": {
00:18:35.362 "trtype": "TCP",
00:18:35.362 "adrfam": "IPv4",
00:18:35.362 "traddr": "10.0.0.1",
00:18:35.362 "trsvcid": "59440"
00:18:35.362 },
00:18:35.362 "auth": {
00:18:35.362 "state": "completed",
00:18:35.362 "digest": "sha384",
00:18:35.363 "dhgroup": "ffdhe8192"
00:18:35.363 }
00:18:35.363 }
00:18:35.363 ]'
00:18:35.363 15:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:18:35.363 15:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:18:35.363 15:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:18:35.363 15:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:18:35.363 15:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:18:35.363 15:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:35.363 15:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:35.363 15:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:35.622 15:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:ZDJlZDk0ZWQxOWYxYjMzNzdkODhjZDAxY2E5YWM0MmNmF230: --dhchap-ctrl-secret DHHC-1:02:ZGJhNTFkNjMzOTE4ZWRhNTk4MTUxODE4NmRmOGE4NjNiNWZhMWQ0YmZkZmU0ODI24u6JVg==:
00:18:36.564 15:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:36.564 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:36.564 15:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:18:36.564 15:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:36.564 15:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:36.564 15:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:36.564 15:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:18:36.564 15:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:18:36.564 15:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:18:36.564 15:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2
00:18:36.564 15:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:18:36.564 15:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:18:36.564 15:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192
00:18:36.564 15:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:18:36.564 15:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:36.564 15:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:36.564 15:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:36.564 15:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:36.564 15:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:36.564 15:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:36.564 15:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:37.135
00:18:37.135 15:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:18:37.135 15:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:37.135 15:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:18:37.135 15:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:37.135 15:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:37.135 15:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:37.135 15:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:37.135 15:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:37.135 15:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:18:37.135 {
00:18:37.135 "cntlid": 93,
00:18:37.135 "qid": 0,
00:18:37.135 "state": "enabled",
00:18:37.135 "thread": "nvmf_tgt_poll_group_000",
00:18:37.135 "listen_address": {
00:18:37.135 "trtype": "TCP",
00:18:37.135 "adrfam": "IPv4",
00:18:37.135 "traddr": "10.0.0.2",
00:18:37.135 "trsvcid": "4420"
00:18:37.135 },
00:18:37.135 "peer_address": {
00:18:37.135 "trtype": "TCP",
00:18:37.135 "adrfam": "IPv4",
00:18:37.135 "traddr": "10.0.0.1",
00:18:37.135 "trsvcid": "59482"
00:18:37.135 },
00:18:37.135 "auth": {
00:18:37.135 "state": "completed",
00:18:37.135 "digest": "sha384",
00:18:37.135 "dhgroup": "ffdhe8192"
00:18:37.135 }
00:18:37.135 }
00:18:37.135 ]'
00:18:37.135 15:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:18:37.135 15:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:18:37.135 15:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:18:37.394 15:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:18:37.394 15:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:18:37.394 15:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:37.394 15:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:37.394 15:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:37.394 15:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:NzVmMjU2NWM1OWIzYWNmMjEyZWE2NDg5NjdjMmIyY2U4MmMwNTJkMzFkYzMwOWFkdr2ubA==: --dhchap-ctrl-secret DHHC-1:01:YzRjYzkyODM5NDUxMDMzZTY0MTUwNmJhYjRjN2EyODYd1nJh:
00:18:38.335 15:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:38.335 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:38.335 15:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:18:38.335 15:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:38.335 15:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:38.335 15:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:38.335 15:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:18:38.335 15:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:18:38.335 15:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:18:38.335 15:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3
00:18:38.335 15:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:18:38.335 15:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:18:38.335 15:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192
00:18:38.335 15:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:18:38.335 15:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:38.335 15:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3
00:18:38.335 15:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:38.335 15:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:38.335 15:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:38.335 15:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:18:38.335 15:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:18:38.905
00:18:38.905 15:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:18:38.905 15:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:18:38.905 15:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:39.166 15:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:39.166 15:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:39.166 15:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:39.166 15:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:39.166 15:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:39.166 15:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:18:39.166 {
00:18:39.166 "cntlid": 95,
00:18:39.166 "qid": 0,
00:18:39.166 "state": "enabled",
00:18:39.166 "thread": "nvmf_tgt_poll_group_000",
00:18:39.166 "listen_address": {
00:18:39.166 "trtype": "TCP",
00:18:39.166 "adrfam": "IPv4",
00:18:39.166 "traddr": "10.0.0.2",
00:18:39.166 "trsvcid": "4420"
00:18:39.166 },
00:18:39.166 "peer_address": {
00:18:39.166 "trtype": "TCP",
00:18:39.166 "adrfam": "IPv4",
00:18:39.166 "traddr": "10.0.0.1",
00:18:39.166 "trsvcid": "59524"
00:18:39.166 },
00:18:39.166 "auth": {
00:18:39.166 "state": "completed",
00:18:39.166 "digest": "sha384",
00:18:39.166 "dhgroup": "ffdhe8192"
00:18:39.166 }
00:18:39.166 }
00:18:39.166 ]'
00:18:39.166 15:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:18:39.166 15:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:18:39.166 15:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:18:39.166 15:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:18:39.166 15:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:18:39.166 15:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:39.166 15:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:39.166 15:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:39.426 15:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:N2QyYjhjM2NkOWRlMjIwMWFhYjE2YTI4YjVjYTQ1MmIwNGE0ZTNjMzY2ZjA3ZDgwN2JlYWIxODVlZjAzNWEyNwauYnc=:
00:18:40.366 15:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:40.366 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:40.366 15:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:18:40.366 15:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:40.366 15:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:40.366 15:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:40.366 15:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}"
00:18:40.366 15:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:18:40.366 15:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:18:40.366 15:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:18:40.366 15:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:18:40.366 15:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0
00:18:40.366 15:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:18:40.366 15:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:18:40.366 15:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null
00:18:40.366 15:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:18:40.366 15:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:40.366 15:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:40.366 15:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:40.366 15:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:40.366 15:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:40.366 15:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:40.367 15:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:40.627
00:18:40.627 15:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:18:40.627 15:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:18:40.627 15:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:40.888 15:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:40.888 15:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:40.888 15:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:40.888 15:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:40.888 15:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:40.888 15:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:18:40.888 {
00:18:40.888 "cntlid": 97,
00:18:40.888 "qid": 0,
00:18:40.888 "state": "enabled",
00:18:40.888 "thread": "nvmf_tgt_poll_group_000",
00:18:40.888 "listen_address": {
00:18:40.888 "trtype": "TCP",
00:18:40.888 "adrfam": "IPv4",
00:18:40.888 "traddr": "10.0.0.2",
00:18:40.888 "trsvcid": "4420"
00:18:40.888 },
00:18:40.888 "peer_address": {
00:18:40.888 "trtype": "TCP",
00:18:40.888 "adrfam": "IPv4",
00:18:40.888 "traddr": "10.0.0.1",
00:18:40.888 "trsvcid": "59556"
00:18:40.888 },
00:18:40.888 "auth": {
00:18:40.888 "state": "completed",
00:18:40.888 "digest": "sha512",
00:18:40.888 "dhgroup": "null"
00:18:40.888 }
00:18:40.888 }
00:18:40.888 ]'
00:18:40.888 15:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:18:40.888 15:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:18:40.888 15:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:18:40.888 15:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]]
00:18:40.888 15:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:18:40.888 15:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:40.888 15:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:40.888 15:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:41.168 15:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:MTlhYmQ5MjI4OWIxMzFkZTVmNzQ5ZWZlYjYzNTIzOTViZGRmNWNjYTVmMzAyY2MwDN7QAA==: --dhchap-ctrl-secret DHHC-1:03:OTNlN2ZmNGIzNGRmOTIzNzc1ZGVjZTFiOTZjMzE1NmQ4YWM4YzQxOWVjNzM4ZTU2MzVhZGIwMmNjMTZlMmUxMSm6T2Y=: 00:18:41.744 15:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:41.744 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:41.744 15:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:41.744 15:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.744 15:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.744 15:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.744 15:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:41.744 15:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:41.744 15:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups null 00:18:42.004 15:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:18:42.004 15:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:42.004 15:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:42.004 15:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:42.004 15:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:42.004 15:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:42.004 15:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:42.005 15:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.005 15:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.005 15:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.005 15:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:42.005 15:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:42.265 00:18:42.265 15:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:42.265 15:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:42.265 15:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:42.265 15:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:42.265 15:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:42.265 15:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.265 15:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.525 15:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.525 15:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:42.525 { 00:18:42.525 "cntlid": 99, 00:18:42.525 "qid": 0, 00:18:42.525 "state": "enabled", 00:18:42.525 "thread": "nvmf_tgt_poll_group_000", 00:18:42.525 "listen_address": { 00:18:42.525 "trtype": "TCP", 00:18:42.525 "adrfam": "IPv4", 00:18:42.525 "traddr": "10.0.0.2", 00:18:42.525 "trsvcid": "4420" 00:18:42.525 }, 00:18:42.525 "peer_address": { 00:18:42.525 "trtype": "TCP", 00:18:42.525 "adrfam": "IPv4", 00:18:42.525 "traddr": "10.0.0.1", 00:18:42.525 "trsvcid": "45602" 00:18:42.525 }, 00:18:42.525 "auth": { 00:18:42.525 "state": "completed", 00:18:42.525 "digest": "sha512", 00:18:42.525 "dhgroup": "null" 00:18:42.525 } 00:18:42.525 } 00:18:42.525 ]' 00:18:42.525 
15:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:42.525 15:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:42.525 15:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:42.526 15:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:42.526 15:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:42.526 15:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:42.526 15:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:42.526 15:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:42.786 15:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:ZDJlZDk0ZWQxOWYxYjMzNzdkODhjZDAxY2E5YWM0MmNmF230: --dhchap-ctrl-secret DHHC-1:02:ZGJhNTFkNjMzOTE4ZWRhNTk4MTUxODE4NmRmOGE4NjNiNWZhMWQ0YmZkZmU0ODI24u6JVg==: 00:18:43.356 15:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:43.356 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:43.356 15:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:43.356 15:14:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.356 15:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.356 15:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.356 15:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:43.356 15:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:43.356 15:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:43.617 15:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:18:43.617 15:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:43.617 15:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:43.617 15:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:43.617 15:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:43.617 15:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:43.617 15:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:43.617 15:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.617 15:14:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.617 15:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.617 15:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:43.617 15:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:43.877 00:18:43.877 15:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:43.877 15:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:43.877 15:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:44.137 15:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:44.137 15:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:44.137 15:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.137 15:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.137 15:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:18:44.137 15:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:44.137 { 00:18:44.137 "cntlid": 101, 00:18:44.137 "qid": 0, 00:18:44.137 "state": "enabled", 00:18:44.137 "thread": "nvmf_tgt_poll_group_000", 00:18:44.137 "listen_address": { 00:18:44.137 "trtype": "TCP", 00:18:44.137 "adrfam": "IPv4", 00:18:44.137 "traddr": "10.0.0.2", 00:18:44.137 "trsvcid": "4420" 00:18:44.137 }, 00:18:44.137 "peer_address": { 00:18:44.137 "trtype": "TCP", 00:18:44.137 "adrfam": "IPv4", 00:18:44.137 "traddr": "10.0.0.1", 00:18:44.137 "trsvcid": "45618" 00:18:44.137 }, 00:18:44.137 "auth": { 00:18:44.137 "state": "completed", 00:18:44.137 "digest": "sha512", 00:18:44.137 "dhgroup": "null" 00:18:44.137 } 00:18:44.137 } 00:18:44.137 ]' 00:18:44.137 15:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:44.137 15:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:44.137 15:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:44.137 15:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:44.137 15:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:44.137 15:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:44.137 15:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:44.137 15:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:44.398 15:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:NzVmMjU2NWM1OWIzYWNmMjEyZWE2NDg5NjdjMmIyY2U4MmMwNTJkMzFkYzMwOWFkdr2ubA==: --dhchap-ctrl-secret DHHC-1:01:YzRjYzkyODM5NDUxMDMzZTY0MTUwNmJhYjRjN2EyODYd1nJh: 00:18:44.970 15:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:44.970 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:44.970 15:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:44.970 15:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.970 15:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.970 15:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.970 15:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:44.970 15:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:44.970 15:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:45.231 15:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:18:45.232 15:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:45.232 15:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:45.232 15:14:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:45.232 15:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:45.232 15:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:45.232 15:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:45.232 15:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.232 15:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.232 15:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.232 15:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:45.232 15:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:45.492 00:18:45.492 15:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:45.492 15:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:45.492 15:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@44 -- # jq -r '.[].name' 00:18:45.753 15:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:45.753 15:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:45.753 15:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.753 15:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.753 15:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.753 15:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:45.753 { 00:18:45.753 "cntlid": 103, 00:18:45.753 "qid": 0, 00:18:45.753 "state": "enabled", 00:18:45.753 "thread": "nvmf_tgt_poll_group_000", 00:18:45.753 "listen_address": { 00:18:45.753 "trtype": "TCP", 00:18:45.753 "adrfam": "IPv4", 00:18:45.753 "traddr": "10.0.0.2", 00:18:45.753 "trsvcid": "4420" 00:18:45.753 }, 00:18:45.753 "peer_address": { 00:18:45.753 "trtype": "TCP", 00:18:45.753 "adrfam": "IPv4", 00:18:45.753 "traddr": "10.0.0.1", 00:18:45.753 "trsvcid": "45648" 00:18:45.753 }, 00:18:45.753 "auth": { 00:18:45.753 "state": "completed", 00:18:45.753 "digest": "sha512", 00:18:45.753 "dhgroup": "null" 00:18:45.753 } 00:18:45.753 } 00:18:45.753 ]' 00:18:45.753 15:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:45.753 15:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:45.753 15:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:45.753 15:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:45.753 15:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r 
'.[0].auth.state' 00:18:45.753 15:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:45.753 15:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:45.753 15:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:46.014 15:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:N2QyYjhjM2NkOWRlMjIwMWFhYjE2YTI4YjVjYTQ1MmIwNGE0ZTNjMzY2ZjA3ZDgwN2JlYWIxODVlZjAzNWEyNwauYnc=: 00:18:46.586 15:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:46.586 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:46.586 15:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:46.586 15:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.586 15:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.586 15:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.586 15:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:46.586 15:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:46.586 15:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:46.586 15:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:46.847 15:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:18:46.847 15:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:46.847 15:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:46.847 15:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:46.847 15:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:46.847 15:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:46.847 15:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:46.847 15:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.847 15:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.847 15:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.847 15:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key 
ckey0 00:18:46.847 15:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:47.108 00:18:47.108 15:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:47.108 15:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:47.108 15:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:47.368 15:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:47.368 15:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:47.368 15:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.369 15:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.369 15:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.369 15:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:47.369 { 00:18:47.369 "cntlid": 105, 00:18:47.369 "qid": 0, 00:18:47.369 "state": "enabled", 00:18:47.369 "thread": "nvmf_tgt_poll_group_000", 00:18:47.369 "listen_address": { 00:18:47.369 "trtype": "TCP", 00:18:47.369 "adrfam": "IPv4", 00:18:47.369 "traddr": "10.0.0.2", 00:18:47.369 "trsvcid": "4420" 00:18:47.369 }, 00:18:47.369 "peer_address": { 00:18:47.369 "trtype": "TCP", 00:18:47.369 "adrfam": "IPv4", 
00:18:47.369 "traddr": "10.0.0.1", 00:18:47.369 "trsvcid": "45676" 00:18:47.369 }, 00:18:47.369 "auth": { 00:18:47.369 "state": "completed", 00:18:47.369 "digest": "sha512", 00:18:47.369 "dhgroup": "ffdhe2048" 00:18:47.369 } 00:18:47.369 } 00:18:47.369 ]' 00:18:47.369 15:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:47.369 15:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:47.369 15:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:47.369 15:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:47.369 15:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:47.369 15:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:47.369 15:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:47.369 15:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:47.629 15:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:MTlhYmQ5MjI4OWIxMzFkZTVmNzQ5ZWZlYjYzNTIzOTViZGRmNWNjYTVmMzAyY2MwDN7QAA==: --dhchap-ctrl-secret DHHC-1:03:OTNlN2ZmNGIzNGRmOTIzNzc1ZGVjZTFiOTZjMzE1NmQ4YWM4YzQxOWVjNzM4ZTU2MzVhZGIwMmNjMTZlMmUxMSm6T2Y=: 00:18:48.201 15:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:48.462 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:48.462 15:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:48.462 15:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.462 15:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.462 15:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.462 15:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:48.462 15:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:48.462 15:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:48.462 15:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:18:48.462 15:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:48.462 15:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:48.462 15:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:48.462 15:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:48.462 15:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:48.462 15:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:48.462 15:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.462 15:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.462 15:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.462 15:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:48.462 15:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:48.723 00:18:48.723 15:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:48.723 15:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:48.723 15:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:48.985 15:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:48.985 15:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:48.985 15:14:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.985 15:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.985 15:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.985 15:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:48.985 { 00:18:48.985 "cntlid": 107, 00:18:48.985 "qid": 0, 00:18:48.985 "state": "enabled", 00:18:48.985 "thread": "nvmf_tgt_poll_group_000", 00:18:48.985 "listen_address": { 00:18:48.985 "trtype": "TCP", 00:18:48.985 "adrfam": "IPv4", 00:18:48.985 "traddr": "10.0.0.2", 00:18:48.985 "trsvcid": "4420" 00:18:48.985 }, 00:18:48.985 "peer_address": { 00:18:48.985 "trtype": "TCP", 00:18:48.985 "adrfam": "IPv4", 00:18:48.985 "traddr": "10.0.0.1", 00:18:48.985 "trsvcid": "45708" 00:18:48.985 }, 00:18:48.985 "auth": { 00:18:48.985 "state": "completed", 00:18:48.985 "digest": "sha512", 00:18:48.985 "dhgroup": "ffdhe2048" 00:18:48.985 } 00:18:48.985 } 00:18:48.985 ]' 00:18:48.985 15:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:48.985 15:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:48.985 15:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:48.985 15:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:48.985 15:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:48.985 15:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:48.985 15:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:48.985 15:14:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:49.303 15:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:ZDJlZDk0ZWQxOWYxYjMzNzdkODhjZDAxY2E5YWM0MmNmF230: --dhchap-ctrl-secret DHHC-1:02:ZGJhNTFkNjMzOTE4ZWRhNTk4MTUxODE4NmRmOGE4NjNiNWZhMWQ0YmZkZmU0ODI24u6JVg==: 00:18:49.876 15:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:49.876 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:49.876 15:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:49.876 15:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.876 15:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.876 15:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.876 15:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:49.876 15:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:49.876 15:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:50.137 15:14:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:18:50.137 15:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:50.137 15:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:50.137 15:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:50.137 15:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:50.137 15:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:50.137 15:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:50.137 15:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.137 15:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.137 15:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.137 15:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:50.137 15:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:18:50.398 00:18:50.398 15:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:50.398 15:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:50.398 15:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:50.659 15:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:50.659 15:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:50.659 15:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.659 15:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.659 15:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.659 15:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:50.659 { 00:18:50.659 "cntlid": 109, 00:18:50.659 "qid": 0, 00:18:50.659 "state": "enabled", 00:18:50.659 "thread": "nvmf_tgt_poll_group_000", 00:18:50.659 "listen_address": { 00:18:50.659 "trtype": "TCP", 00:18:50.659 "adrfam": "IPv4", 00:18:50.659 "traddr": "10.0.0.2", 00:18:50.659 "trsvcid": "4420" 00:18:50.659 }, 00:18:50.659 "peer_address": { 00:18:50.659 "trtype": "TCP", 00:18:50.659 "adrfam": "IPv4", 00:18:50.659 "traddr": "10.0.0.1", 00:18:50.659 "trsvcid": "45736" 00:18:50.659 }, 00:18:50.659 "auth": { 00:18:50.659 "state": "completed", 00:18:50.659 "digest": "sha512", 00:18:50.659 "dhgroup": "ffdhe2048" 00:18:50.659 } 00:18:50.659 } 00:18:50.659 ]' 00:18:50.659 15:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:50.659 
15:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:50.659 15:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:50.659 15:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:50.659 15:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:50.659 15:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:50.659 15:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:50.659 15:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:50.920 15:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:NzVmMjU2NWM1OWIzYWNmMjEyZWE2NDg5NjdjMmIyY2U4MmMwNTJkMzFkYzMwOWFkdr2ubA==: --dhchap-ctrl-secret DHHC-1:01:YzRjYzkyODM5NDUxMDMzZTY0MTUwNmJhYjRjN2EyODYd1nJh: 00:18:51.491 15:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:51.752 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:51.752 15:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:51.752 15:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.752 15:14:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.752 15:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.752 15:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:51.752 15:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:51.752 15:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:51.752 15:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:18:51.752 15:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:51.752 15:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:51.752 15:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:51.752 15:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:51.752 15:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:51.752 15:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:51.752 15:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.752 15:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.752 15:14:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.752 15:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:51.752 15:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:52.012 00:18:52.013 15:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:52.013 15:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:52.013 15:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:52.273 15:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:52.273 15:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:52.273 15:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.273 15:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.273 15:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.273 15:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:52.273 { 00:18:52.273 "cntlid": 111, 00:18:52.273 "qid": 0, 
00:18:52.273 "state": "enabled", 00:18:52.273 "thread": "nvmf_tgt_poll_group_000", 00:18:52.273 "listen_address": { 00:18:52.273 "trtype": "TCP", 00:18:52.273 "adrfam": "IPv4", 00:18:52.273 "traddr": "10.0.0.2", 00:18:52.273 "trsvcid": "4420" 00:18:52.273 }, 00:18:52.273 "peer_address": { 00:18:52.273 "trtype": "TCP", 00:18:52.273 "adrfam": "IPv4", 00:18:52.273 "traddr": "10.0.0.1", 00:18:52.273 "trsvcid": "60578" 00:18:52.273 }, 00:18:52.273 "auth": { 00:18:52.273 "state": "completed", 00:18:52.273 "digest": "sha512", 00:18:52.273 "dhgroup": "ffdhe2048" 00:18:52.273 } 00:18:52.273 } 00:18:52.273 ]' 00:18:52.273 15:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:52.273 15:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:52.273 15:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:52.273 15:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:52.273 15:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:52.273 15:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:52.273 15:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:52.273 15:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:52.534 15:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret 
DHHC-1:03:N2QyYjhjM2NkOWRlMjIwMWFhYjE2YTI4YjVjYTQ1MmIwNGE0ZTNjMzY2ZjA3ZDgwN2JlYWIxODVlZjAzNWEyNwauYnc=: 00:18:53.477 15:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:53.477 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:53.477 15:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:53.477 15:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.477 15:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.477 15:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.477 15:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:53.477 15:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:53.477 15:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:53.477 15:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:53.477 15:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:18:53.477 15:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:53.477 15:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:53.477 15:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:53.477 15:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:53.477 15:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:53.477 15:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:53.477 15:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.477 15:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.477 15:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.477 15:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:53.477 15:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:53.738 00:18:53.738 15:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:53.738 15:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:53.738 15:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:53.738 15:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:53.999 15:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:53.999 15:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.999 15:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.999 15:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.999 15:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:53.999 { 00:18:53.999 "cntlid": 113, 00:18:53.999 "qid": 0, 00:18:53.999 "state": "enabled", 00:18:53.999 "thread": "nvmf_tgt_poll_group_000", 00:18:53.999 "listen_address": { 00:18:53.999 "trtype": "TCP", 00:18:53.999 "adrfam": "IPv4", 00:18:53.999 "traddr": "10.0.0.2", 00:18:53.999 "trsvcid": "4420" 00:18:53.999 }, 00:18:53.999 "peer_address": { 00:18:53.999 "trtype": "TCP", 00:18:53.999 "adrfam": "IPv4", 00:18:53.999 "traddr": "10.0.0.1", 00:18:53.999 "trsvcid": "60602" 00:18:53.999 }, 00:18:53.999 "auth": { 00:18:53.999 "state": "completed", 00:18:53.999 "digest": "sha512", 00:18:53.999 "dhgroup": "ffdhe3072" 00:18:53.999 } 00:18:53.999 } 00:18:53.999 ]' 00:18:53.999 15:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:53.999 15:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:53.999 15:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:53.999 15:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 
00:18:53.999 15:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:54.000 15:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:54.000 15:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:54.000 15:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:54.261 15:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:MTlhYmQ5MjI4OWIxMzFkZTVmNzQ5ZWZlYjYzNTIzOTViZGRmNWNjYTVmMzAyY2MwDN7QAA==: --dhchap-ctrl-secret DHHC-1:03:OTNlN2ZmNGIzNGRmOTIzNzc1ZGVjZTFiOTZjMzE1NmQ4YWM4YzQxOWVjNzM4ZTU2MzVhZGIwMmNjMTZlMmUxMSm6T2Y=: 00:18:54.834 15:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:54.834 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:54.834 15:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:54.834 15:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.834 15:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.834 15:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.834 15:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 
00:18:54.834 15:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:54.834 15:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:55.095 15:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:18:55.095 15:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:55.095 15:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:55.095 15:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:55.095 15:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:55.095 15:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:55.096 15:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:55.096 15:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.096 15:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.096 15:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.096 15:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 
-n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:55.096 15:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:55.356 00:18:55.356 15:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:55.356 15:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:55.356 15:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:55.618 15:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:55.618 15:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:55.618 15:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.618 15:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.618 15:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.618 15:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:55.618 { 00:18:55.618 "cntlid": 115, 00:18:55.618 "qid": 0, 00:18:55.618 "state": "enabled", 00:18:55.618 "thread": "nvmf_tgt_poll_group_000", 00:18:55.618 "listen_address": { 00:18:55.618 "trtype": "TCP", 00:18:55.618 "adrfam": "IPv4", 00:18:55.618 "traddr": "10.0.0.2", 00:18:55.618 "trsvcid": "4420" 00:18:55.618 }, 00:18:55.618 "peer_address": { 
00:18:55.618 "trtype": "TCP", 00:18:55.618 "adrfam": "IPv4", 00:18:55.618 "traddr": "10.0.0.1", 00:18:55.618 "trsvcid": "60632" 00:18:55.618 }, 00:18:55.618 "auth": { 00:18:55.618 "state": "completed", 00:18:55.618 "digest": "sha512", 00:18:55.618 "dhgroup": "ffdhe3072" 00:18:55.618 } 00:18:55.618 } 00:18:55.618 ]' 00:18:55.618 15:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:55.618 15:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:55.618 15:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:55.618 15:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:55.618 15:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:55.618 15:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:55.618 15:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:55.618 15:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:55.879 15:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:ZDJlZDk0ZWQxOWYxYjMzNzdkODhjZDAxY2E5YWM0MmNmF230: --dhchap-ctrl-secret DHHC-1:02:ZGJhNTFkNjMzOTE4ZWRhNTk4MTUxODE4NmRmOGE4NjNiNWZhMWQ0YmZkZmU0ODI24u6JVg==: 00:18:56.822 15:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 
00:18:56.822 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:56.822 15:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:56.822 15:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.822 15:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.822 15:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.822 15:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:56.822 15:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:56.822 15:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:56.822 15:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:18:56.822 15:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:56.822 15:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:56.822 15:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:56.822 15:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:56.822 15:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:56.822 15:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # 
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:56.822 15:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.822 15:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.822 15:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.822 15:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:56.822 15:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:57.083 00:18:57.083 15:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:57.083 15:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:57.083 15:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:57.083 15:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:57.083 15:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:57.083 15:14:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.083 15:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.343 15:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.343 15:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:57.343 { 00:18:57.343 "cntlid": 117, 00:18:57.343 "qid": 0, 00:18:57.343 "state": "enabled", 00:18:57.343 "thread": "nvmf_tgt_poll_group_000", 00:18:57.343 "listen_address": { 00:18:57.343 "trtype": "TCP", 00:18:57.343 "adrfam": "IPv4", 00:18:57.343 "traddr": "10.0.0.2", 00:18:57.343 "trsvcid": "4420" 00:18:57.343 }, 00:18:57.343 "peer_address": { 00:18:57.343 "trtype": "TCP", 00:18:57.343 "adrfam": "IPv4", 00:18:57.343 "traddr": "10.0.0.1", 00:18:57.343 "trsvcid": "60668" 00:18:57.343 }, 00:18:57.343 "auth": { 00:18:57.343 "state": "completed", 00:18:57.343 "digest": "sha512", 00:18:57.343 "dhgroup": "ffdhe3072" 00:18:57.343 } 00:18:57.343 } 00:18:57.343 ]' 00:18:57.343 15:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:57.343 15:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:57.343 15:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:57.343 15:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:57.343 15:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:57.343 15:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:57.343 15:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:57.343 15:14:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:57.603 15:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:NzVmMjU2NWM1OWIzYWNmMjEyZWE2NDg5NjdjMmIyY2U4MmMwNTJkMzFkYzMwOWFkdr2ubA==: --dhchap-ctrl-secret DHHC-1:01:YzRjYzkyODM5NDUxMDMzZTY0MTUwNmJhYjRjN2EyODYd1nJh: 00:18:58.229 15:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:58.229 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:58.229 15:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:58.229 15:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.229 15:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.229 15:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.229 15:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:58.229 15:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:58.229 15:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:58.491 15:14:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:18:58.491 15:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:58.491 15:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:58.491 15:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:58.491 15:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:58.491 15:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:58.491 15:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:58.491 15:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.491 15:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.491 15:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.491 15:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:58.491 15:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:58.752 00:18:58.752 15:14:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:58.752 15:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:58.752 15:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:58.752 15:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:58.752 15:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:58.752 15:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.752 15:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.752 15:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.752 15:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:58.752 { 00:18:58.752 "cntlid": 119, 00:18:58.752 "qid": 0, 00:18:58.752 "state": "enabled", 00:18:58.752 "thread": "nvmf_tgt_poll_group_000", 00:18:58.752 "listen_address": { 00:18:58.752 "trtype": "TCP", 00:18:58.752 "adrfam": "IPv4", 00:18:58.752 "traddr": "10.0.0.2", 00:18:58.752 "trsvcid": "4420" 00:18:58.752 }, 00:18:58.752 "peer_address": { 00:18:58.752 "trtype": "TCP", 00:18:58.752 "adrfam": "IPv4", 00:18:58.752 "traddr": "10.0.0.1", 00:18:58.752 "trsvcid": "60706" 00:18:58.752 }, 00:18:58.752 "auth": { 00:18:58.752 "state": "completed", 00:18:58.752 "digest": "sha512", 00:18:58.752 "dhgroup": "ffdhe3072" 00:18:58.752 } 00:18:58.752 } 00:18:58.752 ]' 00:18:58.752 15:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:59.012 15:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:59.012 15:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:59.012 15:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:59.012 15:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:59.012 15:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:59.012 15:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:59.012 15:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:59.272 15:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:N2QyYjhjM2NkOWRlMjIwMWFhYjE2YTI4YjVjYTQ1MmIwNGE0ZTNjMzY2ZjA3ZDgwN2JlYWIxODVlZjAzNWEyNwauYnc=: 00:18:59.843 15:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:59.843 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:59.843 15:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:59.843 15:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.843 15:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.843 15:14:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.843 15:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:59.843 15:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:59.843 15:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:59.843 15:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:00.103 15:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:19:00.104 15:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:00.104 15:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:00.104 15:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:00.104 15:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:00.104 15:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:00.104 15:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:00.104 15:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.104 15:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.104 15:14:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.104 15:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:00.104 15:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:00.364 00:19:00.364 15:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:00.364 15:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:00.364 15:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:00.625 15:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:00.625 15:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:00.625 15:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.625 15:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.625 15:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.625 15:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:00.625 { 
00:19:00.625 "cntlid": 121, 00:19:00.625 "qid": 0, 00:19:00.625 "state": "enabled", 00:19:00.625 "thread": "nvmf_tgt_poll_group_000", 00:19:00.625 "listen_address": { 00:19:00.625 "trtype": "TCP", 00:19:00.625 "adrfam": "IPv4", 00:19:00.625 "traddr": "10.0.0.2", 00:19:00.625 "trsvcid": "4420" 00:19:00.625 }, 00:19:00.625 "peer_address": { 00:19:00.625 "trtype": "TCP", 00:19:00.625 "adrfam": "IPv4", 00:19:00.625 "traddr": "10.0.0.1", 00:19:00.625 "trsvcid": "60724" 00:19:00.625 }, 00:19:00.625 "auth": { 00:19:00.625 "state": "completed", 00:19:00.625 "digest": "sha512", 00:19:00.625 "dhgroup": "ffdhe4096" 00:19:00.625 } 00:19:00.625 } 00:19:00.625 ]' 00:19:00.625 15:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:00.625 15:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:00.625 15:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:00.625 15:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:00.625 15:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:00.625 15:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:00.625 15:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:00.625 15:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:00.887 15:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 
00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:MTlhYmQ5MjI4OWIxMzFkZTVmNzQ5ZWZlYjYzNTIzOTViZGRmNWNjYTVmMzAyY2MwDN7QAA==: --dhchap-ctrl-secret DHHC-1:03:OTNlN2ZmNGIzNGRmOTIzNzc1ZGVjZTFiOTZjMzE1NmQ4YWM4YzQxOWVjNzM4ZTU2MzVhZGIwMmNjMTZlMmUxMSm6T2Y=: 00:19:01.828 15:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:01.828 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:01.828 15:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:01.828 15:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.828 15:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.829 15:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.829 15:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:01.829 15:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:01.829 15:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:01.829 15:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:19:01.829 15:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:01.829 15:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:01.829 15:14:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:01.829 15:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:01.829 15:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:01.829 15:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:01.829 15:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.829 15:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.829 15:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.829 15:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:01.829 15:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:02.090 00:19:02.090 15:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:02.090 15:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:02.090 15:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:02.351 15:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:02.351 15:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:02.351 15:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.351 15:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.351 15:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.351 15:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:02.351 { 00:19:02.351 "cntlid": 123, 00:19:02.351 "qid": 0, 00:19:02.351 "state": "enabled", 00:19:02.351 "thread": "nvmf_tgt_poll_group_000", 00:19:02.351 "listen_address": { 00:19:02.351 "trtype": "TCP", 00:19:02.351 "adrfam": "IPv4", 00:19:02.351 "traddr": "10.0.0.2", 00:19:02.351 "trsvcid": "4420" 00:19:02.351 }, 00:19:02.351 "peer_address": { 00:19:02.351 "trtype": "TCP", 00:19:02.351 "adrfam": "IPv4", 00:19:02.351 "traddr": "10.0.0.1", 00:19:02.351 "trsvcid": "49288" 00:19:02.351 }, 00:19:02.351 "auth": { 00:19:02.351 "state": "completed", 00:19:02.351 "digest": "sha512", 00:19:02.351 "dhgroup": "ffdhe4096" 00:19:02.351 } 00:19:02.351 } 00:19:02.351 ]' 00:19:02.351 15:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:02.351 15:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:02.351 15:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:02.351 15:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 
00:19:02.351 15:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:02.351 15:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:02.351 15:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:02.352 15:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:02.634 15:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:ZDJlZDk0ZWQxOWYxYjMzNzdkODhjZDAxY2E5YWM0MmNmF230: --dhchap-ctrl-secret DHHC-1:02:ZGJhNTFkNjMzOTE4ZWRhNTk4MTUxODE4NmRmOGE4NjNiNWZhMWQ0YmZkZmU0ODI24u6JVg==: 00:19:03.208 15:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:03.208 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:03.208 15:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:03.208 15:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.208 15:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.208 15:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.208 15:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:03.208 15:14:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:03.208 15:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:03.469 15:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:19:03.469 15:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:03.469 15:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:03.469 15:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:03.469 15:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:03.469 15:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:03.469 15:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:03.469 15:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.469 15:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.469 15:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.469 15:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:03.469 15:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:03.731 00:19:03.731 15:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:03.731 15:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:03.731 15:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:03.993 15:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:03.993 15:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:03.993 15:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.993 15:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.993 15:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.993 15:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:03.993 { 00:19:03.993 "cntlid": 125, 00:19:03.993 "qid": 0, 00:19:03.993 "state": "enabled", 00:19:03.993 "thread": "nvmf_tgt_poll_group_000", 00:19:03.993 "listen_address": { 00:19:03.993 "trtype": "TCP", 00:19:03.993 "adrfam": "IPv4", 00:19:03.993 "traddr": "10.0.0.2", 00:19:03.993 "trsvcid": "4420" 00:19:03.993 }, 00:19:03.993 "peer_address": { 
00:19:03.993 "trtype": "TCP", 00:19:03.993 "adrfam": "IPv4", 00:19:03.993 "traddr": "10.0.0.1", 00:19:03.993 "trsvcid": "49310" 00:19:03.993 }, 00:19:03.993 "auth": { 00:19:03.993 "state": "completed", 00:19:03.993 "digest": "sha512", 00:19:03.993 "dhgroup": "ffdhe4096" 00:19:03.993 } 00:19:03.993 } 00:19:03.993 ]' 00:19:03.993 15:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:03.993 15:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:03.993 15:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:03.993 15:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:03.993 15:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:03.993 15:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:03.993 15:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:03.993 15:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:04.254 15:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:NzVmMjU2NWM1OWIzYWNmMjEyZWE2NDg5NjdjMmIyY2U4MmMwNTJkMzFkYzMwOWFkdr2ubA==: --dhchap-ctrl-secret DHHC-1:01:YzRjYzkyODM5NDUxMDMzZTY0MTUwNmJhYjRjN2EyODYd1nJh: 00:19:04.828 15:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 
00:19:04.828 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:04.828 15:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:04.828 15:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.828 15:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.828 15:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.828 15:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:04.828 15:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:04.828 15:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:05.089 15:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:19:05.089 15:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:05.089 15:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:05.089 15:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:05.089 15:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:05.089 15:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:05.089 15:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # 
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:05.089 15:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.089 15:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.089 15:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.089 15:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:05.089 15:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:05.349 00:19:05.349 15:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:05.350 15:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:05.350 15:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:05.610 15:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:05.610 15:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:05.610 15:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
00:19:05.610 15:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.610 15:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.610 15:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:05.610 { 00:19:05.610 "cntlid": 127, 00:19:05.610 "qid": 0, 00:19:05.610 "state": "enabled", 00:19:05.610 "thread": "nvmf_tgt_poll_group_000", 00:19:05.610 "listen_address": { 00:19:05.610 "trtype": "TCP", 00:19:05.610 "adrfam": "IPv4", 00:19:05.610 "traddr": "10.0.0.2", 00:19:05.610 "trsvcid": "4420" 00:19:05.610 }, 00:19:05.610 "peer_address": { 00:19:05.610 "trtype": "TCP", 00:19:05.610 "adrfam": "IPv4", 00:19:05.610 "traddr": "10.0.0.1", 00:19:05.610 "trsvcid": "49348" 00:19:05.610 }, 00:19:05.610 "auth": { 00:19:05.610 "state": "completed", 00:19:05.610 "digest": "sha512", 00:19:05.611 "dhgroup": "ffdhe4096" 00:19:05.611 } 00:19:05.611 } 00:19:05.611 ]' 00:19:05.611 15:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:05.611 15:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:05.611 15:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:05.611 15:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:05.611 15:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:05.611 15:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:05.611 15:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:05.611 15:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:05.872 15:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:N2QyYjhjM2NkOWRlMjIwMWFhYjE2YTI4YjVjYTQ1MmIwNGE0ZTNjMzY2ZjA3ZDgwN2JlYWIxODVlZjAzNWEyNwauYnc=: 00:19:06.814 15:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:06.814 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:06.814 15:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:06.814 15:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.814 15:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.814 15:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.814 15:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:06.814 15:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:06.814 15:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:06.814 15:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:06.814 15:14:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:19:06.814 15:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:06.814 15:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:06.814 15:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:06.814 15:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:06.814 15:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:06.814 15:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:06.814 15:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.814 15:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.814 15:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.814 15:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:06.814 15:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:19:07.074 00:19:07.074 15:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:07.074 15:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:07.074 15:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:07.334 15:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:07.334 15:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:07.334 15:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.334 15:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.334 15:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.334 15:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:07.334 { 00:19:07.334 "cntlid": 129, 00:19:07.334 "qid": 0, 00:19:07.334 "state": "enabled", 00:19:07.334 "thread": "nvmf_tgt_poll_group_000", 00:19:07.334 "listen_address": { 00:19:07.334 "trtype": "TCP", 00:19:07.334 "adrfam": "IPv4", 00:19:07.334 "traddr": "10.0.0.2", 00:19:07.334 "trsvcid": "4420" 00:19:07.334 }, 00:19:07.334 "peer_address": { 00:19:07.334 "trtype": "TCP", 00:19:07.334 "adrfam": "IPv4", 00:19:07.334 "traddr": "10.0.0.1", 00:19:07.334 "trsvcid": "49372" 00:19:07.334 }, 00:19:07.334 "auth": { 00:19:07.334 "state": "completed", 00:19:07.334 "digest": "sha512", 00:19:07.334 "dhgroup": "ffdhe6144" 00:19:07.334 } 00:19:07.334 } 00:19:07.334 ]' 00:19:07.334 15:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:07.334 
15:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:07.334 15:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:07.334 15:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:07.334 15:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:07.595 15:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:07.595 15:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:07.595 15:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:07.595 15:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:MTlhYmQ5MjI4OWIxMzFkZTVmNzQ5ZWZlYjYzNTIzOTViZGRmNWNjYTVmMzAyY2MwDN7QAA==: --dhchap-ctrl-secret DHHC-1:03:OTNlN2ZmNGIzNGRmOTIzNzc1ZGVjZTFiOTZjMzE1NmQ4YWM4YzQxOWVjNzM4ZTU2MzVhZGIwMmNjMTZlMmUxMSm6T2Y=: 00:19:08.538 15:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:08.538 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:08.538 15:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:08.538 15:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:19:08.538 15:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.538 15:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.538 15:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:08.538 15:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:08.538 15:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:08.538 15:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:19:08.538 15:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:08.538 15:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:08.538 15:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:08.538 15:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:08.538 15:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:08.538 15:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:08.538 15:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.538 15:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:19:08.538 15:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.538 15:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:08.538 15:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:09.112 00:19:09.112 15:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:09.112 15:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:09.112 15:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:09.112 15:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:09.112 15:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:09.112 15:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.112 15:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.112 15:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.112 15:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # 
qpairs='[ 00:19:09.112 { 00:19:09.112 "cntlid": 131, 00:19:09.112 "qid": 0, 00:19:09.112 "state": "enabled", 00:19:09.112 "thread": "nvmf_tgt_poll_group_000", 00:19:09.112 "listen_address": { 00:19:09.112 "trtype": "TCP", 00:19:09.112 "adrfam": "IPv4", 00:19:09.112 "traddr": "10.0.0.2", 00:19:09.112 "trsvcid": "4420" 00:19:09.112 }, 00:19:09.112 "peer_address": { 00:19:09.112 "trtype": "TCP", 00:19:09.112 "adrfam": "IPv4", 00:19:09.112 "traddr": "10.0.0.1", 00:19:09.112 "trsvcid": "49400" 00:19:09.112 }, 00:19:09.112 "auth": { 00:19:09.112 "state": "completed", 00:19:09.112 "digest": "sha512", 00:19:09.112 "dhgroup": "ffdhe6144" 00:19:09.112 } 00:19:09.112 } 00:19:09.112 ]' 00:19:09.112 15:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:09.112 15:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:09.112 15:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:09.112 15:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:09.112 15:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:09.373 15:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:09.373 15:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:09.373 15:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:09.373 15:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 
--hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:ZDJlZDk0ZWQxOWYxYjMzNzdkODhjZDAxY2E5YWM0MmNmF230: --dhchap-ctrl-secret DHHC-1:02:ZGJhNTFkNjMzOTE4ZWRhNTk4MTUxODE4NmRmOGE4NjNiNWZhMWQ0YmZkZmU0ODI24u6JVg==: 00:19:10.316 15:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:10.316 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:10.316 15:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:10.316 15:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.316 15:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.316 15:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.316 15:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:10.316 15:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:10.316 15:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:10.316 15:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:19:10.316 15:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:10.316 15:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:10.316 15:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:10.316 15:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:10.316 15:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:10.316 15:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:10.316 15:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.316 15:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.316 15:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.316 15:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:10.316 15:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:10.576 00:19:10.576 15:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:10.576 15:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:10.577 15:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:10.837 15:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:10.837 15:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:10.837 15:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.837 15:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.837 15:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.837 15:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:10.837 { 00:19:10.837 "cntlid": 133, 00:19:10.837 "qid": 0, 00:19:10.837 "state": "enabled", 00:19:10.837 "thread": "nvmf_tgt_poll_group_000", 00:19:10.837 "listen_address": { 00:19:10.837 "trtype": "TCP", 00:19:10.837 "adrfam": "IPv4", 00:19:10.837 "traddr": "10.0.0.2", 00:19:10.837 "trsvcid": "4420" 00:19:10.837 }, 00:19:10.837 "peer_address": { 00:19:10.837 "trtype": "TCP", 00:19:10.837 "adrfam": "IPv4", 00:19:10.837 "traddr": "10.0.0.1", 00:19:10.837 "trsvcid": "49426" 00:19:10.837 }, 00:19:10.837 "auth": { 00:19:10.837 "state": "completed", 00:19:10.837 "digest": "sha512", 00:19:10.837 "dhgroup": "ffdhe6144" 00:19:10.837 } 00:19:10.837 } 00:19:10.837 ]' 00:19:10.837 15:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:10.837 15:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:10.837 15:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:10.837 15:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 
00:19:10.837 15:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:11.098 15:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:11.098 15:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:11.098 15:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:11.098 15:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:NzVmMjU2NWM1OWIzYWNmMjEyZWE2NDg5NjdjMmIyY2U4MmMwNTJkMzFkYzMwOWFkdr2ubA==: --dhchap-ctrl-secret DHHC-1:01:YzRjYzkyODM5NDUxMDMzZTY0MTUwNmJhYjRjN2EyODYd1nJh: 00:19:12.049 15:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:12.049 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:12.049 15:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:12.049 15:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.049 15:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.049 15:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.049 15:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:12.049 15:15:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:12.049 15:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:12.049 15:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:19:12.049 15:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:12.049 15:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:12.049 15:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:12.049 15:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:12.049 15:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:12.049 15:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:12.049 15:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.049 15:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.049 15:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.049 15:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 
00:19:12.049 15:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:12.310 00:19:12.310 15:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:12.310 15:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:12.310 15:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:12.571 15:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:12.571 15:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:12.571 15:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.571 15:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.571 15:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.571 15:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:12.571 { 00:19:12.571 "cntlid": 135, 00:19:12.571 "qid": 0, 00:19:12.571 "state": "enabled", 00:19:12.571 "thread": "nvmf_tgt_poll_group_000", 00:19:12.571 "listen_address": { 00:19:12.571 "trtype": "TCP", 00:19:12.571 "adrfam": "IPv4", 00:19:12.571 "traddr": "10.0.0.2", 00:19:12.571 "trsvcid": "4420" 00:19:12.571 }, 00:19:12.571 "peer_address": { 00:19:12.571 "trtype": "TCP", 00:19:12.571 "adrfam": "IPv4", 00:19:12.571 "traddr": "10.0.0.1", 
00:19:12.571 "trsvcid": "34612" 00:19:12.571 }, 00:19:12.571 "auth": { 00:19:12.571 "state": "completed", 00:19:12.571 "digest": "sha512", 00:19:12.571 "dhgroup": "ffdhe6144" 00:19:12.571 } 00:19:12.571 } 00:19:12.571 ]' 00:19:12.571 15:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:12.572 15:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:12.572 15:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:12.572 15:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:12.572 15:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:12.832 15:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:12.832 15:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:12.832 15:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:12.832 15:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:N2QyYjhjM2NkOWRlMjIwMWFhYjE2YTI4YjVjYTQ1MmIwNGE0ZTNjMzY2ZjA3ZDgwN2JlYWIxODVlZjAzNWEyNwauYnc=: 00:19:13.775 15:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:13.775 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:13.775 15:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:13.775 15:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.775 15:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.775 15:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.775 15:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:13.775 15:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:13.775 15:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:13.775 15:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:13.775 15:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:19:13.775 15:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:13.775 15:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:13.775 15:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:13.775 15:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:13.775 15:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:13.775 15:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:13.775 15:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.775 15:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.775 15:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.775 15:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:13.775 15:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:14.346 00:19:14.346 15:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:14.346 15:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:14.346 15:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:14.607 15:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:14.607 15:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:14.607 15:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.607 15:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.607 15:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.607 15:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:14.607 { 00:19:14.607 "cntlid": 137, 00:19:14.607 "qid": 0, 00:19:14.607 "state": "enabled", 00:19:14.607 "thread": "nvmf_tgt_poll_group_000", 00:19:14.607 "listen_address": { 00:19:14.607 "trtype": "TCP", 00:19:14.607 "adrfam": "IPv4", 00:19:14.607 "traddr": "10.0.0.2", 00:19:14.607 "trsvcid": "4420" 00:19:14.607 }, 00:19:14.607 "peer_address": { 00:19:14.607 "trtype": "TCP", 00:19:14.607 "adrfam": "IPv4", 00:19:14.607 "traddr": "10.0.0.1", 00:19:14.607 "trsvcid": "34630" 00:19:14.607 }, 00:19:14.607 "auth": { 00:19:14.607 "state": "completed", 00:19:14.607 "digest": "sha512", 00:19:14.607 "dhgroup": "ffdhe8192" 00:19:14.607 } 00:19:14.607 } 00:19:14.607 ]' 00:19:14.607 15:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:14.607 15:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:14.607 15:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:14.607 15:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:14.607 15:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:14.607 15:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:14.607 15:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:14.607 15:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:14.868 15:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:MTlhYmQ5MjI4OWIxMzFkZTVmNzQ5ZWZlYjYzNTIzOTViZGRmNWNjYTVmMzAyY2MwDN7QAA==: --dhchap-ctrl-secret DHHC-1:03:OTNlN2ZmNGIzNGRmOTIzNzc1ZGVjZTFiOTZjMzE1NmQ4YWM4YzQxOWVjNzM4ZTU2MzVhZGIwMmNjMTZlMmUxMSm6T2Y=: 00:19:15.440 15:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:15.440 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:15.440 15:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:15.440 15:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.440 15:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.440 15:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.440 15:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:15.440 15:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:15.440 15:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:15.771 15:15:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:19:15.771 15:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:15.771 15:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:15.771 15:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:15.771 15:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:15.771 15:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:15.771 15:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:15.771 15:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.771 15:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.771 15:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.771 15:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:15.771 15:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:19:16.342 00:19:16.342 15:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:16.342 15:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:16.342 15:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:16.342 15:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:16.342 15:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:16.342 15:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.342 15:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.342 15:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.342 15:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:16.342 { 00:19:16.342 "cntlid": 139, 00:19:16.342 "qid": 0, 00:19:16.342 "state": "enabled", 00:19:16.342 "thread": "nvmf_tgt_poll_group_000", 00:19:16.342 "listen_address": { 00:19:16.342 "trtype": "TCP", 00:19:16.342 "adrfam": "IPv4", 00:19:16.342 "traddr": "10.0.0.2", 00:19:16.342 "trsvcid": "4420" 00:19:16.342 }, 00:19:16.342 "peer_address": { 00:19:16.342 "trtype": "TCP", 00:19:16.342 "adrfam": "IPv4", 00:19:16.342 "traddr": "10.0.0.1", 00:19:16.342 "trsvcid": "34642" 00:19:16.342 }, 00:19:16.342 "auth": { 00:19:16.342 "state": "completed", 00:19:16.342 "digest": "sha512", 00:19:16.342 "dhgroup": "ffdhe8192" 00:19:16.342 } 00:19:16.342 } 00:19:16.342 ]' 00:19:16.342 15:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:16.342 
15:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:16.342 15:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:16.342 15:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:16.342 15:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:16.603 15:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:16.603 15:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:16.603 15:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:16.603 15:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:ZDJlZDk0ZWQxOWYxYjMzNzdkODhjZDAxY2E5YWM0MmNmF230: --dhchap-ctrl-secret DHHC-1:02:ZGJhNTFkNjMzOTE4ZWRhNTk4MTUxODE4NmRmOGE4NjNiNWZhMWQ0YmZkZmU0ODI24u6JVg==: 00:19:17.545 15:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:17.545 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:17.545 15:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:17.545 15:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.545 15:15:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.545 15:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.545 15:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:17.545 15:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:17.545 15:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:17.545 15:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:19:17.545 15:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:17.545 15:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:17.545 15:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:17.545 15:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:17.545 15:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:17.545 15:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:17.545 15:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.545 15:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.545 15:15:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.545 15:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:17.545 15:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:18.116 00:19:18.116 15:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:18.116 15:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:18.116 15:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:18.377 15:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:18.377 15:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:18.377 15:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.377 15:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.377 15:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.377 15:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:18.377 { 
00:19:18.377 "cntlid": 141, 00:19:18.377 "qid": 0, 00:19:18.377 "state": "enabled", 00:19:18.377 "thread": "nvmf_tgt_poll_group_000", 00:19:18.377 "listen_address": { 00:19:18.377 "trtype": "TCP", 00:19:18.377 "adrfam": "IPv4", 00:19:18.377 "traddr": "10.0.0.2", 00:19:18.377 "trsvcid": "4420" 00:19:18.377 }, 00:19:18.377 "peer_address": { 00:19:18.377 "trtype": "TCP", 00:19:18.377 "adrfam": "IPv4", 00:19:18.377 "traddr": "10.0.0.1", 00:19:18.377 "trsvcid": "34654" 00:19:18.377 }, 00:19:18.377 "auth": { 00:19:18.377 "state": "completed", 00:19:18.377 "digest": "sha512", 00:19:18.377 "dhgroup": "ffdhe8192" 00:19:18.377 } 00:19:18.377 } 00:19:18.377 ]' 00:19:18.377 15:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:18.377 15:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:18.377 15:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:18.377 15:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:18.377 15:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:18.377 15:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:18.377 15:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:18.377 15:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:18.638 15:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 
00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:NzVmMjU2NWM1OWIzYWNmMjEyZWE2NDg5NjdjMmIyY2U4MmMwNTJkMzFkYzMwOWFkdr2ubA==: --dhchap-ctrl-secret DHHC-1:01:YzRjYzkyODM5NDUxMDMzZTY0MTUwNmJhYjRjN2EyODYd1nJh: 00:19:19.208 15:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:19.208 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:19.208 15:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:19.208 15:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.208 15:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.208 15:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.208 15:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:19.208 15:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:19.208 15:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:19.469 15:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:19:19.469 15:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:19.469 15:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:19.469 15:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 
-- # dhgroup=ffdhe8192 00:19:19.469 15:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:19.469 15:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:19.469 15:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:19.469 15:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.469 15:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.469 15:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.469 15:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:19.469 15:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:20.040 00:19:20.040 15:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:20.040 15:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:20.040 15:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:20.040 15:15:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:20.040 15:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:20.040 15:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.040 15:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.301 15:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.301 15:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:20.301 { 00:19:20.301 "cntlid": 143, 00:19:20.301 "qid": 0, 00:19:20.301 "state": "enabled", 00:19:20.301 "thread": "nvmf_tgt_poll_group_000", 00:19:20.301 "listen_address": { 00:19:20.301 "trtype": "TCP", 00:19:20.301 "adrfam": "IPv4", 00:19:20.301 "traddr": "10.0.0.2", 00:19:20.301 "trsvcid": "4420" 00:19:20.301 }, 00:19:20.301 "peer_address": { 00:19:20.301 "trtype": "TCP", 00:19:20.301 "adrfam": "IPv4", 00:19:20.301 "traddr": "10.0.0.1", 00:19:20.301 "trsvcid": "34672" 00:19:20.301 }, 00:19:20.301 "auth": { 00:19:20.301 "state": "completed", 00:19:20.301 "digest": "sha512", 00:19:20.301 "dhgroup": "ffdhe8192" 00:19:20.301 } 00:19:20.301 } 00:19:20.301 ]' 00:19:20.301 15:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:20.301 15:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:20.301 15:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:20.301 15:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:20.301 15:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:20.301 15:15:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:20.301 15:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:20.301 15:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:20.562 15:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:N2QyYjhjM2NkOWRlMjIwMWFhYjE2YTI4YjVjYTQ1MmIwNGE0ZTNjMzY2ZjA3ZDgwN2JlYWIxODVlZjAzNWEyNwauYnc=: 00:19:21.158 15:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:21.158 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:21.158 15:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:21.158 15:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.158 15:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.158 15:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.158 15:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:19:21.158 15:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:19:21.158 15:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:19:21.158 15:15:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:21.158 15:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:21.158 15:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:21.419 15:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:19:21.419 15:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:21.419 15:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:21.419 15:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:21.419 15:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:21.419 15:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:21.419 15:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:21.419 15:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.419 15:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.419 15:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:19:21.419 15:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:21.419 15:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:21.992 00:19:21.992 15:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:21.992 15:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:21.992 15:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:21.992 15:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:21.992 15:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:21.992 15:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.992 15:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.992 15:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.992 15:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:21.992 { 00:19:21.992 "cntlid": 145, 00:19:21.992 "qid": 0, 00:19:21.992 "state": "enabled", 
00:19:21.992 "thread": "nvmf_tgt_poll_group_000", 00:19:21.992 "listen_address": { 00:19:21.992 "trtype": "TCP", 00:19:21.992 "adrfam": "IPv4", 00:19:21.992 "traddr": "10.0.0.2", 00:19:21.992 "trsvcid": "4420" 00:19:21.992 }, 00:19:21.992 "peer_address": { 00:19:21.992 "trtype": "TCP", 00:19:21.992 "adrfam": "IPv4", 00:19:21.992 "traddr": "10.0.0.1", 00:19:21.992 "trsvcid": "34692" 00:19:21.992 }, 00:19:21.992 "auth": { 00:19:21.992 "state": "completed", 00:19:21.992 "digest": "sha512", 00:19:21.992 "dhgroup": "ffdhe8192" 00:19:21.992 } 00:19:21.992 } 00:19:21.992 ]' 00:19:21.992 15:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:22.253 15:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:22.253 15:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:22.253 15:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:22.253 15:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:22.253 15:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:22.253 15:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:22.253 15:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:22.514 15:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret 
DHHC-1:00:MTlhYmQ5MjI4OWIxMzFkZTVmNzQ5ZWZlYjYzNTIzOTViZGRmNWNjYTVmMzAyY2MwDN7QAA==: --dhchap-ctrl-secret DHHC-1:03:OTNlN2ZmNGIzNGRmOTIzNzc1ZGVjZTFiOTZjMzE1NmQ4YWM4YzQxOWVjNzM4ZTU2MzVhZGIwMmNjMTZlMmUxMSm6T2Y=: 00:19:23.086 15:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:23.086 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:23.086 15:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:23.086 15:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.086 15:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.086 15:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.086 15:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:19:23.086 15:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.087 15:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.087 15:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.087 15:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:23.087 15:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:19:23.087 
15:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:23.087 15:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:19:23.087 15:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:23.087 15:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:19:23.087 15:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:23.087 15:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:23.087 15:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:23.659 request: 00:19:23.659 { 00:19:23.659 "name": "nvme0", 00:19:23.659 "trtype": "tcp", 00:19:23.659 "traddr": "10.0.0.2", 00:19:23.659 "adrfam": "ipv4", 00:19:23.659 "trsvcid": "4420", 00:19:23.659 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:23.659 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:23.659 "prchk_reftag": false, 00:19:23.659 "prchk_guard": false, 00:19:23.659 "hdgst": false, 00:19:23.659 "ddgst": false, 00:19:23.659 "dhchap_key": "key2", 
00:19:23.659 "method": "bdev_nvme_attach_controller", 00:19:23.659 "req_id": 1 00:19:23.659 } 00:19:23.659 Got JSON-RPC error response 00:19:23.659 response: 00:19:23.659 { 00:19:23.659 "code": -5, 00:19:23.659 "message": "Input/output error" 00:19:23.659 } 00:19:23.659 15:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:19:23.659 15:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:23.659 15:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:23.659 15:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:23.659 15:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:23.659 15:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.659 15:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.659 15:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.659 15:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:23.659 15:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.659 15:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.659 15:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.659 15:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@125 -- # NOT 
hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:23.659 15:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:19:23.659 15:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:23.659 15:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:19:23.659 15:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:23.659 15:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:19:23.659 15:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:23.659 15:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:23.659 15:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:24.230 request: 00:19:24.230 { 00:19:24.230 "name": "nvme0", 00:19:24.230 
"trtype": "tcp", 00:19:24.230 "traddr": "10.0.0.2", 00:19:24.230 "adrfam": "ipv4", 00:19:24.230 "trsvcid": "4420", 00:19:24.230 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:24.230 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:24.230 "prchk_reftag": false, 00:19:24.230 "prchk_guard": false, 00:19:24.230 "hdgst": false, 00:19:24.230 "ddgst": false, 00:19:24.230 "dhchap_key": "key1", 00:19:24.230 "dhchap_ctrlr_key": "ckey2", 00:19:24.230 "method": "bdev_nvme_attach_controller", 00:19:24.230 "req_id": 1 00:19:24.230 } 00:19:24.230 Got JSON-RPC error response 00:19:24.230 response: 00:19:24.230 { 00:19:24.230 "code": -5, 00:19:24.230 "message": "Input/output error" 00:19:24.230 } 00:19:24.230 15:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:19:24.230 15:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:24.230 15:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:24.230 15:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:24.230 15:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:24.230 15:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.230 15:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.230 15:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.230 15:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 
00:19:24.230 15:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.230 15:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.230 15:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.230 15:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:24.230 15:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:19:24.230 15:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:24.230 15:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:19:24.230 15:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:24.230 15:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:19:24.230 15:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:24.230 15:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:24.230 15:15:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:24.802 request: 00:19:24.802 { 00:19:24.802 "name": "nvme0", 00:19:24.802 "trtype": "tcp", 00:19:24.802 "traddr": "10.0.0.2", 00:19:24.802 "adrfam": "ipv4", 00:19:24.802 "trsvcid": "4420", 00:19:24.803 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:24.803 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:24.803 "prchk_reftag": false, 00:19:24.803 "prchk_guard": false, 00:19:24.803 "hdgst": false, 00:19:24.803 "ddgst": false, 00:19:24.803 "dhchap_key": "key1", 00:19:24.803 "dhchap_ctrlr_key": "ckey1", 00:19:24.803 "method": "bdev_nvme_attach_controller", 00:19:24.803 "req_id": 1 00:19:24.803 } 00:19:24.803 Got JSON-RPC error response 00:19:24.803 response: 00:19:24.803 { 00:19:24.803 "code": -5, 00:19:24.803 "message": "Input/output error" 00:19:24.803 } 00:19:24.803 15:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:19:24.803 15:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:24.803 15:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:24.803 15:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:24.803 15:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:24.803 15:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
00:19:24.803 15:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.803 15:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.803 15:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 240038 00:19:24.803 15:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 240038 ']' 00:19:24.803 15:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 240038 00:19:24.803 15:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:19:24.803 15:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:24.803 15:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 240038 00:19:24.803 15:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:24.803 15:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:24.803 15:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 240038' 00:19:24.803 killing process with pid 240038 00:19:24.803 15:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 240038 00:19:24.803 15:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 240038 00:19:24.803 15:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:19:24.803 15:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:24.803 15:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # 
xtrace_disable 00:19:24.803 15:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.803 15:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=266996 00:19:24.803 15:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 266996 00:19:24.803 15:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:19:24.803 15:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 266996 ']' 00:19:24.803 15:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:24.803 15:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:24.803 15:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:19:24.803 15:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:24.803 15:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.747 15:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:25.747 15:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:19:25.747 15:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:25.747 15:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:25.747 15:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.747 15:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:25.747 15:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:19:25.747 15:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 266996 00:19:25.747 15:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 266996 ']' 00:19:25.747 15:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:25.747 15:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:25.747 15:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:25.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:25.747 15:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:25.747 15:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.008 15:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:26.008 15:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:19:26.008 15:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:19:26.009 15:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:26.009 15:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.009 15:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:26.009 15:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:19:26.009 15:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:26.009 15:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:26.009 15:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:26.009 15:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:26.009 15:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:26.009 15:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:26.009 15:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:26.009 
15:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.009 15:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:26.009 15:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:26.009 15:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:26.581 00:19:26.581 15:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:26.581 15:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:26.581 15:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:26.581 15:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:26.581 15:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:26.581 15:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:26.581 15:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.842 15:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:26.842 15:15:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:26.842 { 00:19:26.842 "cntlid": 1, 00:19:26.842 "qid": 0, 00:19:26.842 "state": "enabled", 00:19:26.842 "thread": "nvmf_tgt_poll_group_000", 00:19:26.842 "listen_address": { 00:19:26.842 "trtype": "TCP", 00:19:26.842 "adrfam": "IPv4", 00:19:26.842 "traddr": "10.0.0.2", 00:19:26.842 "trsvcid": "4420" 00:19:26.842 }, 00:19:26.842 "peer_address": { 00:19:26.842 "trtype": "TCP", 00:19:26.842 "adrfam": "IPv4", 00:19:26.842 "traddr": "10.0.0.1", 00:19:26.842 "trsvcid": "45908" 00:19:26.842 }, 00:19:26.842 "auth": { 00:19:26.842 "state": "completed", 00:19:26.842 "digest": "sha512", 00:19:26.842 "dhgroup": "ffdhe8192" 00:19:26.842 } 00:19:26.842 } 00:19:26.842 ]' 00:19:26.842 15:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:26.842 15:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:26.842 15:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:26.842 15:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:26.842 15:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:26.842 15:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:26.842 15:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:26.842 15:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:27.102 15:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:N2QyYjhjM2NkOWRlMjIwMWFhYjE2YTI4YjVjYTQ1MmIwNGE0ZTNjMzY2ZjA3ZDgwN2JlYWIxODVlZjAzNWEyNwauYnc=: 00:19:27.673 15:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:27.673 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:27.673 15:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:27.673 15:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.673 15:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.673 15:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.673 15:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:27.673 15:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.673 15:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.673 15:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.673 15:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:19:27.673 15:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:19:27.933 15:15:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:27.933 15:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:19:27.933 15:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:27.933 15:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:19:27.933 15:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:27.933 15:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:19:27.933 15:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:27.933 15:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:27.933 15:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:28.195 request: 00:19:28.195 { 00:19:28.195 "name": "nvme0", 00:19:28.195 "trtype": "tcp", 00:19:28.195 
"traddr": "10.0.0.2", 00:19:28.195 "adrfam": "ipv4", 00:19:28.195 "trsvcid": "4420", 00:19:28.195 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:28.195 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:28.195 "prchk_reftag": false, 00:19:28.195 "prchk_guard": false, 00:19:28.195 "hdgst": false, 00:19:28.195 "ddgst": false, 00:19:28.195 "dhchap_key": "key3", 00:19:28.195 "method": "bdev_nvme_attach_controller", 00:19:28.195 "req_id": 1 00:19:28.195 } 00:19:28.195 Got JSON-RPC error response 00:19:28.195 response: 00:19:28.195 { 00:19:28.195 "code": -5, 00:19:28.195 "message": "Input/output error" 00:19:28.195 } 00:19:28.195 15:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:19:28.195 15:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:28.195 15:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:28.195 15:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:28.195 15:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:19:28.195 15:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:19:28.195 15:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:19:28.195 15:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:19:28.195 15:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:28.195 15:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:19:28.195 15:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:28.195 15:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:19:28.195 15:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:28.195 15:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:19:28.195 15:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:28.195 15:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:28.195 15:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:28.456 request: 00:19:28.456 { 00:19:28.456 "name": "nvme0", 00:19:28.456 "trtype": "tcp", 00:19:28.456 "traddr": "10.0.0.2", 00:19:28.456 "adrfam": "ipv4", 00:19:28.456 "trsvcid": "4420", 00:19:28.456 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:28.456 
"hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:28.456 "prchk_reftag": false, 00:19:28.456 "prchk_guard": false, 00:19:28.456 "hdgst": false, 00:19:28.456 "ddgst": false, 00:19:28.456 "dhchap_key": "key3", 00:19:28.456 "method": "bdev_nvme_attach_controller", 00:19:28.456 "req_id": 1 00:19:28.456 } 00:19:28.456 Got JSON-RPC error response 00:19:28.456 response: 00:19:28.456 { 00:19:28.456 "code": -5, 00:19:28.456 "message": "Input/output error" 00:19:28.456 } 00:19:28.456 15:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:19:28.456 15:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:28.456 15:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:28.456 15:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:28.456 15:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:19:28.456 15:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:19:28.456 15:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:19:28.456 15:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:28.456 15:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:28.456 15:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 
00:19:28.718 15:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:28.718 15:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.718 15:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.718 15:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.718 15:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:28.718 15:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.718 15:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.718 15:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.718 15:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:28.718 15:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:19:28.718 15:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:28.718 15:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@638 -- # local arg=hostrpc 00:19:28.718 15:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:28.718 15:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:19:28.718 15:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:28.718 15:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:28.718 15:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:28.718 request: 00:19:28.718 { 00:19:28.718 "name": "nvme0", 00:19:28.718 "trtype": "tcp", 00:19:28.718 "traddr": "10.0.0.2", 00:19:28.718 "adrfam": "ipv4", 00:19:28.718 "trsvcid": "4420", 00:19:28.718 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:28.718 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:28.718 "prchk_reftag": false, 00:19:28.718 "prchk_guard": false, 00:19:28.718 "hdgst": false, 00:19:28.718 "ddgst": false, 00:19:28.718 "dhchap_key": "key0", 00:19:28.718 "dhchap_ctrlr_key": "key1", 00:19:28.718 "method": "bdev_nvme_attach_controller", 00:19:28.718 "req_id": 1 00:19:28.718 } 00:19:28.718 Got JSON-RPC error response 00:19:28.718 response: 00:19:28.718 { 00:19:28.718 "code": -5, 00:19:28.718 "message": "Input/output error" 00:19:28.718 } 00:19:28.718 15:15:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:19:28.718 15:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:28.718 15:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:28.718 15:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:28.718 15:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:28.718 15:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:28.979 00:19:28.979 15:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:19:28.979 15:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:28.979 15:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:19:29.240 15:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:29.240 15:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:29.240 15:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:19:29.240 15:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:19:29.240 15:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:19:29.240 15:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 240379 00:19:29.240 15:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 240379 ']' 00:19:29.240 15:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 240379 00:19:29.240 15:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:19:29.240 15:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:29.240 15:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 240379 00:19:29.501 15:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:29.501 15:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:29.501 15:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 240379' 00:19:29.501 killing process with pid 240379 00:19:29.501 15:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 240379 00:19:29.501 15:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 240379 00:19:29.501 15:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:19:29.501 15:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:29.501 15:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:19:29.501 15:15:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:29.501 15:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:19:29.501 15:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:29.501 15:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:29.501 rmmod nvme_tcp 00:19:29.762 rmmod nvme_fabrics 00:19:29.762 rmmod nvme_keyring 00:19:29.762 15:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:29.762 15:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:19:29.762 15:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:19:29.762 15:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 266996 ']' 00:19:29.762 15:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 266996 00:19:29.762 15:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 266996 ']' 00:19:29.762 15:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 266996 00:19:29.762 15:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:19:29.762 15:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:29.762 15:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 266996 00:19:29.762 15:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:29.762 15:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:29.762 15:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 266996' 00:19:29.762 killing process with pid 266996 00:19:29.762 15:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 266996 00:19:29.762 15:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 266996 00:19:29.762 15:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:29.762 15:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:29.762 15:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:29.762 15:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:29.762 15:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:29.762 15:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:29.762 15:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:29.762 15:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:32.309 15:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:32.309 15:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.UEd /tmp/spdk.key-sha256.8od /tmp/spdk.key-sha384.Tqb /tmp/spdk.key-sha512.wld /tmp/spdk.key-sha512.EGb /tmp/spdk.key-sha384.PlJ /tmp/spdk.key-sha256.I0F '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:19:32.309 00:19:32.309 real 2m24.307s 00:19:32.309 user 5m21.355s 00:19:32.309 sys 0m21.310s 00:19:32.309 15:15:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:32.309 15:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.309 ************************************ 00:19:32.309 END TEST nvmf_auth_target 00:19:32.309 ************************************ 00:19:32.309 15:15:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:19:32.309 15:15:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:32.309 15:15:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:19:32.309 15:15:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:32.309 15:15:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:32.310 ************************************ 00:19:32.310 START TEST nvmf_bdevio_no_huge 00:19:32.310 ************************************ 00:19:32.310 15:15:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:32.310 * Looking for test storage... 
00:19:32.310 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:32.310 15:15:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:32.310 15:15:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:19:32.310 15:15:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:32.310 15:15:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:32.310 15:15:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:32.310 15:15:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:32.310 15:15:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:32.310 15:15:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:32.310 15:15:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:32.310 15:15:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:32.310 15:15:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:32.310 15:15:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:32.310 15:15:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:32.310 15:15:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:32.310 15:15:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:32.310 15:15:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:32.310 15:15:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:32.310 15:15:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:32.310 15:15:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:32.310 15:15:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:32.310 15:15:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:32.310 15:15:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:32.310 15:15:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:32.310 15:15:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:32.310 15:15:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:32.310 15:15:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:19:32.310 15:15:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:32.310 15:15:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:19:32.310 15:15:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:32.310 15:15:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:32.310 15:15:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:32.310 15:15:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:32.310 15:15:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:32.310 15:15:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:32.310 15:15:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:32.310 15:15:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:32.310 15:15:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:32.310 15:15:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:32.310 15:15:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:19:32.310 
15:15:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:32.310 15:15:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:32.310 15:15:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:32.310 15:15:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:32.310 15:15:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:32.310 15:15:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:32.310 15:15:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:32.310 15:15:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:32.310 15:15:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:32.310 15:15:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:32.310 15:15:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:19:32.310 15:15:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:38.941 15:15:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:38.941 15:15:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:19:38.941 15:15:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:38.941 15:15:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:38.941 15:15:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:38.942 15:15:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:38.942 15:15:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:38.942 15:15:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:19:38.942 15:15:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:38.942 15:15:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:19:38.942 15:15:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:19:38.942 15:15:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:19:38.942 15:15:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:19:38.942 15:15:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:19:38.942 15:15:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:19:38.942 15:15:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:38.942 15:15:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:38.942 15:15:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:38.942 15:15:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:38.942 15:15:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:38.942 15:15:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:38.942 15:15:31 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:38.942 15:15:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:38.942 15:15:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:38.942 15:15:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:38.942 15:15:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:38.942 15:15:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:38.942 15:15:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:38.942 15:15:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:38.942 15:15:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:38.942 15:15:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:38.942 15:15:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:38.942 15:15:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:38.942 15:15:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:19:38.942 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:19:38.942 15:15:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:38.942 15:15:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:38.942 15:15:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:38.942 15:15:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:38.942 15:15:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:38.942 15:15:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:38.942 15:15:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:19:38.942 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:19:38.942 15:15:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:38.942 15:15:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:38.942 15:15:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:38.942 15:15:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:38.942 15:15:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:38.942 15:15:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:38.942 15:15:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:38.942 15:15:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:38.942 15:15:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:38.942 15:15:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:38.942 15:15:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:38.942 15:15:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:38.942 15:15:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:38.942 15:15:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:38.942 15:15:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:38.942 15:15:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:19:38.942 Found net devices under 0000:4b:00.0: cvl_0_0 00:19:38.942 15:15:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:38.942 15:15:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:38.942 15:15:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:38.942 15:15:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:38.942 15:15:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:38.942 15:15:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:38.942 15:15:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:38.942 15:15:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:38.942 15:15:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:19:38.942 Found net devices under 0000:4b:00.1: cvl_0_1 00:19:38.942 15:15:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:38.942 15:15:31 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:38.942 15:15:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:19:38.942 15:15:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:38.942 15:15:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:38.942 15:15:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:38.942 15:15:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:38.942 15:15:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:38.942 15:15:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:38.942 15:15:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:38.942 15:15:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:38.942 15:15:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:38.942 15:15:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:38.942 15:15:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:38.942 15:15:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:38.942 15:15:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:38.942 15:15:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:38.942 15:15:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:38.942 15:15:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:39.204 15:15:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:39.204 15:15:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:39.204 15:15:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:39.204 15:15:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:39.204 15:15:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:39.204 15:15:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:39.204 15:15:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:39.204 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:39.204 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.676 ms 00:19:39.204 00:19:39.204 --- 10.0.0.2 ping statistics --- 00:19:39.204 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:39.204 rtt min/avg/max/mdev = 0.676/0.676/0.676/0.000 ms 00:19:39.204 15:15:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:39.204 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:39.204 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.395 ms 00:19:39.204 00:19:39.204 --- 10.0.0.1 ping statistics --- 00:19:39.204 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:39.204 rtt min/avg/max/mdev = 0.395/0.395/0.395/0.000 ms 00:19:39.204 15:15:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:39.204 15:15:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:19:39.204 15:15:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:39.204 15:15:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:39.204 15:15:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:39.204 15:15:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:39.204 15:15:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:39.204 15:15:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:39.204 15:15:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:39.204 15:15:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:19:39.204 15:15:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:39.204 15:15:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:39.205 15:15:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:39.467 15:15:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=272049 00:19:39.467 15:15:31 
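The namespace setup that nvmf/common.sh's nvmf_tcp_init performs above (flush addresses, create the netns, move the target interface into it, assign 10.0.0.x/24 addresses, bring links up, open TCP port 4420, then verify with ping in both directions) can be summarized as the following command sequence. This is an illustrative sketch only: the interface names cvl_0_0/cvl_0_1, namespace name, and IPs are the values from this particular run, and actually executing these commands requires root.

```python
# Command sequence mirroring the nvmf_tcp_init steps logged above.
# Built as data so it can be inspected (or fed to subprocess.run under root).
TARGET_IF, INITIATOR_IF = "cvl_0_0", "cvl_0_1"   # names from this run
NS = "cvl_0_0_ns_spdk"
TARGET_IP, INITIATOR_IP = "10.0.0.2", "10.0.0.1"

setup_cmds = [
    f"ip -4 addr flush {TARGET_IF}",
    f"ip -4 addr flush {INITIATOR_IF}",
    f"ip netns add {NS}",
    f"ip link set {TARGET_IF} netns {NS}",              # target NIC lives in the netns
    f"ip addr add {INITIATOR_IP}/24 dev {INITIATOR_IF}",
    f"ip netns exec {NS} ip addr add {TARGET_IP}/24 dev {TARGET_IF}",
    f"ip link set {INITIATOR_IF} up",
    f"ip netns exec {NS} ip link set {TARGET_IF} up",
    f"ip netns exec {NS} ip link set lo up",
    f"iptables -I INPUT 1 -i {INITIATOR_IF} -p tcp --dport 4420 -j ACCEPT",
    f"ping -c 1 {TARGET_IP}",                           # initiator -> target
    f"ip netns exec {NS} ping -c 1 {INITIATOR_IP}",     # target -> initiator
]
for cmd in setup_cmds:
    print(cmd)
```

After this sequence succeeds, the test prefixes every nvmf_tgt invocation with `ip netns exec cvl_0_0_ns_spdk` so the target only sees the namespaced interface.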
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 272049 00:19:39.467 15:15:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:19:39.467 15:15:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # '[' -z 272049 ']' 00:19:39.467 15:15:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:39.467 15:15:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:39.467 15:15:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:39.467 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:39.467 15:15:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:39.467 15:15:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:39.467 [2024-07-25 15:15:31.450122] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:19:39.467 [2024-07-25 15:15:31.450189] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:19:39.467 [2024-07-25 15:15:31.545264] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:39.467 [2024-07-25 15:15:31.653268] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:19:39.467 [2024-07-25 15:15:31.653323] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:39.467 [2024-07-25 15:15:31.653332] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:39.467 [2024-07-25 15:15:31.653339] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:39.467 [2024-07-25 15:15:31.653345] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:39.467 [2024-07-25 15:15:31.653513] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:19:39.467 [2024-07-25 15:15:31.653657] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:19:39.467 [2024-07-25 15:15:31.653817] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:39.467 [2024-07-25 15:15:31.653818] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:19:40.412 15:15:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:40.412 15:15:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # return 0 00:19:40.412 15:15:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:40.412 15:15:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:40.412 15:15:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:40.412 15:15:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:40.412 15:15:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:40.412 15:15:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:19:40.412 15:15:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:40.412 [2024-07-25 15:15:32.288873] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:40.412 15:15:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.412 15:15:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:40.412 15:15:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.412 15:15:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:40.412 Malloc0 00:19:40.412 15:15:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.412 15:15:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:40.412 15:15:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.412 15:15:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:40.412 15:15:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.412 15:15:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:40.412 15:15:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.412 15:15:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:40.412 15:15:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.412 15:15:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:40.412 15:15:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.412 15:15:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:40.412 [2024-07-25 15:15:32.342016] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:40.412 15:15:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.412 15:15:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:19:40.412 15:15:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:19:40.412 15:15:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:19:40.412 15:15:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:19:40.412 15:15:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:40.412 15:15:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:40.412 { 00:19:40.412 "params": { 00:19:40.412 "name": "Nvme$subsystem", 00:19:40.412 "trtype": "$TEST_TRANSPORT", 00:19:40.412 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:40.412 "adrfam": "ipv4", 00:19:40.412 "trsvcid": "$NVMF_PORT", 00:19:40.412 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:40.412 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:40.412 "hdgst": ${hdgst:-false}, 00:19:40.412 "ddgst": ${ddgst:-false} 00:19:40.412 }, 00:19:40.412 "method": "bdev_nvme_attach_controller" 00:19:40.412 } 00:19:40.412 EOF 00:19:40.412 )") 00:19:40.412 15:15:32 
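The gen_nvmf_target_json heredoc expanded in the log above resolves, per the printf output that follows, to one subsystem entry telling bdevio to attach an NVMe-oF controller over TCP. A minimal reconstruction of that resolved object, using the values from this run (the surrounding SPDK JSON-config envelope that `--json /dev/fd/62` ultimately consumes is not shown here):

```python
import json

# The resolved config entry printed by nvmf/common.sh@558 in the log above,
# with $TEST_TRANSPORT=tcp, $NVMF_FIRST_TARGET_IP=10.0.0.2, $NVMF_PORT=4420
# and hdgst/ddgst defaulting to false.
subsystem = {
    "params": {
        "name": "Nvme1",
        "trtype": "tcp",
        "traddr": "10.0.0.2",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1",
        "hdgst": False,
        "ddgst": False,
    },
    "method": "bdev_nvme_attach_controller",
}
print(json.dumps(subsystem, indent=2))
```

The resulting Nvme1n1 bdev is what the CUnit suite below exercises.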
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:19:40.412 15:15:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:19:40.412 15:15:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:19:40.412 15:15:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:40.412 "params": { 00:19:40.412 "name": "Nvme1", 00:19:40.412 "trtype": "tcp", 00:19:40.412 "traddr": "10.0.0.2", 00:19:40.412 "adrfam": "ipv4", 00:19:40.412 "trsvcid": "4420", 00:19:40.412 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:40.412 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:40.412 "hdgst": false, 00:19:40.412 "ddgst": false 00:19:40.412 }, 00:19:40.412 "method": "bdev_nvme_attach_controller" 00:19:40.412 }' 00:19:40.412 [2024-07-25 15:15:32.396685] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:19:40.412 [2024-07-25 15:15:32.396756] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid272396 ] 00:19:40.412 [2024-07-25 15:15:32.466951] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:40.412 [2024-07-25 15:15:32.563541] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:40.412 [2024-07-25 15:15:32.563664] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:40.412 [2024-07-25 15:15:32.563667] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:40.673 I/O targets: 00:19:40.673 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:19:40.673 00:19:40.673 00:19:40.673 CUnit - A unit testing framework for C - Version 2.1-3 00:19:40.673 http://cunit.sourceforge.net/ 00:19:40.673 00:19:40.673 00:19:40.673 Suite: bdevio tests on: Nvme1n1 00:19:40.673 Test: blockdev write read block 
...passed 00:19:40.673 Test: blockdev write zeroes read block ...passed 00:19:40.673 Test: blockdev write zeroes read no split ...passed 00:19:40.935 Test: blockdev write zeroes read split ...passed 00:19:40.935 Test: blockdev write zeroes read split partial ...passed 00:19:40.935 Test: blockdev reset ...[2024-07-25 15:15:32.959514] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:40.935 [2024-07-25 15:15:32.959572] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203cc10 (9): Bad file descriptor 00:19:40.935 [2024-07-25 15:15:33.012045] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:19:40.935 passed 00:19:40.935 Test: blockdev write read 8 blocks ...passed 00:19:40.935 Test: blockdev write read size > 128k ...passed 00:19:40.935 Test: blockdev write read invalid size ...passed 00:19:40.935 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:40.935 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:40.935 Test: blockdev write read max offset ...passed 00:19:41.196 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:41.196 Test: blockdev writev readv 8 blocks ...passed 00:19:41.196 Test: blockdev writev readv 30 x 1block ...passed 00:19:41.196 Test: blockdev writev readv block ...passed 00:19:41.196 Test: blockdev writev readv size > 128k ...passed 00:19:41.196 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:41.196 Test: blockdev comparev and writev ...[2024-07-25 15:15:33.246144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:41.196 [2024-07-25 15:15:33.246169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:41.196 [2024-07-25 15:15:33.246179] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:41.196 [2024-07-25 15:15:33.246185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:41.196 [2024-07-25 15:15:33.246832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:41.196 [2024-07-25 15:15:33.246842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:41.196 [2024-07-25 15:15:33.246851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:41.196 [2024-07-25 15:15:33.246856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:41.196 [2024-07-25 15:15:33.247471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:41.197 [2024-07-25 15:15:33.247479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:41.197 [2024-07-25 15:15:33.247489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:41.197 [2024-07-25 15:15:33.247494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:41.197 [2024-07-25 15:15:33.248109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:41.197 [2024-07-25 15:15:33.248121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 
p:0 m:0 dnr:0 00:19:41.197 [2024-07-25 15:15:33.248130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:41.197 [2024-07-25 15:15:33.248135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:41.197 passed 00:19:41.197 Test: blockdev nvme passthru rw ...passed 00:19:41.197 Test: blockdev nvme passthru vendor specific ...[2024-07-25 15:15:33.333350] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:41.197 [2024-07-25 15:15:33.333362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:41.197 [2024-07-25 15:15:33.333854] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:41.197 [2024-07-25 15:15:33.333862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:41.197 [2024-07-25 15:15:33.334320] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:41.197 [2024-07-25 15:15:33.334328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:41.197 [2024-07-25 15:15:33.334808] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:41.197 [2024-07-25 15:15:33.334816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:41.197 passed 00:19:41.197 Test: blockdev nvme admin passthru ...passed 00:19:41.457 Test: blockdev copy ...passed 00:19:41.457 00:19:41.457 Run Summary: Type Total Ran Passed Failed Inactive 
00:19:41.457 suites 1 1 n/a 0 0 00:19:41.457 tests 23 23 23 0 0 00:19:41.457 asserts 152 152 152 0 n/a 00:19:41.457 00:19:41.457 Elapsed time = 1.291 seconds 00:19:41.718 15:15:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:41.718 15:15:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.719 15:15:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:41.719 15:15:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.719 15:15:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:19:41.719 15:15:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:19:41.719 15:15:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:41.719 15:15:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:19:41.719 15:15:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:41.719 15:15:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:19:41.719 15:15:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:41.719 15:15:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:41.719 rmmod nvme_tcp 00:19:41.719 rmmod nvme_fabrics 00:19:41.719 rmmod nvme_keyring 00:19:41.719 15:15:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:41.719 15:15:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:19:41.719 15:15:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:19:41.719 
15:15:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 272049 ']' 00:19:41.719 15:15:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 272049 00:19:41.719 15:15:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # '[' -z 272049 ']' 00:19:41.719 15:15:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # kill -0 272049 00:19:41.719 15:15:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # uname 00:19:41.719 15:15:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:41.719 15:15:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 272049 00:19:41.719 15:15:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:19:41.719 15:15:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:19:41.719 15:15:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@968 -- # echo 'killing process with pid 272049' 00:19:41.719 killing process with pid 272049 00:19:41.719 15:15:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@969 -- # kill 272049 00:19:41.719 15:15:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@974 -- # wait 272049 00:19:41.980 15:15:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:41.980 15:15:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:41.980 15:15:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:41.980 15:15:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s ]] 00:19:41.980 15:15:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:41.980 15:15:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:41.980 15:15:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:41.980 15:15:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:44.529 15:15:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:44.529 00:19:44.529 real 0m12.090s 00:19:44.529 user 0m13.738s 00:19:44.529 sys 0m6.313s 00:19:44.529 15:15:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:44.529 15:15:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:44.529 ************************************ 00:19:44.529 END TEST nvmf_bdevio_no_huge 00:19:44.529 ************************************ 00:19:44.529 15:15:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:44.529 15:15:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:44.529 15:15:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:44.529 15:15:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:44.529 ************************************ 00:19:44.529 START TEST nvmf_tls 00:19:44.529 ************************************ 00:19:44.529 15:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:44.529 * Looking for test storage... 
00:19:44.529 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:44.529 15:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:44.529 15:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:19:44.529 15:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:44.529 15:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:44.529 15:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:44.529 15:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:44.529 15:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:44.529 15:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:44.529 15:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:44.529 15:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:44.529 15:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:44.529 15:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:44.529 15:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:44.529 15:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:44.529 15:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:44.529 15:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:44.529 
15:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:44.529 15:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:44.529 15:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:44.529 15:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:44.529 15:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:44.529 15:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:44.529 15:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:44.529 15:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:44.529 15:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:44.529 15:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:19:44.529 15:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:44.529 15:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:19:44.529 15:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:44.529 15:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:44.529 15:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:44.529 15:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:44.529 15:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:44.529 15:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:44.529 15:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:44.529 15:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:44.529 15:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:44.529 15:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:19:44.529 15:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:44.529 15:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:19:44.529 15:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:44.529 15:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:44.529 15:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:44.529 15:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:44.529 15:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:44.529 15:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:44.529 15:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:44.529 15:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:44.529 15:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:19:44.529 15:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:51.125 15:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:51.125 15:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:19:51.125 15:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:51.125 15:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:51.125 15:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:51.125 15:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:51.125 15:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:51.125 15:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:19:51.125 15:15:43 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:51.125 15:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:19:51.125 15:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:19:51.125 15:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:19:51.125 15:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:19:51.125 15:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:19:51.125 15:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:19:51.125 15:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:51.125 15:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:51.125 15:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:51.125 15:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:51.125 15:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:51.125 15:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:51.125 15:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:51.125 15:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:51.125 15:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:51.125 15:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:51.125 15:15:43 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:51.125 15:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:51.125 15:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:51.125 15:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:51.125 15:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:51.125 15:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:51.125 15:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:51.125 15:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:51.125 15:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:19:51.125 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:19:51.125 15:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:51.125 15:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:51.125 15:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:51.125 15:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:51.125 15:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:51.125 15:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:51.125 15:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:19:51.125 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:19:51.125 15:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:51.125 15:15:43 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:51.125 15:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:51.125 15:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:51.125 15:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:51.125 15:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:51.125 15:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:51.125 15:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:51.125 15:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:51.125 15:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:51.125 15:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:51.125 15:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:51.125 15:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:51.125 15:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:51.125 15:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:51.125 15:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:19:51.125 Found net devices under 0000:4b:00.0: cvl_0_0 00:19:51.125 15:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:51.125 15:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:51.125 15:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:51.125 15:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:51.125 15:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:51.125 15:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:51.125 15:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:51.125 15:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:51.125 15:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:19:51.125 Found net devices under 0000:4b:00.1: cvl_0_1 00:19:51.125 15:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:51.125 15:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:51.125 15:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:19:51.125 15:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:51.125 15:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:51.125 15:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:51.125 15:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:51.125 15:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:51.125 15:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:51.125 15:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:51.125 15:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 
00:19:51.125 15:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:51.125 15:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:51.125 15:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:51.125 15:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:51.126 15:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:51.126 15:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:51.126 15:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:51.126 15:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:51.387 15:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:51.387 15:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:51.387 15:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:51.387 15:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:51.387 15:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:51.387 15:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:51.387 15:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:51.387 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:51.387 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.604 ms 00:19:51.387 00:19:51.387 --- 10.0.0.2 ping statistics --- 00:19:51.387 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:51.387 rtt min/avg/max/mdev = 0.604/0.604/0.604/0.000 ms 00:19:51.387 15:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:51.387 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:51.387 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.375 ms 00:19:51.387 00:19:51.387 --- 10.0.0.1 ping statistics --- 00:19:51.387 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:51.387 rtt min/avg/max/mdev = 0.375/0.375/0.375/0.000 ms 00:19:51.387 15:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:51.387 15:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:19:51.387 15:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:51.387 15:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:51.387 15:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:51.387 15:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:51.387 15:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:51.387 15:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:51.387 15:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:51.387 15:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:19:51.387 15:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:51.387 15:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- common/autotest_common.sh@724 -- # xtrace_disable 00:19:51.387 15:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:51.387 15:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=276728 00:19:51.387 15:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 276728 00:19:51.387 15:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:19:51.387 15:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 276728 ']' 00:19:51.387 15:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:51.387 15:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:51.387 15:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:51.387 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:51.387 15:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:51.387 15:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:51.648 [2024-07-25 15:15:43.595195] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:19:51.648 [2024-07-25 15:15:43.595273] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:51.648 EAL: No free 2048 kB hugepages reported on node 1 00:19:51.648 [2024-07-25 15:15:43.681597] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:51.648 [2024-07-25 15:15:43.745025] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:51.648 [2024-07-25 15:15:43.745064] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:51.648 [2024-07-25 15:15:43.745071] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:51.648 [2024-07-25 15:15:43.745078] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:51.648 [2024-07-25 15:15:43.745083] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:51.648 [2024-07-25 15:15:43.745104] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:52.222 15:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:52.222 15:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:52.222 15:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:52.222 15:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:52.222 15:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:52.484 15:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:52.484 15:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:19:52.484 15:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:19:52.484 true 00:19:52.484 15:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:19:52.484 15:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:52.745 15:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # version=0 00:19:52.745 15:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:19:52.745 15:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:19:53.005 15:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:53.005 15:15:44 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:19:53.005 15:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # version=13 00:19:53.005 15:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:19:53.005 15:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:19:53.266 15:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:53.266 15:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:19:53.526 15:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # version=7 00:19:53.526 15:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:19:53.526 15:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:53.526 15:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:19:53.526 15:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:19:53.526 15:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:19:53.526 15:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:19:53.787 15:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:53.787 15:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:19:54.047 15:15:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:19:54.047 
15:15:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:19:54.048 15:15:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:19:54.048 15:15:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:54.048 15:15:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:19:54.308 15:15:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:19:54.308 15:15:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:19:54.308 15:15:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:19:54.308 15:15:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:19:54.308 15:15:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:19:54.309 15:15:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:19:54.309 15:15:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:19:54.309 15:15:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:19:54.309 15:15:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:19:54.309 15:15:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:54.309 15:15:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:19:54.309 15:15:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 
ffeeddccbbaa99887766554433221100 1 00:19:54.309 15:15:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:19:54.309 15:15:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:19:54.309 15:15:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:19:54.309 15:15:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:19:54.309 15:15:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:19:54.309 15:15:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:54.309 15:15:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:19:54.309 15:15:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.1xLjDCxZB2 00:19:54.309 15:15:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:19:54.309 15:15:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.GNXv84ST0t 00:19:54.309 15:15:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:54.309 15:15:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:54.309 15:15:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.1xLjDCxZB2 00:19:54.309 15:15:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.GNXv84ST0t 00:19:54.309 15:15:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:19:54.571 15:15:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:19:54.831 15:15:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.1xLjDCxZB2 00:19:54.831 15:15:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.1xLjDCxZB2 00:19:54.831 15:15:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:54.831 [2024-07-25 15:15:46.941182] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:54.831 15:15:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:55.091 15:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:55.091 [2024-07-25 15:15:47.245932] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:55.091 [2024-07-25 15:15:47.246131] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:55.091 15:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:55.352 malloc0 00:19:55.352 15:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:55.612 15:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.1xLjDCxZB2 00:19:55.612 
[2024-07-25 15:15:47.709015] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:55.612 15:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.1xLjDCxZB2 00:19:55.612 EAL: No free 2048 kB hugepages reported on node 1 00:20:07.878 Initializing NVMe Controllers 00:20:07.878 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:07.878 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:07.878 Initialization complete. Launching workers. 00:20:07.878 ======================================================== 00:20:07.878 Latency(us) 00:20:07.878 Device Information : IOPS MiB/s Average min max 00:20:07.878 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 19007.65 74.25 3367.08 1118.57 5450.89 00:20:07.878 ======================================================== 00:20:07.878 Total : 19007.65 74.25 3367.08 1118.57 5450.89 00:20:07.878 00:20:07.878 15:15:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.1xLjDCxZB2 00:20:07.878 15:15:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:07.878 15:15:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:07.878 15:15:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:07.878 15:15:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.1xLjDCxZB2' 00:20:07.878 15:15:57 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:07.878 15:15:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=279480 00:20:07.878 15:15:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:07.878 15:15:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 279480 /var/tmp/bdevperf.sock 00:20:07.878 15:15:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:07.878 15:15:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 279480 ']' 00:20:07.878 15:15:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:07.878 15:15:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:07.878 15:15:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:07.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:07.878 15:15:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:07.878 15:15:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:07.878 [2024-07-25 15:15:57.897204] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:20:07.878 [2024-07-25 15:15:57.897261] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid279480 ] 00:20:07.878 EAL: No free 2048 kB hugepages reported on node 1 00:20:07.878 [2024-07-25 15:15:57.946658] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:07.878 [2024-07-25 15:15:57.999712] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:07.878 15:15:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:07.878 15:15:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:07.878 15:15:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.1xLjDCxZB2 00:20:07.878 [2024-07-25 15:15:58.796427] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:07.878 [2024-07-25 15:15:58.796479] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:07.878 TLSTESTn1 00:20:07.878 15:15:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:07.878 Running I/O for 10 seconds... 
00:20:17.884 00:20:17.884 Latency(us) 00:20:17.884 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:17.884 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:17.884 Verification LBA range: start 0x0 length 0x2000 00:20:17.884 TLSTESTn1 : 10.07 2043.31 7.98 0.00 0.00 62437.29 6116.69 139810.13 00:20:17.884 =================================================================================================================== 00:20:17.884 Total : 2043.31 7.98 0.00 0.00 62437.29 6116.69 139810.13 00:20:17.884 0 00:20:17.884 15:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:17.884 15:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # killprocess 279480 00:20:17.884 15:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 279480 ']' 00:20:17.884 15:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 279480 00:20:17.884 15:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:17.884 15:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:17.884 15:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 279480 00:20:17.884 15:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:20:17.884 15:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:20:17.884 15:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 279480' 00:20:17.884 killing process with pid 279480 00:20:17.884 15:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 279480 00:20:17.885 Received shutdown signal, test time was about 10.000000 seconds 00:20:17.885 
00:20:17.885 Latency(us) 00:20:17.885 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:17.885 =================================================================================================================== 00:20:17.885 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:17.885 [2024-07-25 15:16:09.168445] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:17.885 15:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 279480 00:20:17.885 15:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.GNXv84ST0t 00:20:17.885 15:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:20:17.885 15:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.GNXv84ST0t 00:20:17.885 15:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:20:17.885 15:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:17.885 15:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:20:17.885 15:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:17.885 15:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.GNXv84ST0t 00:20:17.885 15:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:17.885 15:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:17.885 15:16:09 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:17.885 15:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.GNXv84ST0t' 00:20:17.885 15:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:17.885 15:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=281828 00:20:17.885 15:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:17.885 15:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 281828 /var/tmp/bdevperf.sock 00:20:17.885 15:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:17.885 15:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 281828 ']' 00:20:17.885 15:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:17.885 15:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:17.885 15:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:17.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:17.885 15:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:17.885 15:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:17.885 [2024-07-25 15:16:09.341760] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:20:17.885 [2024-07-25 15:16:09.341820] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid281828 ] 00:20:17.885 EAL: No free 2048 kB hugepages reported on node 1 00:20:17.885 [2024-07-25 15:16:09.390713] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:17.885 [2024-07-25 15:16:09.442887] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:18.146 15:16:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:18.146 15:16:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:18.146 15:16:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.GNXv84ST0t 00:20:18.146 [2024-07-25 15:16:10.255534] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:18.146 [2024-07-25 15:16:10.255591] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:18.147 [2024-07-25 15:16:10.260066] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:18.147 [2024-07-25 15:16:10.260691] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x96dec0 (107): Transport endpoint is not connected 00:20:18.147 [2024-07-25 15:16:10.261685] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x96dec0 (9): 
Bad file descriptor 00:20:18.147 [2024-07-25 15:16:10.262687] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:18.147 [2024-07-25 15:16:10.262696] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:18.147 [2024-07-25 15:16:10.262703] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:18.147 request: 00:20:18.147 { 00:20:18.147 "name": "TLSTEST", 00:20:18.147 "trtype": "tcp", 00:20:18.147 "traddr": "10.0.0.2", 00:20:18.147 "adrfam": "ipv4", 00:20:18.147 "trsvcid": "4420", 00:20:18.147 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:18.147 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:18.147 "prchk_reftag": false, 00:20:18.147 "prchk_guard": false, 00:20:18.147 "hdgst": false, 00:20:18.147 "ddgst": false, 00:20:18.147 "psk": "/tmp/tmp.GNXv84ST0t", 00:20:18.147 "method": "bdev_nvme_attach_controller", 00:20:18.147 "req_id": 1 00:20:18.147 } 00:20:18.147 Got JSON-RPC error response 00:20:18.147 response: 00:20:18.147 { 00:20:18.147 "code": -5, 00:20:18.147 "message": "Input/output error" 00:20:18.147 } 00:20:18.147 15:16:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 281828 00:20:18.147 15:16:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 281828 ']' 00:20:18.147 15:16:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 281828 00:20:18.147 15:16:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:18.147 15:16:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:18.147 15:16:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 281828 00:20:18.147 15:16:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:20:18.147 15:16:10 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:20:18.147 15:16:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 281828' 00:20:18.147 killing process with pid 281828 00:20:18.147 15:16:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 281828 00:20:18.147 Received shutdown signal, test time was about 10.000000 seconds 00:20:18.147 00:20:18.147 Latency(us) 00:20:18.147 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:18.147 =================================================================================================================== 00:20:18.147 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:18.147 [2024-07-25 15:16:10.331401] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:18.147 15:16:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 281828 00:20:18.408 15:16:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:18.408 15:16:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:20:18.408 15:16:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:18.408 15:16:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:18.408 15:16:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:18.408 15:16:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.1xLjDCxZB2 00:20:18.408 15:16:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:20:18.408 15:16:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg 
run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.1xLjDCxZB2 00:20:18.408 15:16:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:20:18.408 15:16:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:18.408 15:16:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:20:18.408 15:16:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:18.408 15:16:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.1xLjDCxZB2 00:20:18.408 15:16:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:18.408 15:16:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:18.408 15:16:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:20:18.408 15:16:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.1xLjDCxZB2' 00:20:18.408 15:16:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:18.408 15:16:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=282012 00:20:18.408 15:16:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:18.408 15:16:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 282012 /var/tmp/bdevperf.sock 00:20:18.408 15:16:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:18.408 15:16:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@831 -- # '[' -z 282012 ']' 00:20:18.408 15:16:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:18.408 15:16:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:18.408 15:16:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:18.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:18.408 15:16:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:18.408 15:16:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:18.408 [2024-07-25 15:16:10.490376] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:20:18.408 [2024-07-25 15:16:10.490432] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid282012 ] 00:20:18.408 EAL: No free 2048 kB hugepages reported on node 1 00:20:18.408 [2024-07-25 15:16:10.539839] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:18.409 [2024-07-25 15:16:10.592292] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:19.351 15:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:19.352 15:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:19.352 15:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 
-q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.1xLjDCxZB2 00:20:19.352 [2024-07-25 15:16:11.388960] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:19.352 [2024-07-25 15:16:11.389023] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:19.352 [2024-07-25 15:16:11.400721] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:19.352 [2024-07-25 15:16:11.400740] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:19.352 [2024-07-25 15:16:11.400759] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:19.352 [2024-07-25 15:16:11.401189] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23acec0 (107): Transport endpoint is not connected 00:20:19.352 [2024-07-25 15:16:11.402183] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23acec0 (9): Bad file descriptor 00:20:19.352 [2024-07-25 15:16:11.403185] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:19.352 [2024-07-25 15:16:11.403193] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:19.352 [2024-07-25 15:16:11.403203] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:20:19.352 request: 00:20:19.352 { 00:20:19.352 "name": "TLSTEST", 00:20:19.352 "trtype": "tcp", 00:20:19.352 "traddr": "10.0.0.2", 00:20:19.352 "adrfam": "ipv4", 00:20:19.352 "trsvcid": "4420", 00:20:19.352 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:19.352 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:19.352 "prchk_reftag": false, 00:20:19.352 "prchk_guard": false, 00:20:19.352 "hdgst": false, 00:20:19.352 "ddgst": false, 00:20:19.352 "psk": "/tmp/tmp.1xLjDCxZB2", 00:20:19.352 "method": "bdev_nvme_attach_controller", 00:20:19.352 "req_id": 1 00:20:19.352 } 00:20:19.352 Got JSON-RPC error response 00:20:19.352 response: 00:20:19.352 { 00:20:19.352 "code": -5, 00:20:19.352 "message": "Input/output error" 00:20:19.352 } 00:20:19.352 15:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 282012 00:20:19.352 15:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 282012 ']' 00:20:19.352 15:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 282012 00:20:19.352 15:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:19.352 15:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:19.352 15:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 282012 00:20:19.352 15:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:20:19.352 15:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:20:19.352 15:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 282012' 00:20:19.352 killing process with pid 282012 00:20:19.352 15:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 282012 00:20:19.352 Received shutdown signal, test time was about 
10.000000 seconds 00:20:19.352 00:20:19.352 Latency(us) 00:20:19.352 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:19.352 =================================================================================================================== 00:20:19.352 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:19.352 [2024-07-25 15:16:11.473859] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:19.352 15:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 282012 00:20:19.613 15:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:19.613 15:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:20:19.613 15:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:19.613 15:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:19.613 15:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:19.613 15:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.1xLjDCxZB2 00:20:19.613 15:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:20:19.613 15:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.1xLjDCxZB2 00:20:19.613 15:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:20:19.613 15:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:19.613 15:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 
00:20:19.613 15:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:19.613 15:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.1xLjDCxZB2 00:20:19.613 15:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:19.613 15:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:20:19.613 15:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:19.613 15:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.1xLjDCxZB2' 00:20:19.613 15:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:19.613 15:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=282187 00:20:19.613 15:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:19.613 15:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 282187 /var/tmp/bdevperf.sock 00:20:19.613 15:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:19.613 15:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 282187 ']' 00:20:19.613 15:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:19.613 15:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:19.613 15:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:20:19.613 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:19.614 15:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:19.614 15:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:19.614 [2024-07-25 15:16:11.631542] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:20:19.614 [2024-07-25 15:16:11.631598] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid282187 ] 00:20:19.614 EAL: No free 2048 kB hugepages reported on node 1 00:20:19.614 [2024-07-25 15:16:11.680220] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:19.614 [2024-07-25 15:16:11.732058] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:20.557 15:16:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:20.557 15:16:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:20.557 15:16:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.1xLjDCxZB2 00:20:20.557 [2024-07-25 15:16:12.536741] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:20.557 [2024-07-25 15:16:12.536801] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:20.557 [2024-07-25 15:16:12.543493] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could 
not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:20.557 [2024-07-25 15:16:12.543511] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:20.557 [2024-07-25 15:16:12.543528] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:20.557 [2024-07-25 15:16:12.543965] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbe1ec0 (107): Transport endpoint is not connected 00:20:20.557 [2024-07-25 15:16:12.544960] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbe1ec0 (9): Bad file descriptor 00:20:20.557 [2024-07-25 15:16:12.545961] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:20:20.557 [2024-07-25 15:16:12.545969] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:20.557 [2024-07-25 15:16:12.545976] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:20:20.557 request: 00:20:20.557 { 00:20:20.557 "name": "TLSTEST", 00:20:20.557 "trtype": "tcp", 00:20:20.557 "traddr": "10.0.0.2", 00:20:20.557 "adrfam": "ipv4", 00:20:20.557 "trsvcid": "4420", 00:20:20.557 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:20.557 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:20.557 "prchk_reftag": false, 00:20:20.557 "prchk_guard": false, 00:20:20.557 "hdgst": false, 00:20:20.557 "ddgst": false, 00:20:20.557 "psk": "/tmp/tmp.1xLjDCxZB2", 00:20:20.557 "method": "bdev_nvme_attach_controller", 00:20:20.557 "req_id": 1 00:20:20.557 } 00:20:20.557 Got JSON-RPC error response 00:20:20.557 response: 00:20:20.557 { 00:20:20.557 "code": -5, 00:20:20.557 "message": "Input/output error" 00:20:20.557 } 00:20:20.557 15:16:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 282187 00:20:20.557 15:16:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 282187 ']' 00:20:20.557 15:16:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 282187 00:20:20.557 15:16:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:20.557 15:16:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:20.557 15:16:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 282187 00:20:20.557 15:16:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:20:20.557 15:16:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:20:20.557 15:16:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 282187' 00:20:20.557 killing process with pid 282187 00:20:20.557 15:16:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 282187 00:20:20.557 Received shutdown signal, test time was about 
10.000000 seconds 00:20:20.557 00:20:20.557 Latency(us) 00:20:20.557 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:20.557 =================================================================================================================== 00:20:20.557 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:20.557 [2024-07-25 15:16:12.618553] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:20.557 15:16:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 282187 00:20:20.557 15:16:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:20.557 15:16:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:20:20.557 15:16:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:20.557 15:16:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:20.557 15:16:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:20.557 15:16:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:20.558 15:16:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:20:20.558 15:16:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:20.558 15:16:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:20:20.558 15:16:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:20.558 15:16:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:20:20.558 15:16:12 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:20.558 15:16:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:20.558 15:16:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:20.558 15:16:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:20.558 15:16:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:20.558 15:16:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:20:20.558 15:16:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:20.558 15:16:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=282523 00:20:20.558 15:16:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:20.558 15:16:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 282523 /var/tmp/bdevperf.sock 00:20:20.558 15:16:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:20.558 15:16:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 282523 ']' 00:20:20.558 15:16:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:20.558 15:16:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:20.558 15:16:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:20:20.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:20.558 15:16:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:20.558 15:16:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:20.819 [2024-07-25 15:16:12.775120] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:20:20.819 [2024-07-25 15:16:12.775174] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid282523 ] 00:20:20.819 EAL: No free 2048 kB hugepages reported on node 1 00:20:20.819 [2024-07-25 15:16:12.825002] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:20.819 [2024-07-25 15:16:12.876175] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:21.391 15:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:21.391 15:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:21.391 15:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:21.652 [2024-07-25 15:16:13.696338] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:21.652 [2024-07-25 15:16:13.698592] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6764a0 (9): Bad file descriptor 00:20:21.652 [2024-07-25 15:16:13.699591] 
nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:21.652 [2024-07-25 15:16:13.699600] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:21.653 [2024-07-25 15:16:13.699607] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:21.653 request: 00:20:21.653 { 00:20:21.653 "name": "TLSTEST", 00:20:21.653 "trtype": "tcp", 00:20:21.653 "traddr": "10.0.0.2", 00:20:21.653 "adrfam": "ipv4", 00:20:21.653 "trsvcid": "4420", 00:20:21.653 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:21.653 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:21.653 "prchk_reftag": false, 00:20:21.653 "prchk_guard": false, 00:20:21.653 "hdgst": false, 00:20:21.653 "ddgst": false, 00:20:21.653 "method": "bdev_nvme_attach_controller", 00:20:21.653 "req_id": 1 00:20:21.653 } 00:20:21.653 Got JSON-RPC error response 00:20:21.653 response: 00:20:21.653 { 00:20:21.653 "code": -5, 00:20:21.653 "message": "Input/output error" 00:20:21.653 } 00:20:21.653 15:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 282523 00:20:21.653 15:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 282523 ']' 00:20:21.653 15:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 282523 00:20:21.653 15:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:21.653 15:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:21.653 15:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 282523 00:20:21.653 15:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:20:21.653 15:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:20:21.653 15:16:13 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 282523' 00:20:21.653 killing process with pid 282523 00:20:21.653 15:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 282523 00:20:21.653 Received shutdown signal, test time was about 10.000000 seconds 00:20:21.653 00:20:21.653 Latency(us) 00:20:21.653 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:21.653 =================================================================================================================== 00:20:21.653 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:21.653 15:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 282523 00:20:21.914 15:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:21.914 15:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:20:21.914 15:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:21.914 15:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:21.914 15:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:21.914 15:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@158 -- # killprocess 276728 00:20:21.914 15:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 276728 ']' 00:20:21.914 15:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 276728 00:20:21.914 15:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:21.914 15:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:21.914 15:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 276728 00:20:21.914 15:16:13 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:21.914 15:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:21.914 15:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 276728' 00:20:21.914 killing process with pid 276728 00:20:21.914 15:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 276728 00:20:21.914 [2024-07-25 15:16:13.943860] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:21.914 15:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 276728 00:20:21.914 15:16:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:20:21.914 15:16:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:20:21.914 15:16:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:20:21.914 15:16:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:20:21.914 15:16:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:20:21.914 15:16:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:20:21.914 15:16:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:20:22.175 15:16:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:22.175 15:16:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:20:22.175 15:16:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- 
# key_long_path=/tmp/tmp.DcLMvI30P1 00:20:22.175 15:16:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:22.175 15:16:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.DcLMvI30P1 00:20:22.175 15:16:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:20:22.175 15:16:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:22.175 15:16:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:22.175 15:16:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:22.175 15:16:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=282829 00:20:22.175 15:16:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 282829 00:20:22.175 15:16:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:22.175 15:16:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 282829 ']' 00:20:22.175 15:16:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:22.175 15:16:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:22.176 15:16:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:22.176 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
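The `format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2` step above builds the `NVMeTLSkey-1:02:…:` string via an inline `python -` heredoc. A minimal sketch of what that formatting does, based on the NVMe TLS PSK interchange layout (prefix, hash identifier, base64 of the key material with a CRC-32 appended — the little-endian CRC byte order is an assumption here, and the function name mirrors the shell helper rather than any SPDK Python API):

```python
import base64
import binascii

def format_interchange_psk(psk: bytes, hash_id: int) -> str:
    """Render key material in the NVMe/TCP TLS PSK interchange format:
    'NVMeTLSkey-1:<hh>:<base64(PSK || CRC32(PSK))>:'.
    hash_id 1 selects SHA-256, 2 selects SHA-384 (the log passes 2)."""
    # CRC-32 of the PSK bytes, appended before base64 encoding
    # (little-endian packing is an assumption based on the key layout).
    crc = binascii.crc32(psk).to_bytes(4, "little")
    payload = base64.b64encode(psk + crc).decode("ascii")
    return f"NVMeTLSkey-1:{hash_id:02d}:{payload}:"

# The test passes the 48-character hex string itself as key material,
# so the PSK bytes are that ASCII string (48 bytes, the SHA-384 PSK size).
psk = b"00112233445566778899aabbccddeeff0011223344556677"
print(format_interchange_psk(psk, 2))
```

The first 64 base64 characters of the result encode exactly those 48 ASCII bytes, which is why the log's `key_long` begins `NVMeTLSkey-1:02:MDAxMTIyMzM0…` — base64 of `"001122…"`.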
00:20:22.176 15:16:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:22.176 15:16:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:22.176 [2024-07-25 15:16:14.177112] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:20:22.176 [2024-07-25 15:16:14.177163] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:22.176 EAL: No free 2048 kB hugepages reported on node 1 00:20:22.176 [2024-07-25 15:16:14.260580] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:22.176 [2024-07-25 15:16:14.317704] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:22.176 [2024-07-25 15:16:14.317741] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:22.176 [2024-07-25 15:16:14.317747] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:22.176 [2024-07-25 15:16:14.317752] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:22.176 [2024-07-25 15:16:14.317756] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:22.176 [2024-07-25 15:16:14.317772] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:23.118 15:16:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:23.118 15:16:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:23.118 15:16:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:23.118 15:16:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:23.118 15:16:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:23.118 15:16:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:23.118 15:16:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.DcLMvI30P1 00:20:23.118 15:16:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.DcLMvI30P1 00:20:23.118 15:16:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:23.118 [2024-07-25 15:16:15.124025] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:23.118 15:16:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:23.118 15:16:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:23.379 [2024-07-25 15:16:15.436794] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:23.379 [2024-07-25 15:16:15.436990] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:20:23.379 15:16:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:23.640 malloc0 00:20:23.640 15:16:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:23.640 15:16:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.DcLMvI30P1 00:20:23.901 [2024-07-25 15:16:15.891895] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:23.901 15:16:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.DcLMvI30P1 00:20:23.901 15:16:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:23.902 15:16:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:23.902 15:16:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:23.902 15:16:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.DcLMvI30P1' 00:20:23.902 15:16:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:23.902 15:16:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=283226 00:20:23.902 15:16:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:23.902 15:16:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 283226 /var/tmp/bdevperf.sock 00:20:23.902 
15:16:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:23.902 15:16:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 283226 ']' 00:20:23.902 15:16:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:23.902 15:16:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:23.902 15:16:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:23.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:23.902 15:16:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:23.902 15:16:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:23.902 [2024-07-25 15:16:15.964828] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:20:23.902 [2024-07-25 15:16:15.964888] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid283226 ] 00:20:23.902 EAL: No free 2048 kB hugepages reported on node 1 00:20:23.902 [2024-07-25 15:16:16.015509] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:23.902 [2024-07-25 15:16:16.067616] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:24.845 15:16:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:24.845 15:16:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:24.845 15:16:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.DcLMvI30P1 00:20:24.845 [2024-07-25 15:16:16.860394] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:24.845 [2024-07-25 15:16:16.860452] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:24.845 TLSTESTn1 00:20:24.845 15:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:25.106 Running I/O for 10 seconds... 
00:20:35.110 00:20:35.110 Latency(us) 00:20:35.110 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:35.110 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:35.110 Verification LBA range: start 0x0 length 0x2000 00:20:35.110 TLSTESTn1 : 10.07 2006.85 7.84 0.00 0.00 63564.84 6171.31 135441.07 00:20:35.110 =================================================================================================================== 00:20:35.110 Total : 2006.85 7.84 0.00 0.00 63564.84 6171.31 135441.07 00:20:35.110 0 00:20:35.110 15:16:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:35.110 15:16:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # killprocess 283226 00:20:35.110 15:16:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 283226 ']' 00:20:35.110 15:16:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 283226 00:20:35.110 15:16:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:35.110 15:16:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:35.110 15:16:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 283226 00:20:35.110 15:16:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:20:35.110 15:16:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:20:35.110 15:16:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 283226' 00:20:35.110 killing process with pid 283226 00:20:35.110 15:16:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 283226 00:20:35.110 Received shutdown signal, test time was about 10.000000 seconds 00:20:35.110 
00:20:35.110 Latency(us) 00:20:35.110 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:35.110 =================================================================================================================== 00:20:35.110 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:35.110 [2024-07-25 15:16:27.259116] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:35.110 15:16:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 283226 00:20:35.370 15:16:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.DcLMvI30P1 00:20:35.371 15:16:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.DcLMvI30P1 00:20:35.371 15:16:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:20:35.371 15:16:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.DcLMvI30P1 00:20:35.371 15:16:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:20:35.371 15:16:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:35.371 15:16:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:20:35.371 15:16:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:35.371 15:16:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.DcLMvI30P1 00:20:35.371 15:16:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:35.371 15:16:27 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:35.371 15:16:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:35.371 15:16:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.DcLMvI30P1' 00:20:35.371 15:16:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:35.371 15:16:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=285265 00:20:35.371 15:16:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:35.371 15:16:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 285265 /var/tmp/bdevperf.sock 00:20:35.371 15:16:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:35.371 15:16:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 285265 ']' 00:20:35.371 15:16:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:35.371 15:16:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:35.371 15:16:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:35.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:20:35.371 15:16:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:35.371 15:16:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:35.371 [2024-07-25 15:16:27.426897] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:20:35.371 [2024-07-25 15:16:27.426966] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid285265 ] 00:20:35.371 EAL: No free 2048 kB hugepages reported on node 1 00:20:35.371 [2024-07-25 15:16:27.483730] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:35.371 [2024-07-25 15:16:27.535095] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:36.313 15:16:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:36.313 15:16:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:36.313 15:16:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.DcLMvI30P1 00:20:36.313 [2024-07-25 15:16:28.347993] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:36.313 [2024-07-25 15:16:28.348038] bdev_nvme.c:6153:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:20:36.313 [2024-07-25 15:16:28.348043] bdev_nvme.c:6258:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.DcLMvI30P1 00:20:36.313 request: 00:20:36.313 { 00:20:36.313 "name": "TLSTEST", 00:20:36.313 "trtype": "tcp", 00:20:36.313 "traddr": "10.0.0.2", 00:20:36.313 
"adrfam": "ipv4", 00:20:36.313 "trsvcid": "4420", 00:20:36.313 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:36.313 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:36.313 "prchk_reftag": false, 00:20:36.313 "prchk_guard": false, 00:20:36.313 "hdgst": false, 00:20:36.313 "ddgst": false, 00:20:36.313 "psk": "/tmp/tmp.DcLMvI30P1", 00:20:36.313 "method": "bdev_nvme_attach_controller", 00:20:36.313 "req_id": 1 00:20:36.313 } 00:20:36.313 Got JSON-RPC error response 00:20:36.313 response: 00:20:36.313 { 00:20:36.313 "code": -1, 00:20:36.313 "message": "Operation not permitted" 00:20:36.313 } 00:20:36.313 15:16:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 285265 00:20:36.313 15:16:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 285265 ']' 00:20:36.313 15:16:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 285265 00:20:36.313 15:16:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:36.313 15:16:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:36.313 15:16:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 285265 00:20:36.313 15:16:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:20:36.313 15:16:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:20:36.313 15:16:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 285265' 00:20:36.313 killing process with pid 285265 00:20:36.313 15:16:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 285265 00:20:36.313 Received shutdown signal, test time was about 10.000000 seconds 00:20:36.313 00:20:36.313 Latency(us) 00:20:36.313 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
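The negative test above (`chmod 0666` at `target/tls.sh@170`, then the expected `bdev_nvme_attach_controller` failure with "Incorrect permissions for PSK file" / "Operation not permitted") hinges on the PSK file being private to its owner. A hedged sketch of that gate — the helper name is illustrative, SPDK's actual check lives in its C code, and the exact mask it applies is an assumption consistent with the log's 0600-pass/0666-fail behavior:

```python
import os
import stat
import tempfile

def psk_file_permissions_ok(path: str) -> bool:
    """Approximate the PSK-file check: reject any key file that is
    readable or writable by group or other (0600 passes, 0666 fails)."""
    mode = stat.S_IMODE(os.stat(path).st_mode)
    return (mode & 0o077) == 0

fd, path = tempfile.mkstemp()
os.close(fd)

os.chmod(path, 0o600)                  # what tls.sh does before the passing run
print(psk_file_permissions_ok(path))   # True

os.chmod(path, 0o666)                  # tls.sh@170, before the expected failure
print(psk_file_permissions_ok(path))   # False
os.remove(path)
```

This is why the earlier TLSTESTn1 run (against the 0600 key) completes its 10-second verify workload, while this run never attaches a controller.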
00:20:36.313 =================================================================================================================== 00:20:36.313 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:36.313 15:16:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 285265 00:20:36.573 15:16:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:36.573 15:16:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:20:36.573 15:16:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:36.573 15:16:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:36.573 15:16:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:36.573 15:16:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@174 -- # killprocess 282829 00:20:36.573 15:16:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 282829 ']' 00:20:36.573 15:16:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 282829 00:20:36.573 15:16:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:36.573 15:16:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:36.573 15:16:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 282829 00:20:36.573 15:16:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:36.573 15:16:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:36.573 15:16:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 282829' 00:20:36.573 killing process with pid 282829 00:20:36.573 15:16:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@969 -- # kill 282829 00:20:36.573 [2024-07-25 15:16:28.592311] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:36.573 15:16:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 282829 00:20:36.573 15:16:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:20:36.573 15:16:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:36.573 15:16:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:36.573 15:16:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:36.573 15:16:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=285605 00:20:36.573 15:16:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 285605 00:20:36.573 15:16:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:36.573 15:16:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 285605 ']' 00:20:36.573 15:16:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:36.573 15:16:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:36.573 15:16:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:36.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:36.573 15:16:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:36.573 15:16:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:36.834 [2024-07-25 15:16:28.781284] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:20:36.834 [2024-07-25 15:16:28.781361] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:36.834 EAL: No free 2048 kB hugepages reported on node 1 00:20:36.834 [2024-07-25 15:16:28.864031] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:36.834 [2024-07-25 15:16:28.921020] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:36.834 [2024-07-25 15:16:28.921056] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:36.834 [2024-07-25 15:16:28.921064] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:36.834 [2024-07-25 15:16:28.921069] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:36.834 [2024-07-25 15:16:28.921072] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:36.834 [2024-07-25 15:16:28.921087] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:20:37.406 15:16:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:20:37.406 15:16:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0
00:20:37.406 15:16:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:20:37.406 15:16:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable
00:20:37.406 15:16:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:20:37.406 15:16:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:20:37.406 15:16:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.DcLMvI30P1
00:20:37.406 15:16:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0
00:20:37.406 15:16:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.DcLMvI30P1
00:20:37.406 15:16:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt
00:20:37.406 15:16:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:20:37.406 15:16:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt
00:20:37.406 15:16:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:20:37.406 15:16:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.DcLMvI30P1
00:20:37.406 15:16:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.DcLMvI30P1
00:20:37.406 15:16:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o
00:20:37.667 [2024-07-25 15:16:29.719680] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:20:37.667 15:16:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
00:20:37.928 15:16:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
00:20:37.928 [2024-07-25 15:16:30.020424] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental
00:20:37.928 [2024-07-25 15:16:30.020610] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:20:37.928 15:16:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
00:20:38.189 malloc0
00:20:38.189 15:16:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
00:20:38.189 15:16:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.DcLMvI30P1
00:20:38.464 [2024-07-25 15:16:30.479161] tcp.c:3635:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file
00:20:38.464 [2024-07-25 15:16:30.479181] tcp.c:3721:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file
00:20:38.464 [2024-07-25 15:16:30.479203] subsystem.c:1052:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport
00:20:38.464 request:
00:20:38.464 {
00:20:38.464 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:20:38.464 "host": "nqn.2016-06.io.spdk:host1",
00:20:38.464 "psk": "/tmp/tmp.DcLMvI30P1",
00:20:38.464 "method": "nvmf_subsystem_add_host",
00:20:38.464 "req_id": 1
00:20:38.464 }
00:20:38.464 Got JSON-RPC error response
00:20:38.464 response:
00:20:38.464 {
00:20:38.464 "code": -32603,
00:20:38.464 "message": "Internal error"
00:20:38.464 }
00:20:38.464 15:16:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1
00:20:38.464 15:16:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:20:38.464 15:16:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:20:38.464 15:16:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:20:38.464 15:16:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@180 -- # killprocess 285605
00:20:38.464 15:16:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 285605 ']'
00:20:38.464 15:16:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 285605
00:20:38.464 15:16:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname
00:20:38.464 15:16:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:20:38.464 15:16:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 285605
00:20:38.464 15:16:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:20:38.464 15:16:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:20:38.464 15:16:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 285605'
killing process with pid 285605
15:16:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 285605
00:20:38.464 15:16:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 285605
00:20:38.735 15:16:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.DcLMvI30P1
00:20:38.736 15:16:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2
00:20:38.736 15:16:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:20:38.736 15:16:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable
00:20:38.736 15:16:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:20:38.736 15:16:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=285979
00:20:38.736 15:16:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 285979
00:20:38.736 15:16:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:20:38.736 15:16:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 285979 ']'
00:20:38.736 15:16:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:20:38.736 15:16:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100
00:20:38.736 15:16:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:20:38.736 15:16:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable
00:20:38.736 15:16:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:20:38.736 [2024-07-25 15:16:30.737500] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
00:20:38.736 [2024-07-25 15:16:30.737553] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:20:38.736 EAL: No free 2048 kB hugepages reported on node 1
00:20:38.736 [2024-07-25 15:16:30.816553] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:38.736 [2024-07-25 15:16:30.869034] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:20:38.736 [2024-07-25 15:16:30.869067] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:20:38.736 [2024-07-25 15:16:30.869076] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:20:38.736 [2024-07-25 15:16:30.869082] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:20:38.736 [2024-07-25 15:16:30.869086] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:20:38.736 [2024-07-25 15:16:30.869101] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:20:39.356 15:16:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:20:39.356 15:16:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0
00:20:39.356 15:16:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:20:39.356 15:16:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable
00:20:39.356 15:16:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:20:39.617 15:16:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:20:39.617 15:16:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.DcLMvI30P1
00:20:39.617 15:16:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.DcLMvI30P1
00:20:39.617 15:16:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o
00:20:39.617 [2024-07-25 15:16:31.706819] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:20:39.617 15:16:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
00:20:39.878 15:16:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
00:20:39.878 [2024-07-25 15:16:31.999535] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental
00:20:39.878 [2024-07-25 15:16:31.999722] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:20:39.878 15:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
00:20:40.138 malloc0
00:20:40.138 15:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
00:20:40.138 15:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.DcLMvI30P1
00:20:40.399 [2024-07-25 15:16:32.422339] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09
00:20:40.399 15:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10
00:20:40.399 15:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=286335
00:20:40.399 15:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:20:40.399 15:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 286335 /var/tmp/bdevperf.sock
00:20:40.399 15:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 286335 ']'
00:20:40.399 15:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:20:40.399 15:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100
00:20:40.399 15:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:20:40.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:20:40.399 15:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable
00:20:40.399 15:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:20:40.399 [2024-07-25 15:16:32.467893] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
00:20:40.399 [2024-07-25 15:16:32.467941] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid286335 ]
00:20:40.399 EAL: No free 2048 kB hugepages reported on node 1
00:20:40.399 [2024-07-25 15:16:32.517775] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:40.399 [2024-07-25 15:16:32.569910] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:20:40.661 15:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:20:40.661 15:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0
00:20:40.661 15:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.DcLMvI30P1
00:20:40.661 [2024-07-25 15:16:32.789520] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:20:40.661 [2024-07-25 15:16:32.789583] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09
00:20:40.922 TLSTESTn1
00:20:40.922 15:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config
00:20:41.183 15:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{
00:20:41.183 "subsystems": [
00:20:41.183 {
00:20:41.183 "subsystem": "keyring",
00:20:41.183 "config": []
00:20:41.183 },
00:20:41.183 {
00:20:41.183 "subsystem": "iobuf",
00:20:41.183 "config": [
00:20:41.183 {
00:20:41.183 "method": "iobuf_set_options",
00:20:41.183 "params": {
00:20:41.183 "small_pool_count": 8192,
00:20:41.183 "large_pool_count": 1024,
00:20:41.183 "small_bufsize": 8192,
00:20:41.183 "large_bufsize": 135168
00:20:41.183 }
00:20:41.183 }
00:20:41.183 ]
00:20:41.183 },
00:20:41.183 {
00:20:41.183 "subsystem": "sock",
00:20:41.183 "config": [
00:20:41.183 {
00:20:41.183 "method": "sock_set_default_impl",
00:20:41.183 "params": {
00:20:41.183 "impl_name": "posix"
00:20:41.183 }
00:20:41.183 },
00:20:41.183 {
00:20:41.183 "method": "sock_impl_set_options",
00:20:41.183 "params": {
00:20:41.183 "impl_name": "ssl",
00:20:41.183 "recv_buf_size": 4096,
00:20:41.183 "send_buf_size": 4096,
00:20:41.183 "enable_recv_pipe": true,
00:20:41.183 "enable_quickack": false,
00:20:41.183 "enable_placement_id": 0,
00:20:41.183 "enable_zerocopy_send_server": true,
00:20:41.183 "enable_zerocopy_send_client": false,
00:20:41.183 "zerocopy_threshold": 0,
00:20:41.183 "tls_version": 0,
00:20:41.183 "enable_ktls": false
00:20:41.183 }
00:20:41.183 },
00:20:41.183 {
00:20:41.183 "method": "sock_impl_set_options",
00:20:41.183 "params": {
00:20:41.183 "impl_name": "posix",
00:20:41.183 "recv_buf_size": 2097152,
00:20:41.183 "send_buf_size": 2097152,
00:20:41.183 "enable_recv_pipe": true,
00:20:41.183 "enable_quickack": false,
00:20:41.183 "enable_placement_id": 0,
00:20:41.183 "enable_zerocopy_send_server": true,
00:20:41.183 "enable_zerocopy_send_client": false,
00:20:41.183 "zerocopy_threshold": 0,
00:20:41.183 "tls_version": 0,
00:20:41.183 "enable_ktls": false
00:20:41.183 }
00:20:41.183 }
00:20:41.183 ]
00:20:41.183 },
00:20:41.183 {
00:20:41.183 "subsystem": "vmd",
00:20:41.183 "config": []
00:20:41.184 },
00:20:41.184 {
00:20:41.184 "subsystem": "accel",
00:20:41.184 "config": [
00:20:41.184 {
00:20:41.184 "method": "accel_set_options",
00:20:41.184 "params": {
00:20:41.184 "small_cache_size": 128,
00:20:41.184 "large_cache_size": 16,
00:20:41.184 "task_count": 2048,
00:20:41.184 "sequence_count": 2048,
00:20:41.184 "buf_count": 2048
00:20:41.184 }
00:20:41.184 }
00:20:41.184 ]
00:20:41.184 },
00:20:41.184 {
00:20:41.184 "subsystem": "bdev",
00:20:41.184 "config": [
00:20:41.184 {
00:20:41.184 "method": "bdev_set_options",
00:20:41.184 "params": {
00:20:41.184 "bdev_io_pool_size": 65535,
00:20:41.184 "bdev_io_cache_size": 256,
00:20:41.184 "bdev_auto_examine": true,
00:20:41.184 "iobuf_small_cache_size": 128,
00:20:41.184 "iobuf_large_cache_size": 16
00:20:41.184 }
00:20:41.184 },
00:20:41.184 {
00:20:41.184 "method": "bdev_raid_set_options",
00:20:41.184 "params": {
00:20:41.184 "process_window_size_kb": 1024,
00:20:41.184 "process_max_bandwidth_mb_sec": 0
00:20:41.184 }
00:20:41.184 },
00:20:41.184 {
00:20:41.184 "method": "bdev_iscsi_set_options",
00:20:41.184 "params": {
00:20:41.184 "timeout_sec": 30
00:20:41.184 }
00:20:41.184 },
00:20:41.184 {
00:20:41.184 "method": "bdev_nvme_set_options",
00:20:41.184 "params": {
00:20:41.184 "action_on_timeout": "none",
00:20:41.184 "timeout_us": 0,
00:20:41.184 "timeout_admin_us": 0,
00:20:41.184 "keep_alive_timeout_ms": 10000,
00:20:41.184 "arbitration_burst": 0,
00:20:41.184 "low_priority_weight": 0,
00:20:41.184 "medium_priority_weight": 0,
00:20:41.184 "high_priority_weight": 0,
00:20:41.184 "nvme_adminq_poll_period_us": 10000,
00:20:41.184 "nvme_ioq_poll_period_us": 0,
00:20:41.184 "io_queue_requests": 0,
00:20:41.184 "delay_cmd_submit": true,
00:20:41.184 "transport_retry_count": 4,
00:20:41.184 "bdev_retry_count": 3,
00:20:41.184 "transport_ack_timeout": 0,
00:20:41.184 "ctrlr_loss_timeout_sec": 0,
00:20:41.184 "reconnect_delay_sec": 0,
00:20:41.184 "fast_io_fail_timeout_sec": 0,
00:20:41.184 "disable_auto_failback": false,
00:20:41.184 "generate_uuids": false,
00:20:41.184 "transport_tos": 0,
00:20:41.184 "nvme_error_stat": false,
00:20:41.184 "rdma_srq_size": 0,
00:20:41.184 "io_path_stat": false,
00:20:41.184 "allow_accel_sequence": false,
00:20:41.184 "rdma_max_cq_size": 0,
00:20:41.184 "rdma_cm_event_timeout_ms": 0,
00:20:41.184 "dhchap_digests": [
00:20:41.184 "sha256",
00:20:41.184 "sha384",
00:20:41.184 "sha512"
00:20:41.184 ],
00:20:41.184 "dhchap_dhgroups": [
00:20:41.184 "null",
00:20:41.184 "ffdhe2048",
00:20:41.184 "ffdhe3072",
00:20:41.184 "ffdhe4096",
00:20:41.184 "ffdhe6144",
00:20:41.184 "ffdhe8192"
00:20:41.184 ]
00:20:41.184 }
00:20:41.184 },
00:20:41.184 {
00:20:41.184 "method": "bdev_nvme_set_hotplug",
00:20:41.184 "params": {
00:20:41.184 "period_us": 100000,
00:20:41.184 "enable": false
00:20:41.184 }
00:20:41.184 },
00:20:41.184 {
00:20:41.184 "method": "bdev_malloc_create",
00:20:41.184 "params": {
00:20:41.184 "name": "malloc0",
00:20:41.184 "num_blocks": 8192,
00:20:41.184 "block_size": 4096,
00:20:41.184 "physical_block_size": 4096,
00:20:41.184 "uuid": "8451b4d4-6644-4a6e-8c87-3b425cb04d67",
00:20:41.184 "optimal_io_boundary": 0,
00:20:41.184 "md_size": 0,
00:20:41.184 "dif_type": 0,
00:20:41.184 "dif_is_head_of_md": false,
00:20:41.184 "dif_pi_format": 0
00:20:41.184 }
00:20:41.184 },
00:20:41.184 {
00:20:41.184 "method": "bdev_wait_for_examine"
00:20:41.184 }
00:20:41.184 ]
00:20:41.184 },
00:20:41.184 {
00:20:41.184 "subsystem": "nbd",
00:20:41.184 "config": []
00:20:41.184 },
00:20:41.184 {
00:20:41.184 "subsystem": "scheduler",
00:20:41.184 "config": [
00:20:41.184 {
00:20:41.184 "method": "framework_set_scheduler",
00:20:41.184 "params": {
00:20:41.184 "name": "static"
00:20:41.184 }
00:20:41.184 }
00:20:41.184 ]
00:20:41.184 },
00:20:41.184 {
00:20:41.184 "subsystem": "nvmf",
00:20:41.184 "config": [
00:20:41.184 {
00:20:41.184 "method": "nvmf_set_config",
00:20:41.184 "params": {
00:20:41.184 "discovery_filter": "match_any",
00:20:41.184 "admin_cmd_passthru": {
00:20:41.184 "identify_ctrlr": false
00:20:41.184 }
00:20:41.184 }
00:20:41.184 },
00:20:41.184 {
00:20:41.184 "method": "nvmf_set_max_subsystems",
00:20:41.184 "params": {
00:20:41.184 "max_subsystems": 1024
00:20:41.184 }
00:20:41.184 },
00:20:41.184 {
00:20:41.184 "method": "nvmf_set_crdt",
00:20:41.184 "params": {
00:20:41.184 "crdt1": 0,
00:20:41.184 "crdt2": 0,
00:20:41.184 "crdt3": 0
00:20:41.184 }
00:20:41.184 },
00:20:41.184 {
00:20:41.184 "method": "nvmf_create_transport",
00:20:41.184 "params": {
00:20:41.184 "trtype": "TCP",
00:20:41.184 "max_queue_depth": 128,
00:20:41.184 "max_io_qpairs_per_ctrlr": 127,
00:20:41.184 "in_capsule_data_size": 4096,
00:20:41.184 "max_io_size": 131072,
00:20:41.184 "io_unit_size": 131072,
00:20:41.184 "max_aq_depth": 128,
00:20:41.184 "num_shared_buffers": 511,
00:20:41.184 "buf_cache_size": 4294967295,
00:20:41.184 "dif_insert_or_strip": false,
00:20:41.184 "zcopy": false,
00:20:41.184 "c2h_success": false,
00:20:41.184 "sock_priority": 0,
00:20:41.184 "abort_timeout_sec": 1,
00:20:41.184 "ack_timeout": 0,
00:20:41.184 "data_wr_pool_size": 0
00:20:41.184 }
00:20:41.184 },
00:20:41.184 {
00:20:41.184 "method": "nvmf_create_subsystem",
00:20:41.184 "params": {
00:20:41.184 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:20:41.184 "allow_any_host": false,
00:20:41.184 "serial_number": "SPDK00000000000001",
00:20:41.184 "model_number": "SPDK bdev Controller",
00:20:41.184 "max_namespaces": 10,
00:20:41.184 "min_cntlid": 1,
00:20:41.184 "max_cntlid": 65519,
00:20:41.184 "ana_reporting": false
00:20:41.184 }
00:20:41.184 },
00:20:41.184 {
00:20:41.184 "method": "nvmf_subsystem_add_host",
00:20:41.184 "params": {
00:20:41.184 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:20:41.184 "host": "nqn.2016-06.io.spdk:host1",
00:20:41.184 "psk": "/tmp/tmp.DcLMvI30P1"
00:20:41.184 }
00:20:41.184 },
00:20:41.184 {
00:20:41.184 "method": "nvmf_subsystem_add_ns",
00:20:41.184 "params": {
00:20:41.184 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:20:41.184 "namespace": {
00:20:41.184 "nsid": 1,
00:20:41.184 "bdev_name": "malloc0",
00:20:41.184 "nguid": "8451B4D466444A6E8C873B425CB04D67",
00:20:41.184 "uuid": "8451b4d4-6644-4a6e-8c87-3b425cb04d67",
00:20:41.184 "no_auto_visible": false
00:20:41.184 }
00:20:41.184 }
00:20:41.184 },
00:20:41.184 {
00:20:41.184 "method": "nvmf_subsystem_add_listener",
00:20:41.184 "params": {
00:20:41.184 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:20:41.184 "listen_address": {
00:20:41.184 "trtype": "TCP",
00:20:41.185 "adrfam": "IPv4",
00:20:41.185 "traddr": "10.0.0.2",
00:20:41.185 "trsvcid": "4420"
00:20:41.185 },
00:20:41.185 "secure_channel": true
00:20:41.185 }
00:20:41.185 }
00:20:41.185 ]
00:20:41.185 }
00:20:41.185 ]
00:20:41.185 }'
00:20:41.185 15:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config
00:20:41.445 15:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{
00:20:41.445 "subsystems": [
00:20:41.445 {
00:20:41.445 "subsystem": "keyring",
00:20:41.445 "config": []
00:20:41.445 },
00:20:41.445 {
00:20:41.445 "subsystem": "iobuf",
00:20:41.445 "config": [
00:20:41.445 {
00:20:41.445 "method": "iobuf_set_options",
00:20:41.445 "params": {
00:20:41.445 "small_pool_count": 8192,
00:20:41.445 "large_pool_count": 1024,
00:20:41.445 "small_bufsize": 8192,
00:20:41.445 "large_bufsize": 135168
00:20:41.445 }
00:20:41.445 }
00:20:41.445 ]
00:20:41.445 },
00:20:41.445 {
00:20:41.445 "subsystem": "sock",
00:20:41.445 "config": [
00:20:41.445 {
00:20:41.445 "method": "sock_set_default_impl",
00:20:41.445 "params": {
00:20:41.445 "impl_name": "posix"
00:20:41.445 }
00:20:41.445 },
00:20:41.445 {
00:20:41.445 "method": "sock_impl_set_options",
00:20:41.445 "params": {
00:20:41.445 "impl_name": "ssl",
00:20:41.445 "recv_buf_size": 4096,
00:20:41.445 "send_buf_size": 4096,
00:20:41.445 "enable_recv_pipe": true,
00:20:41.445 "enable_quickack": false,
00:20:41.445 "enable_placement_id": 0,
00:20:41.445 "enable_zerocopy_send_server": true,
00:20:41.445 "enable_zerocopy_send_client": false,
00:20:41.445 "zerocopy_threshold": 0,
00:20:41.445 "tls_version": 0,
00:20:41.445 "enable_ktls": false
00:20:41.445 }
00:20:41.445 },
00:20:41.445 {
00:20:41.445 "method": "sock_impl_set_options",
00:20:41.445 "params": {
00:20:41.445 "impl_name": "posix",
00:20:41.445 "recv_buf_size": 2097152,
00:20:41.445 "send_buf_size": 2097152,
00:20:41.445 "enable_recv_pipe": true,
00:20:41.445 "enable_quickack": false,
00:20:41.445 "enable_placement_id": 0,
00:20:41.445 "enable_zerocopy_send_server": true,
00:20:41.445 "enable_zerocopy_send_client": false,
00:20:41.445 "zerocopy_threshold": 0,
00:20:41.445 "tls_version": 0,
00:20:41.445 "enable_ktls": false
00:20:41.445 }
00:20:41.445 }
00:20:41.445 ]
00:20:41.445 },
00:20:41.445 {
00:20:41.445 "subsystem": "vmd",
00:20:41.445 "config": []
00:20:41.445 },
00:20:41.445 {
00:20:41.445 "subsystem": "accel",
00:20:41.445 "config": [
00:20:41.445 {
00:20:41.445 "method": "accel_set_options",
00:20:41.445 "params": {
00:20:41.445 "small_cache_size": 128,
00:20:41.445 "large_cache_size": 16,
00:20:41.445 "task_count": 2048,
00:20:41.445 "sequence_count": 2048,
00:20:41.445 "buf_count": 2048
00:20:41.445 }
00:20:41.445 }
00:20:41.445 ]
00:20:41.445 },
00:20:41.445 {
00:20:41.445 "subsystem": "bdev",
00:20:41.445 "config": [
00:20:41.445 {
00:20:41.445 "method": "bdev_set_options",
00:20:41.445 "params": {
00:20:41.445 "bdev_io_pool_size": 65535,
00:20:41.445 "bdev_io_cache_size": 256,
00:20:41.445 "bdev_auto_examine": true,
00:20:41.445 "iobuf_small_cache_size": 128,
00:20:41.445 "iobuf_large_cache_size": 16
00:20:41.445 }
00:20:41.445 },
00:20:41.445 {
00:20:41.445 "method": "bdev_raid_set_options",
00:20:41.445 "params": {
00:20:41.445 "process_window_size_kb": 1024,
00:20:41.445 "process_max_bandwidth_mb_sec": 0
00:20:41.445 }
00:20:41.445 },
00:20:41.445 {
00:20:41.445 "method": "bdev_iscsi_set_options",
00:20:41.445 "params": {
00:20:41.445 "timeout_sec": 30
00:20:41.445 }
00:20:41.445 },
00:20:41.445 {
00:20:41.445 "method": "bdev_nvme_set_options",
00:20:41.445 "params": {
00:20:41.445 "action_on_timeout": "none",
00:20:41.445 "timeout_us": 0,
00:20:41.445 "timeout_admin_us": 0,
00:20:41.445 "keep_alive_timeout_ms": 10000,
00:20:41.445 "arbitration_burst": 0,
00:20:41.445 "low_priority_weight": 0,
00:20:41.445 "medium_priority_weight": 0,
00:20:41.445 "high_priority_weight": 0,
00:20:41.445 "nvme_adminq_poll_period_us": 10000,
00:20:41.445 "nvme_ioq_poll_period_us": 0,
00:20:41.445 "io_queue_requests": 512,
00:20:41.445 "delay_cmd_submit": true,
00:20:41.445 "transport_retry_count": 4,
00:20:41.445 "bdev_retry_count": 3,
00:20:41.445 "transport_ack_timeout": 0,
00:20:41.445 "ctrlr_loss_timeout_sec": 0,
00:20:41.445 "reconnect_delay_sec": 0,
00:20:41.445 "fast_io_fail_timeout_sec": 0,
00:20:41.445 "disable_auto_failback": false,
00:20:41.445 "generate_uuids": false,
00:20:41.445 "transport_tos": 0,
00:20:41.445 "nvme_error_stat": false,
00:20:41.445 "rdma_srq_size": 0,
00:20:41.445 "io_path_stat": false,
00:20:41.445 "allow_accel_sequence": false,
00:20:41.445 "rdma_max_cq_size": 0,
00:20:41.445 "rdma_cm_event_timeout_ms": 0,
00:20:41.445 "dhchap_digests": [
00:20:41.445 "sha256",
00:20:41.445 "sha384",
00:20:41.446 "sha512"
00:20:41.446 ],
00:20:41.446 "dhchap_dhgroups": [
00:20:41.446 "null",
00:20:41.446 "ffdhe2048",
00:20:41.446 "ffdhe3072",
00:20:41.446 "ffdhe4096",
00:20:41.446 "ffdhe6144",
00:20:41.446 "ffdhe8192"
00:20:41.446 ]
00:20:41.446 }
00:20:41.446 },
00:20:41.446 {
00:20:41.446 "method": "bdev_nvme_attach_controller",
00:20:41.446 "params": {
00:20:41.446 "name": "TLSTEST",
00:20:41.446 "trtype": "TCP",
00:20:41.446 "adrfam": "IPv4",
00:20:41.446 "traddr": "10.0.0.2",
00:20:41.446 "trsvcid": "4420",
00:20:41.446 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:20:41.446 "prchk_reftag": false,
00:20:41.446 "prchk_guard": false,
00:20:41.446 "ctrlr_loss_timeout_sec": 0,
00:20:41.446 "reconnect_delay_sec": 0,
00:20:41.446 "fast_io_fail_timeout_sec": 0,
00:20:41.446 "psk": "/tmp/tmp.DcLMvI30P1",
00:20:41.446 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:20:41.446 "hdgst": false,
00:20:41.446 "ddgst": false
00:20:41.446 }
00:20:41.446 },
00:20:41.446 {
00:20:41.446 "method": "bdev_nvme_set_hotplug",
00:20:41.446 "params": {
00:20:41.446 "period_us": 100000,
00:20:41.446 "enable": false
00:20:41.446 }
00:20:41.446 },
00:20:41.446 {
00:20:41.446 "method": "bdev_wait_for_examine"
00:20:41.446 }
00:20:41.446 ]
00:20:41.446 },
00:20:41.446 {
00:20:41.446 "subsystem": "nbd",
00:20:41.446 "config": []
00:20:41.446 }
00:20:41.446 ]
00:20:41.446 }'
00:20:41.446 15:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # killprocess 286335
00:20:41.446 15:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 286335 ']'
00:20:41.446 15:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 286335
00:20:41.446 15:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname
00:20:41.446 15:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:20:41.446 15:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 286335
00:20:41.446 15:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2
00:20:41.446 15:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']'
00:20:41.446 15:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 286335'
killing process with pid 286335
00:20:41.446 15:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 286335
Received shutdown signal, test time was about 10.000000 seconds
00:20:41.446
00:20:41.446 Latency(us)
00:20:41.446 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:41.446 ===================================================================================================================
00:20:41.446 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00
00:20:41.446 [2024-07-25 15:16:33.454756] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times
00:20:41.446 15:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 286335
00:20:41.446 15:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@200 -- # killprocess 285979
00:20:41.446 15:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 285979 ']'
00:20:41.446 15:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 285979
00:20:41.446 15:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname
00:20:41.446 15:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:20:41.446 15:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 285979
00:20:41.446 15:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:20:41.446 15:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:20:41.446 15:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 285979'
killing process with pid 285979
15:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 285979
[2024-07-25 15:16:33.623322] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times
00:20:41.446 15:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 285979
00:20:41.707 15:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62
00:20:41.707 15:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:20:41.707 15:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable
00:20:41.707 15:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:20:41.707 15:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@203 -- # echo '{
00:20:41.707 "subsystems": [
00:20:41.707 {
00:20:41.707 "subsystem": "keyring",
00:20:41.707 "config": []
00:20:41.707 },
00:20:41.707 {
00:20:41.707 "subsystem": "iobuf",
00:20:41.707 "config": [
00:20:41.707 {
00:20:41.707 "method": "iobuf_set_options",
00:20:41.707 "params": {
00:20:41.707 "small_pool_count": 8192,
00:20:41.707 "large_pool_count": 1024,
00:20:41.707 "small_bufsize": 8192,
00:20:41.707 "large_bufsize": 135168
00:20:41.707 }
00:20:41.707 }
00:20:41.707 ]
00:20:41.707 },
00:20:41.707 {
00:20:41.707 "subsystem": "sock",
00:20:41.707 "config": [
00:20:41.707 {
00:20:41.707 "method": "sock_set_default_impl",
00:20:41.707 "params": {
00:20:41.707 "impl_name": "posix"
00:20:41.707 }
00:20:41.707 },
00:20:41.707 {
00:20:41.707 "method": "sock_impl_set_options",
00:20:41.707 "params": {
00:20:41.707 "impl_name": "ssl",
00:20:41.707 "recv_buf_size": 4096,
00:20:41.707 "send_buf_size": 4096,
00:20:41.707 "enable_recv_pipe": true,
00:20:41.707 "enable_quickack": false,
00:20:41.707 "enable_placement_id": 0,
00:20:41.707 "enable_zerocopy_send_server": true,
00:20:41.707 "enable_zerocopy_send_client": false,
00:20:41.707 "zerocopy_threshold": 0,
00:20:41.707 "tls_version": 0,
00:20:41.707 "enable_ktls": false
00:20:41.707 }
00:20:41.707 },
00:20:41.707 {
00:20:41.707 "method": "sock_impl_set_options",
00:20:41.707 "params": {
00:20:41.707 "impl_name": "posix",
00:20:41.707 "recv_buf_size": 2097152,
00:20:41.707 "send_buf_size": 2097152,
00:20:41.707 "enable_recv_pipe": true,
00:20:41.707 "enable_quickack": false,
00:20:41.707 "enable_placement_id": 0,
00:20:41.707 "enable_zerocopy_send_server": true,
00:20:41.707 "enable_zerocopy_send_client": false,
00:20:41.707 "zerocopy_threshold": 0,
00:20:41.707 "tls_version": 0,
00:20:41.707 "enable_ktls": false
00:20:41.707 }
00:20:41.707 }
00:20:41.707 ]
00:20:41.707 },
00:20:41.707 {
00:20:41.707 "subsystem": "vmd",
00:20:41.707 "config": []
00:20:41.707 },
00:20:41.707 {
00:20:41.707 "subsystem": "accel",
00:20:41.707 "config": [
00:20:41.707 {
00:20:41.707 "method": "accel_set_options",
00:20:41.707 "params": {
00:20:41.707 "small_cache_size": 128,
00:20:41.707 "large_cache_size": 16,
00:20:41.707 "task_count": 2048,
00:20:41.707 "sequence_count": 2048,
00:20:41.707 "buf_count": 2048
00:20:41.707 }
00:20:41.707 }
00:20:41.707 ]
00:20:41.707 },
00:20:41.707 {
00:20:41.707 "subsystem": "bdev",
00:20:41.707 "config": [
00:20:41.707 {
00:20:41.707 "method": "bdev_set_options",
00:20:41.707 "params": {
00:20:41.707 "bdev_io_pool_size": 65535,
00:20:41.707 "bdev_io_cache_size": 256,
00:20:41.707 "bdev_auto_examine": true,
00:20:41.707 "iobuf_small_cache_size": 128,
00:20:41.707 "iobuf_large_cache_size": 16
00:20:41.707 }
00:20:41.707 },
00:20:41.707 {
00:20:41.707 "method": "bdev_raid_set_options",
00:20:41.707 "params": {
00:20:41.707 "process_window_size_kb": 1024,
00:20:41.708 "process_max_bandwidth_mb_sec": 0
00:20:41.708 }
00:20:41.708 },
00:20:41.708 {
00:20:41.708 "method": "bdev_iscsi_set_options",
00:20:41.708 "params": {
00:20:41.708 "timeout_sec": 30
00:20:41.708 }
00:20:41.708 },
00:20:41.708 {
00:20:41.708 "method": "bdev_nvme_set_options",
00:20:41.708 "params": { 00:20:41.708 "action_on_timeout": "none", 00:20:41.708 "timeout_us": 0, 00:20:41.708 "timeout_admin_us": 0, 00:20:41.708 "keep_alive_timeout_ms": 10000, 00:20:41.708 "arbitration_burst": 0, 00:20:41.708 "low_priority_weight": 0, 00:20:41.708 "medium_priority_weight": 0, 00:20:41.708 "high_priority_weight": 0, 00:20:41.708 "nvme_adminq_poll_period_us": 10000, 00:20:41.708 "nvme_ioq_poll_period_us": 0, 00:20:41.708 "io_queue_requests": 0, 00:20:41.708 "delay_cmd_submit": true, 00:20:41.708 "transport_retry_count": 4, 00:20:41.708 "bdev_retry_count": 3, 00:20:41.708 "transport_ack_timeout": 0, 00:20:41.708 "ctrlr_loss_timeout_sec": 0, 00:20:41.708 "reconnect_delay_sec": 0, 00:20:41.708 "fast_io_fail_timeout_sec": 0, 00:20:41.708 "disable_auto_failback": false, 00:20:41.708 "generate_uuids": false, 00:20:41.708 "transport_tos": 0, 00:20:41.708 "nvme_error_stat": false, 00:20:41.708 "rdma_srq_size": 0, 00:20:41.708 "io_path_stat": false, 00:20:41.708 "allow_accel_sequence": false, 00:20:41.708 "rdma_max_cq_size": 0, 00:20:41.708 "rdma_cm_event_timeout_ms": 0, 00:20:41.708 "dhchap_digests": [ 00:20:41.708 "sha256", 00:20:41.708 "sha384", 00:20:41.708 "sha512" 00:20:41.708 ], 00:20:41.708 "dhchap_dhgroups": [ 00:20:41.708 "null", 00:20:41.708 "ffdhe2048", 00:20:41.708 "ffdhe3072", 00:20:41.708 "ffdhe4096", 00:20:41.708 "ffdhe6144", 00:20:41.708 "ffdhe8192" 00:20:41.708 ] 00:20:41.708 } 00:20:41.708 }, 00:20:41.708 { 00:20:41.708 "method": "bdev_nvme_set_hotplug", 00:20:41.708 "params": { 00:20:41.708 "period_us": 100000, 00:20:41.708 "enable": false 00:20:41.708 } 00:20:41.708 }, 00:20:41.708 { 00:20:41.708 "method": "bdev_malloc_create", 00:20:41.708 "params": { 00:20:41.708 "name": "malloc0", 00:20:41.708 "num_blocks": 8192, 00:20:41.708 "block_size": 4096, 00:20:41.708 "physical_block_size": 4096, 00:20:41.708 "uuid": "8451b4d4-6644-4a6e-8c87-3b425cb04d67", 00:20:41.708 "optimal_io_boundary": 0, 00:20:41.708 "md_size": 0, 00:20:41.708 
"dif_type": 0, 00:20:41.708 "dif_is_head_of_md": false, 00:20:41.708 "dif_pi_format": 0 00:20:41.708 } 00:20:41.708 }, 00:20:41.708 { 00:20:41.708 "method": "bdev_wait_for_examine" 00:20:41.708 } 00:20:41.708 ] 00:20:41.708 }, 00:20:41.708 { 00:20:41.708 "subsystem": "nbd", 00:20:41.708 "config": [] 00:20:41.708 }, 00:20:41.708 { 00:20:41.708 "subsystem": "scheduler", 00:20:41.708 "config": [ 00:20:41.708 { 00:20:41.708 "method": "framework_set_scheduler", 00:20:41.708 "params": { 00:20:41.708 "name": "static" 00:20:41.708 } 00:20:41.708 } 00:20:41.708 ] 00:20:41.708 }, 00:20:41.708 { 00:20:41.708 "subsystem": "nvmf", 00:20:41.708 "config": [ 00:20:41.708 { 00:20:41.708 "method": "nvmf_set_config", 00:20:41.708 "params": { 00:20:41.708 "discovery_filter": "match_any", 00:20:41.708 "admin_cmd_passthru": { 00:20:41.708 "identify_ctrlr": false 00:20:41.708 } 00:20:41.708 } 00:20:41.708 }, 00:20:41.708 { 00:20:41.708 "method": "nvmf_set_max_subsystems", 00:20:41.708 "params": { 00:20:41.708 "max_subsystems": 1024 00:20:41.708 } 00:20:41.708 }, 00:20:41.708 { 00:20:41.708 "method": "nvmf_set_crdt", 00:20:41.708 "params": { 00:20:41.708 "crdt1": 0, 00:20:41.708 "crdt2": 0, 00:20:41.708 "crdt3": 0 00:20:41.708 } 00:20:41.708 }, 00:20:41.708 { 00:20:41.708 "method": "nvmf_create_transport", 00:20:41.708 "params": { 00:20:41.708 "trtype": "TCP", 00:20:41.708 "max_queue_depth": 128, 00:20:41.708 "max_io_qpairs_per_ctrlr": 127, 00:20:41.708 "in_capsule_data_size": 4096, 00:20:41.708 "max_io_size": 131072, 00:20:41.708 "io_unit_size": 131072, 00:20:41.708 "max_aq_depth": 128, 00:20:41.708 "num_shared_buffers": 511, 00:20:41.708 "buf_cache_size": 4294967295, 00:20:41.708 "dif_insert_or_strip": false, 00:20:41.708 "zcopy": false, 00:20:41.708 "c2h_success": false, 00:20:41.708 "sock_priority": 0, 00:20:41.708 "abort_timeout_sec": 1, 00:20:41.708 "ack_timeout": 0, 00:20:41.708 "data_wr_pool_size": 0 00:20:41.708 } 00:20:41.708 }, 00:20:41.708 { 00:20:41.708 "method": 
"nvmf_create_subsystem", 00:20:41.708 "params": { 00:20:41.708 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:41.708 "allow_any_host": false, 00:20:41.708 "serial_number": "SPDK00000000000001", 00:20:41.708 "model_number": "SPDK bdev Controller", 00:20:41.708 "max_namespaces": 10, 00:20:41.708 "min_cntlid": 1, 00:20:41.708 "max_cntlid": 65519, 00:20:41.708 "ana_reporting": false 00:20:41.708 } 00:20:41.708 }, 00:20:41.708 { 00:20:41.708 "method": "nvmf_subsystem_add_host", 00:20:41.708 "params": { 00:20:41.708 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:41.708 "host": "nqn.2016-06.io.spdk:host1", 00:20:41.708 "psk": "/tmp/tmp.DcLMvI30P1" 00:20:41.708 } 00:20:41.708 }, 00:20:41.708 { 00:20:41.708 "method": "nvmf_subsystem_add_ns", 00:20:41.708 "params": { 00:20:41.708 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:41.708 "namespace": { 00:20:41.708 "nsid": 1, 00:20:41.708 "bdev_name": "malloc0", 00:20:41.708 "nguid": "8451B4D466444A6E8C873B425CB04D67", 00:20:41.708 "uuid": "8451b4d4-6644-4a6e-8c87-3b425cb04d67", 00:20:41.708 "no_auto_visible": false 00:20:41.708 } 00:20:41.708 } 00:20:41.708 }, 00:20:41.708 { 00:20:41.708 "method": "nvmf_subsystem_add_listener", 00:20:41.708 "params": { 00:20:41.708 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:41.708 "listen_address": { 00:20:41.708 "trtype": "TCP", 00:20:41.708 "adrfam": "IPv4", 00:20:41.708 "traddr": "10.0.0.2", 00:20:41.708 "trsvcid": "4420" 00:20:41.708 }, 00:20:41.708 "secure_channel": true 00:20:41.708 } 00:20:41.708 } 00:20:41.708 ] 00:20:41.708 } 00:20:41.708 ] 00:20:41.708 }' 00:20:41.708 15:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=286688 00:20:41.708 15:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 286688 00:20:41.708 15:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:20:41.708 15:16:33 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 286688 ']' 00:20:41.708 15:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:41.708 15:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:41.708 15:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:41.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:41.708 15:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:41.708 15:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:41.708 [2024-07-25 15:16:33.802102] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:20:41.708 [2024-07-25 15:16:33.802157] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:41.708 EAL: No free 2048 kB hugepages reported on node 1 00:20:41.708 [2024-07-25 15:16:33.882370] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:41.969 [2024-07-25 15:16:33.935307] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:41.969 [2024-07-25 15:16:33.935340] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:41.969 [2024-07-25 15:16:33.935345] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:41.969 [2024-07-25 15:16:33.935353] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:20:41.969 [2024-07-25 15:16:33.935357] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:41.969 [2024-07-25 15:16:33.935401] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:41.969 [2024-07-25 15:16:34.118286] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:41.969 [2024-07-25 15:16:34.143061] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:41.969 [2024-07-25 15:16:34.159109] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:41.969 [2024-07-25 15:16:34.159301] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:42.542 15:16:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:42.542 15:16:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:42.542 15:16:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:42.542 15:16:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:42.542 15:16:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:42.542 15:16:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:42.542 15:16:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=286872 00:20:42.542 15:16:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 286872 /var/tmp/bdevperf.sock 00:20:42.542 15:16:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 286872 ']' 00:20:42.542 15:16:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:42.542 15:16:34 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:42.542 15:16:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:42.542 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:42.542 15:16:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:20:42.542 15:16:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:42.542 15:16:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:42.542 15:16:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:20:42.542 "subsystems": [ 00:20:42.542 { 00:20:42.542 "subsystem": "keyring", 00:20:42.542 "config": [] 00:20:42.542 }, 00:20:42.542 { 00:20:42.542 "subsystem": "iobuf", 00:20:42.542 "config": [ 00:20:42.542 { 00:20:42.542 "method": "iobuf_set_options", 00:20:42.542 "params": { 00:20:42.542 "small_pool_count": 8192, 00:20:42.542 "large_pool_count": 1024, 00:20:42.542 "small_bufsize": 8192, 00:20:42.542 "large_bufsize": 135168 00:20:42.542 } 00:20:42.542 } 00:20:42.542 ] 00:20:42.542 }, 00:20:42.542 { 00:20:42.542 "subsystem": "sock", 00:20:42.542 "config": [ 00:20:42.542 { 00:20:42.542 "method": "sock_set_default_impl", 00:20:42.542 "params": { 00:20:42.542 "impl_name": "posix" 00:20:42.542 } 00:20:42.542 }, 00:20:42.542 { 00:20:42.542 "method": "sock_impl_set_options", 00:20:42.542 "params": { 00:20:42.542 "impl_name": "ssl", 00:20:42.542 "recv_buf_size": 4096, 00:20:42.542 "send_buf_size": 4096, 00:20:42.542 "enable_recv_pipe": true, 00:20:42.542 "enable_quickack": false, 00:20:42.542 "enable_placement_id": 0, 00:20:42.542 
"enable_zerocopy_send_server": true, 00:20:42.542 "enable_zerocopy_send_client": false, 00:20:42.542 "zerocopy_threshold": 0, 00:20:42.542 "tls_version": 0, 00:20:42.542 "enable_ktls": false 00:20:42.542 } 00:20:42.542 }, 00:20:42.542 { 00:20:42.542 "method": "sock_impl_set_options", 00:20:42.542 "params": { 00:20:42.542 "impl_name": "posix", 00:20:42.542 "recv_buf_size": 2097152, 00:20:42.542 "send_buf_size": 2097152, 00:20:42.542 "enable_recv_pipe": true, 00:20:42.542 "enable_quickack": false, 00:20:42.542 "enable_placement_id": 0, 00:20:42.542 "enable_zerocopy_send_server": true, 00:20:42.542 "enable_zerocopy_send_client": false, 00:20:42.542 "zerocopy_threshold": 0, 00:20:42.543 "tls_version": 0, 00:20:42.543 "enable_ktls": false 00:20:42.543 } 00:20:42.543 } 00:20:42.543 ] 00:20:42.543 }, 00:20:42.543 { 00:20:42.543 "subsystem": "vmd", 00:20:42.543 "config": [] 00:20:42.543 }, 00:20:42.543 { 00:20:42.543 "subsystem": "accel", 00:20:42.543 "config": [ 00:20:42.543 { 00:20:42.543 "method": "accel_set_options", 00:20:42.543 "params": { 00:20:42.543 "small_cache_size": 128, 00:20:42.543 "large_cache_size": 16, 00:20:42.543 "task_count": 2048, 00:20:42.543 "sequence_count": 2048, 00:20:42.543 "buf_count": 2048 00:20:42.543 } 00:20:42.543 } 00:20:42.543 ] 00:20:42.543 }, 00:20:42.543 { 00:20:42.543 "subsystem": "bdev", 00:20:42.543 "config": [ 00:20:42.543 { 00:20:42.543 "method": "bdev_set_options", 00:20:42.543 "params": { 00:20:42.543 "bdev_io_pool_size": 65535, 00:20:42.543 "bdev_io_cache_size": 256, 00:20:42.543 "bdev_auto_examine": true, 00:20:42.543 "iobuf_small_cache_size": 128, 00:20:42.543 "iobuf_large_cache_size": 16 00:20:42.543 } 00:20:42.543 }, 00:20:42.543 { 00:20:42.543 "method": "bdev_raid_set_options", 00:20:42.543 "params": { 00:20:42.543 "process_window_size_kb": 1024, 00:20:42.543 "process_max_bandwidth_mb_sec": 0 00:20:42.543 } 00:20:42.543 }, 00:20:42.543 { 00:20:42.543 "method": "bdev_iscsi_set_options", 00:20:42.543 "params": { 00:20:42.543 
"timeout_sec": 30 00:20:42.543 } 00:20:42.543 }, 00:20:42.543 { 00:20:42.543 "method": "bdev_nvme_set_options", 00:20:42.543 "params": { 00:20:42.543 "action_on_timeout": "none", 00:20:42.543 "timeout_us": 0, 00:20:42.543 "timeout_admin_us": 0, 00:20:42.543 "keep_alive_timeout_ms": 10000, 00:20:42.543 "arbitration_burst": 0, 00:20:42.543 "low_priority_weight": 0, 00:20:42.543 "medium_priority_weight": 0, 00:20:42.543 "high_priority_weight": 0, 00:20:42.543 "nvme_adminq_poll_period_us": 10000, 00:20:42.543 "nvme_ioq_poll_period_us": 0, 00:20:42.543 "io_queue_requests": 512, 00:20:42.543 "delay_cmd_submit": true, 00:20:42.543 "transport_retry_count": 4, 00:20:42.543 "bdev_retry_count": 3, 00:20:42.543 "transport_ack_timeout": 0, 00:20:42.543 "ctrlr_loss_timeout_sec": 0, 00:20:42.543 "reconnect_delay_sec": 0, 00:20:42.543 "fast_io_fail_timeout_sec": 0, 00:20:42.543 "disable_auto_failback": false, 00:20:42.543 "generate_uuids": false, 00:20:42.543 "transport_tos": 0, 00:20:42.543 "nvme_error_stat": false, 00:20:42.543 "rdma_srq_size": 0, 00:20:42.543 "io_path_stat": false, 00:20:42.543 "allow_accel_sequence": false, 00:20:42.543 "rdma_max_cq_size": 0, 00:20:42.543 "rdma_cm_event_timeout_ms": 0, 00:20:42.543 "dhchap_digests": [ 00:20:42.543 "sha256", 00:20:42.543 "sha384", 00:20:42.543 "sha512" 00:20:42.543 ], 00:20:42.543 "dhchap_dhgroups": [ 00:20:42.543 "null", 00:20:42.543 "ffdhe2048", 00:20:42.543 "ffdhe3072", 00:20:42.543 "ffdhe4096", 00:20:42.543 "ffdhe6144", 00:20:42.543 "ffdhe8192" 00:20:42.543 ] 00:20:42.543 } 00:20:42.543 }, 00:20:42.543 { 00:20:42.543 "method": "bdev_nvme_attach_controller", 00:20:42.543 "params": { 00:20:42.543 "name": "TLSTEST", 00:20:42.543 "trtype": "TCP", 00:20:42.543 "adrfam": "IPv4", 00:20:42.543 "traddr": "10.0.0.2", 00:20:42.543 "trsvcid": "4420", 00:20:42.543 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:42.543 "prchk_reftag": false, 00:20:42.543 "prchk_guard": false, 00:20:42.543 "ctrlr_loss_timeout_sec": 0, 00:20:42.543 
"reconnect_delay_sec": 0, 00:20:42.543 "fast_io_fail_timeout_sec": 0, 00:20:42.543 "psk": "/tmp/tmp.DcLMvI30P1", 00:20:42.543 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:42.543 "hdgst": false, 00:20:42.543 "ddgst": false 00:20:42.543 } 00:20:42.543 }, 00:20:42.543 { 00:20:42.543 "method": "bdev_nvme_set_hotplug", 00:20:42.543 "params": { 00:20:42.543 "period_us": 100000, 00:20:42.543 "enable": false 00:20:42.543 } 00:20:42.543 }, 00:20:42.543 { 00:20:42.543 "method": "bdev_wait_for_examine" 00:20:42.543 } 00:20:42.543 ] 00:20:42.543 }, 00:20:42.543 { 00:20:42.543 "subsystem": "nbd", 00:20:42.543 "config": [] 00:20:42.543 } 00:20:42.543 ] 00:20:42.543 }' 00:20:42.543 [2024-07-25 15:16:34.654152] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:20:42.543 [2024-07-25 15:16:34.654215] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid286872 ] 00:20:42.543 EAL: No free 2048 kB hugepages reported on node 1 00:20:42.543 [2024-07-25 15:16:34.703226] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:42.805 [2024-07-25 15:16:34.755574] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:42.805 [2024-07-25 15:16:34.880088] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:42.805 [2024-07-25 15:16:34.880149] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:43.377 15:16:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:43.377 15:16:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:43.378 15:16:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@211 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:43.378 Running I/O for 10 seconds... 00:20:55.615 00:20:55.615 Latency(us) 00:20:55.615 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:55.615 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:55.615 Verification LBA range: start 0x0 length 0x2000 00:20:55.615 TLSTESTn1 : 10.07 2119.86 8.28 0.00 0.00 60187.80 6089.39 96556.37 00:20:55.615 =================================================================================================================== 00:20:55.615 Total : 2119.86 8.28 0.00 0.00 60187.80 6089.39 96556.37 00:20:55.615 0 00:20:55.615 15:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:55.615 15:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@214 -- # killprocess 286872 00:20:55.615 15:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 286872 ']' 00:20:55.615 15:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 286872 00:20:55.615 15:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:55.615 15:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:55.615 15:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 286872 00:20:55.615 15:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:20:55.615 15:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:20:55.615 15:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 286872' 00:20:55.615 killing process with pid 286872 00:20:55.615 15:16:45 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 286872 00:20:55.615 Received shutdown signal, test time was about 10.000000 seconds 00:20:55.615 00:20:55.615 Latency(us) 00:20:55.615 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:55.615 =================================================================================================================== 00:20:55.615 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:55.615 [2024-07-25 15:16:45.648094] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:55.615 15:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 286872 00:20:55.615 15:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # killprocess 286688 00:20:55.615 15:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 286688 ']' 00:20:55.615 15:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 286688 00:20:55.615 15:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:55.615 15:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:55.615 15:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 286688 00:20:55.615 15:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:55.615 15:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:55.615 15:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 286688' 00:20:55.615 killing process with pid 286688 00:20:55.615 15:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 286688 00:20:55.615 [2024-07-25 
15:16:45.814636] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:55.615 15:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 286688 00:20:55.615 15:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:20:55.615 15:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:55.615 15:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:55.615 15:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:55.615 15:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=289061 00:20:55.615 15:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 289061 00:20:55.615 15:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:55.615 15:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 289061 ']' 00:20:55.615 15:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:55.615 15:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:55.615 15:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:55.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:55.615 15:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:55.615 15:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:55.615 [2024-07-25 15:16:45.994506] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:20:55.615 [2024-07-25 15:16:45.994560] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:55.615 EAL: No free 2048 kB hugepages reported on node 1 00:20:55.615 [2024-07-25 15:16:46.060350] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:55.615 [2024-07-25 15:16:46.124532] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:55.615 [2024-07-25 15:16:46.124570] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:55.615 [2024-07-25 15:16:46.124577] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:55.615 [2024-07-25 15:16:46.124584] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:55.615 [2024-07-25 15:16:46.124590] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:55.615 [2024-07-25 15:16:46.124610] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:55.615 15:16:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:55.615 15:16:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:55.615 15:16:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:55.615 15:16:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:55.615 15:16:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:55.615 15:16:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:55.615 15:16:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.DcLMvI30P1 00:20:55.615 15:16:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.DcLMvI30P1 00:20:55.615 15:16:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:55.615 [2024-07-25 15:16:46.951313] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:55.615 15:16:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:55.615 15:16:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:55.615 [2024-07-25 15:16:47.280138] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:55.615 [2024-07-25 15:16:47.280353] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:20:55.615 15:16:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:55.615 malloc0 00:20:55.615 15:16:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:55.615 15:16:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.DcLMvI30P1 00:20:55.615 [2024-07-25 15:16:47.771947] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:55.615 15:16:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:20:55.615 15:16:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=289416 00:20:55.615 15:16:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:55.615 15:16:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 289416 /var/tmp/bdevperf.sock 00:20:55.615 15:16:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 289416 ']' 00:20:55.877 15:16:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:55.877 15:16:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:55.877 15:16:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:20:55.877 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:55.877 15:16:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:55.877 15:16:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:55.877 [2024-07-25 15:16:47.838133] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:20:55.877 [2024-07-25 15:16:47.838182] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid289416 ] 00:20:55.877 EAL: No free 2048 kB hugepages reported on node 1 00:20:55.877 [2024-07-25 15:16:47.912346] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:55.877 [2024-07-25 15:16:47.965818] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:56.448 15:16:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:56.448 15:16:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:56.448 15:16:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.DcLMvI30P1 00:20:56.709 15:16:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:56.969 [2024-07-25 15:16:48.911781] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:56.969 nvme0n1 00:20:56.969 15:16:49 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:56.969 Running I/O for 1 seconds... 00:20:58.356 00:20:58.356 Latency(us) 00:20:58.356 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:58.356 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:58.356 Verification LBA range: start 0x0 length 0x2000 00:20:58.356 nvme0n1 : 1.09 1473.81 5.76 0.00 0.00 84146.31 4969.81 133693.44 00:20:58.356 =================================================================================================================== 00:20:58.356 Total : 1473.81 5.76 0.00 0.00 84146.31 4969.81 133693.44 00:20:58.356 0 00:20:58.356 15:16:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # killprocess 289416 00:20:58.356 15:16:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 289416 ']' 00:20:58.356 15:16:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 289416 00:20:58.356 15:16:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:58.356 15:16:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:58.356 15:16:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 289416 00:20:58.356 15:16:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:58.356 15:16:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:58.356 15:16:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 289416' 00:20:58.356 killing process with pid 289416 00:20:58.356 15:16:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 289416 
00:20:58.356 Received shutdown signal, test time was about 1.000000 seconds 00:20:58.356 00:20:58.356 Latency(us) 00:20:58.356 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:58.356 =================================================================================================================== 00:20:58.356 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:58.356 15:16:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 289416 00:20:58.356 15:16:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@235 -- # killprocess 289061 00:20:58.356 15:16:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 289061 ']' 00:20:58.356 15:16:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 289061 00:20:58.356 15:16:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:58.356 15:16:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:58.356 15:16:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 289061 00:20:58.356 15:16:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:58.356 15:16:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:58.356 15:16:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 289061' 00:20:58.356 killing process with pid 289061 00:20:58.356 15:16:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 289061 00:20:58.356 [2024-07-25 15:16:50.422225] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:58.356 15:16:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 289061 00:20:58.617 15:16:50 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@240 -- # nvmfappstart 00:20:58.617 15:16:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:58.617 15:16:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:58.617 15:16:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:58.617 15:16:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=290096 00:20:58.618 15:16:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 290096 00:20:58.618 15:16:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:58.618 15:16:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 290096 ']' 00:20:58.618 15:16:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:58.618 15:16:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:58.618 15:16:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:58.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:58.618 15:16:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:58.618 15:16:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:58.618 [2024-07-25 15:16:50.627278] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:20:58.618 [2024-07-25 15:16:50.627339] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:58.618 EAL: No free 2048 kB hugepages reported on node 1 00:20:58.618 [2024-07-25 15:16:50.691742] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:58.618 [2024-07-25 15:16:50.756888] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:58.618 [2024-07-25 15:16:50.756926] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:58.618 [2024-07-25 15:16:50.756934] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:58.618 [2024-07-25 15:16:50.756940] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:58.618 [2024-07-25 15:16:50.756946] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:58.618 [2024-07-25 15:16:50.756964] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:59.192 15:16:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:59.192 15:16:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:59.192 15:16:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:59.192 15:16:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:59.192 15:16:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:59.454 15:16:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:59.454 15:16:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@241 -- # rpc_cmd 00:20:59.454 15:16:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.454 15:16:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:59.454 [2024-07-25 15:16:51.423272] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:59.454 malloc0 00:20:59.454 [2024-07-25 15:16:51.450059] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:59.454 [2024-07-25 15:16:51.462389] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:59.454 15:16:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.454 15:16:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # bdevperf_pid=290139 00:20:59.454 15:16:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # waitforlisten 290139 /var/tmp/bdevperf.sock 00:20:59.454 15:16:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@252 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 
2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:20:59.454 15:16:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 290139 ']' 00:20:59.454 15:16:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:59.454 15:16:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:59.454 15:16:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:59.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:59.454 15:16:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:59.454 15:16:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:59.454 [2024-07-25 15:16:51.536239] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:20:59.454 [2024-07-25 15:16:51.536285] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid290139 ] 00:20:59.454 EAL: No free 2048 kB hugepages reported on node 1 00:20:59.454 [2024-07-25 15:16:51.609582] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:59.715 [2024-07-25 15:16:51.663569] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:00.288 15:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:00.288 15:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:00.288 15:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@257 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.DcLMvI30P1 00:21:00.288 15:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:00.548 [2024-07-25 15:16:52.609539] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:00.548 nvme0n1 00:21:00.548 15:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@262 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:00.810 Running I/O for 1 seconds... 
00:21:01.768 00:21:01.768 Latency(us) 00:21:01.769 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:01.769 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:01.769 Verification LBA range: start 0x0 length 0x2000 00:21:01.769 nvme0n1 : 1.06 1689.01 6.60 0.00 0.00 73892.94 4915.20 115343.36 00:21:01.769 =================================================================================================================== 00:21:01.769 Total : 1689.01 6.60 0.00 0.00 73892.94 4915.20 115343.36 00:21:01.769 0 00:21:01.769 15:16:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # rpc_cmd save_config 00:21:01.769 15:16:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.769 15:16:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:02.030 15:16:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.030 15:16:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # tgtcfg='{ 00:21:02.030 "subsystems": [ 00:21:02.030 { 00:21:02.030 "subsystem": "keyring", 00:21:02.030 "config": [ 00:21:02.030 { 00:21:02.030 "method": "keyring_file_add_key", 00:21:02.030 "params": { 00:21:02.030 "name": "key0", 00:21:02.030 "path": "/tmp/tmp.DcLMvI30P1" 00:21:02.030 } 00:21:02.030 } 00:21:02.030 ] 00:21:02.030 }, 00:21:02.030 { 00:21:02.030 "subsystem": "iobuf", 00:21:02.030 "config": [ 00:21:02.030 { 00:21:02.030 "method": "iobuf_set_options", 00:21:02.030 "params": { 00:21:02.030 "small_pool_count": 8192, 00:21:02.030 "large_pool_count": 1024, 00:21:02.030 "small_bufsize": 8192, 00:21:02.030 "large_bufsize": 135168 00:21:02.030 } 00:21:02.030 } 00:21:02.030 ] 00:21:02.030 }, 00:21:02.030 { 00:21:02.030 "subsystem": "sock", 00:21:02.030 "config": [ 00:21:02.030 { 00:21:02.030 "method": "sock_set_default_impl", 00:21:02.030 "params": { 00:21:02.030 "impl_name": "posix" 00:21:02.030 } 
00:21:02.030 }, 00:21:02.030 { 00:21:02.030 "method": "sock_impl_set_options", 00:21:02.030 "params": { 00:21:02.030 "impl_name": "ssl", 00:21:02.030 "recv_buf_size": 4096, 00:21:02.030 "send_buf_size": 4096, 00:21:02.030 "enable_recv_pipe": true, 00:21:02.030 "enable_quickack": false, 00:21:02.030 "enable_placement_id": 0, 00:21:02.030 "enable_zerocopy_send_server": true, 00:21:02.030 "enable_zerocopy_send_client": false, 00:21:02.030 "zerocopy_threshold": 0, 00:21:02.030 "tls_version": 0, 00:21:02.030 "enable_ktls": false 00:21:02.030 } 00:21:02.030 }, 00:21:02.030 { 00:21:02.030 "method": "sock_impl_set_options", 00:21:02.030 "params": { 00:21:02.030 "impl_name": "posix", 00:21:02.030 "recv_buf_size": 2097152, 00:21:02.030 "send_buf_size": 2097152, 00:21:02.030 "enable_recv_pipe": true, 00:21:02.030 "enable_quickack": false, 00:21:02.030 "enable_placement_id": 0, 00:21:02.030 "enable_zerocopy_send_server": true, 00:21:02.030 "enable_zerocopy_send_client": false, 00:21:02.030 "zerocopy_threshold": 0, 00:21:02.030 "tls_version": 0, 00:21:02.030 "enable_ktls": false 00:21:02.030 } 00:21:02.030 } 00:21:02.030 ] 00:21:02.030 }, 00:21:02.030 { 00:21:02.030 "subsystem": "vmd", 00:21:02.030 "config": [] 00:21:02.030 }, 00:21:02.030 { 00:21:02.030 "subsystem": "accel", 00:21:02.030 "config": [ 00:21:02.030 { 00:21:02.030 "method": "accel_set_options", 00:21:02.030 "params": { 00:21:02.031 "small_cache_size": 128, 00:21:02.031 "large_cache_size": 16, 00:21:02.031 "task_count": 2048, 00:21:02.031 "sequence_count": 2048, 00:21:02.031 "buf_count": 2048 00:21:02.031 } 00:21:02.031 } 00:21:02.031 ] 00:21:02.031 }, 00:21:02.031 { 00:21:02.031 "subsystem": "bdev", 00:21:02.031 "config": [ 00:21:02.031 { 00:21:02.031 "method": "bdev_set_options", 00:21:02.031 "params": { 00:21:02.031 "bdev_io_pool_size": 65535, 00:21:02.031 "bdev_io_cache_size": 256, 00:21:02.031 "bdev_auto_examine": true, 00:21:02.031 "iobuf_small_cache_size": 128, 00:21:02.031 "iobuf_large_cache_size": 16 
00:21:02.031 } 00:21:02.031 }, 00:21:02.031 { 00:21:02.031 "method": "bdev_raid_set_options", 00:21:02.031 "params": { 00:21:02.031 "process_window_size_kb": 1024, 00:21:02.031 "process_max_bandwidth_mb_sec": 0 00:21:02.031 } 00:21:02.031 }, 00:21:02.031 { 00:21:02.031 "method": "bdev_iscsi_set_options", 00:21:02.031 "params": { 00:21:02.031 "timeout_sec": 30 00:21:02.031 } 00:21:02.031 }, 00:21:02.031 { 00:21:02.031 "method": "bdev_nvme_set_options", 00:21:02.031 "params": { 00:21:02.031 "action_on_timeout": "none", 00:21:02.031 "timeout_us": 0, 00:21:02.031 "timeout_admin_us": 0, 00:21:02.031 "keep_alive_timeout_ms": 10000, 00:21:02.031 "arbitration_burst": 0, 00:21:02.031 "low_priority_weight": 0, 00:21:02.031 "medium_priority_weight": 0, 00:21:02.031 "high_priority_weight": 0, 00:21:02.031 "nvme_adminq_poll_period_us": 10000, 00:21:02.031 "nvme_ioq_poll_period_us": 0, 00:21:02.031 "io_queue_requests": 0, 00:21:02.031 "delay_cmd_submit": true, 00:21:02.031 "transport_retry_count": 4, 00:21:02.031 "bdev_retry_count": 3, 00:21:02.031 "transport_ack_timeout": 0, 00:21:02.031 "ctrlr_loss_timeout_sec": 0, 00:21:02.031 "reconnect_delay_sec": 0, 00:21:02.031 "fast_io_fail_timeout_sec": 0, 00:21:02.031 "disable_auto_failback": false, 00:21:02.031 "generate_uuids": false, 00:21:02.031 "transport_tos": 0, 00:21:02.031 "nvme_error_stat": false, 00:21:02.031 "rdma_srq_size": 0, 00:21:02.031 "io_path_stat": false, 00:21:02.031 "allow_accel_sequence": false, 00:21:02.031 "rdma_max_cq_size": 0, 00:21:02.031 "rdma_cm_event_timeout_ms": 0, 00:21:02.031 "dhchap_digests": [ 00:21:02.031 "sha256", 00:21:02.031 "sha384", 00:21:02.031 "sha512" 00:21:02.031 ], 00:21:02.031 "dhchap_dhgroups": [ 00:21:02.031 "null", 00:21:02.031 "ffdhe2048", 00:21:02.031 "ffdhe3072", 00:21:02.031 "ffdhe4096", 00:21:02.031 "ffdhe6144", 00:21:02.031 "ffdhe8192" 00:21:02.031 ] 00:21:02.031 } 00:21:02.031 }, 00:21:02.031 { 00:21:02.031 "method": "bdev_nvme_set_hotplug", 00:21:02.031 "params": { 00:21:02.031 
"period_us": 100000, 00:21:02.031 "enable": false 00:21:02.031 } 00:21:02.031 }, 00:21:02.031 { 00:21:02.031 "method": "bdev_malloc_create", 00:21:02.031 "params": { 00:21:02.031 "name": "malloc0", 00:21:02.031 "num_blocks": 8192, 00:21:02.031 "block_size": 4096, 00:21:02.031 "physical_block_size": 4096, 00:21:02.031 "uuid": "d7c8dbea-55b2-4d53-acd7-a94bfd01ab5c", 00:21:02.031 "optimal_io_boundary": 0, 00:21:02.031 "md_size": 0, 00:21:02.031 "dif_type": 0, 00:21:02.031 "dif_is_head_of_md": false, 00:21:02.031 "dif_pi_format": 0 00:21:02.031 } 00:21:02.031 }, 00:21:02.031 { 00:21:02.031 "method": "bdev_wait_for_examine" 00:21:02.031 } 00:21:02.031 ] 00:21:02.031 }, 00:21:02.031 { 00:21:02.031 "subsystem": "nbd", 00:21:02.031 "config": [] 00:21:02.031 }, 00:21:02.031 { 00:21:02.031 "subsystem": "scheduler", 00:21:02.031 "config": [ 00:21:02.031 { 00:21:02.031 "method": "framework_set_scheduler", 00:21:02.031 "params": { 00:21:02.031 "name": "static" 00:21:02.031 } 00:21:02.031 } 00:21:02.031 ] 00:21:02.031 }, 00:21:02.031 { 00:21:02.031 "subsystem": "nvmf", 00:21:02.031 "config": [ 00:21:02.031 { 00:21:02.031 "method": "nvmf_set_config", 00:21:02.031 "params": { 00:21:02.031 "discovery_filter": "match_any", 00:21:02.031 "admin_cmd_passthru": { 00:21:02.031 "identify_ctrlr": false 00:21:02.031 } 00:21:02.031 } 00:21:02.031 }, 00:21:02.031 { 00:21:02.031 "method": "nvmf_set_max_subsystems", 00:21:02.031 "params": { 00:21:02.031 "max_subsystems": 1024 00:21:02.031 } 00:21:02.031 }, 00:21:02.031 { 00:21:02.031 "method": "nvmf_set_crdt", 00:21:02.031 "params": { 00:21:02.031 "crdt1": 0, 00:21:02.031 "crdt2": 0, 00:21:02.031 "crdt3": 0 00:21:02.031 } 00:21:02.031 }, 00:21:02.031 { 00:21:02.031 "method": "nvmf_create_transport", 00:21:02.031 "params": { 00:21:02.031 "trtype": "TCP", 00:21:02.031 "max_queue_depth": 128, 00:21:02.031 "max_io_qpairs_per_ctrlr": 127, 00:21:02.031 "in_capsule_data_size": 4096, 00:21:02.031 "max_io_size": 131072, 00:21:02.031 "io_unit_size": 
131072, 00:21:02.031 "max_aq_depth": 128, 00:21:02.031 "num_shared_buffers": 511, 00:21:02.031 "buf_cache_size": 4294967295, 00:21:02.031 "dif_insert_or_strip": false, 00:21:02.031 "zcopy": false, 00:21:02.031 "c2h_success": false, 00:21:02.031 "sock_priority": 0, 00:21:02.031 "abort_timeout_sec": 1, 00:21:02.031 "ack_timeout": 0, 00:21:02.031 "data_wr_pool_size": 0 00:21:02.031 } 00:21:02.031 }, 00:21:02.031 { 00:21:02.031 "method": "nvmf_create_subsystem", 00:21:02.031 "params": { 00:21:02.031 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:02.031 "allow_any_host": false, 00:21:02.031 "serial_number": "00000000000000000000", 00:21:02.031 "model_number": "SPDK bdev Controller", 00:21:02.031 "max_namespaces": 32, 00:21:02.031 "min_cntlid": 1, 00:21:02.031 "max_cntlid": 65519, 00:21:02.031 "ana_reporting": false 00:21:02.031 } 00:21:02.031 }, 00:21:02.031 { 00:21:02.031 "method": "nvmf_subsystem_add_host", 00:21:02.031 "params": { 00:21:02.031 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:02.031 "host": "nqn.2016-06.io.spdk:host1", 00:21:02.031 "psk": "key0" 00:21:02.031 } 00:21:02.031 }, 00:21:02.031 { 00:21:02.031 "method": "nvmf_subsystem_add_ns", 00:21:02.031 "params": { 00:21:02.031 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:02.031 "namespace": { 00:21:02.031 "nsid": 1, 00:21:02.031 "bdev_name": "malloc0", 00:21:02.031 "nguid": "D7C8DBEA55B24D53ACD7A94BFD01AB5C", 00:21:02.031 "uuid": "d7c8dbea-55b2-4d53-acd7-a94bfd01ab5c", 00:21:02.031 "no_auto_visible": false 00:21:02.031 } 00:21:02.031 } 00:21:02.031 }, 00:21:02.031 { 00:21:02.031 "method": "nvmf_subsystem_add_listener", 00:21:02.031 "params": { 00:21:02.031 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:02.031 "listen_address": { 00:21:02.031 "trtype": "TCP", 00:21:02.031 "adrfam": "IPv4", 00:21:02.031 "traddr": "10.0.0.2", 00:21:02.031 "trsvcid": "4420" 00:21:02.031 }, 00:21:02.031 "secure_channel": false, 00:21:02.031 "sock_impl": "ssl" 00:21:02.031 } 00:21:02.031 } 00:21:02.031 ] 00:21:02.031 } 00:21:02.031 ] 
00:21:02.031 }' 00:21:02.031 15:16:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:21:02.293 15:16:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # bperfcfg='{ 00:21:02.293 "subsystems": [ 00:21:02.293 { 00:21:02.293 "subsystem": "keyring", 00:21:02.293 "config": [ 00:21:02.293 { 00:21:02.293 "method": "keyring_file_add_key", 00:21:02.293 "params": { 00:21:02.293 "name": "key0", 00:21:02.293 "path": "/tmp/tmp.DcLMvI30P1" 00:21:02.293 } 00:21:02.293 } 00:21:02.293 ] 00:21:02.293 }, 00:21:02.293 { 00:21:02.293 "subsystem": "iobuf", 00:21:02.293 "config": [ 00:21:02.293 { 00:21:02.293 "method": "iobuf_set_options", 00:21:02.293 "params": { 00:21:02.293 "small_pool_count": 8192, 00:21:02.293 "large_pool_count": 1024, 00:21:02.293 "small_bufsize": 8192, 00:21:02.293 "large_bufsize": 135168 00:21:02.293 } 00:21:02.293 } 00:21:02.293 ] 00:21:02.293 }, 00:21:02.293 { 00:21:02.293 "subsystem": "sock", 00:21:02.293 "config": [ 00:21:02.293 { 00:21:02.293 "method": "sock_set_default_impl", 00:21:02.293 "params": { 00:21:02.293 "impl_name": "posix" 00:21:02.293 } 00:21:02.293 }, 00:21:02.293 { 00:21:02.293 "method": "sock_impl_set_options", 00:21:02.293 "params": { 00:21:02.293 "impl_name": "ssl", 00:21:02.293 "recv_buf_size": 4096, 00:21:02.293 "send_buf_size": 4096, 00:21:02.293 "enable_recv_pipe": true, 00:21:02.293 "enable_quickack": false, 00:21:02.293 "enable_placement_id": 0, 00:21:02.293 "enable_zerocopy_send_server": true, 00:21:02.293 "enable_zerocopy_send_client": false, 00:21:02.293 "zerocopy_threshold": 0, 00:21:02.293 "tls_version": 0, 00:21:02.293 "enable_ktls": false 00:21:02.293 } 00:21:02.293 }, 00:21:02.293 { 00:21:02.293 "method": "sock_impl_set_options", 00:21:02.293 "params": { 00:21:02.293 "impl_name": "posix", 00:21:02.293 "recv_buf_size": 2097152, 00:21:02.293 "send_buf_size": 2097152, 00:21:02.293 
"enable_recv_pipe": true, 00:21:02.293 "enable_quickack": false, 00:21:02.293 "enable_placement_id": 0, 00:21:02.293 "enable_zerocopy_send_server": true, 00:21:02.293 "enable_zerocopy_send_client": false, 00:21:02.293 "zerocopy_threshold": 0, 00:21:02.293 "tls_version": 0, 00:21:02.293 "enable_ktls": false 00:21:02.293 } 00:21:02.293 } 00:21:02.293 ] 00:21:02.293 }, 00:21:02.293 { 00:21:02.293 "subsystem": "vmd", 00:21:02.293 "config": [] 00:21:02.293 }, 00:21:02.293 { 00:21:02.293 "subsystem": "accel", 00:21:02.293 "config": [ 00:21:02.293 { 00:21:02.293 "method": "accel_set_options", 00:21:02.293 "params": { 00:21:02.293 "small_cache_size": 128, 00:21:02.293 "large_cache_size": 16, 00:21:02.293 "task_count": 2048, 00:21:02.293 "sequence_count": 2048, 00:21:02.293 "buf_count": 2048 00:21:02.293 } 00:21:02.293 } 00:21:02.293 ] 00:21:02.293 }, 00:21:02.293 { 00:21:02.293 "subsystem": "bdev", 00:21:02.293 "config": [ 00:21:02.293 { 00:21:02.293 "method": "bdev_set_options", 00:21:02.293 "params": { 00:21:02.293 "bdev_io_pool_size": 65535, 00:21:02.293 "bdev_io_cache_size": 256, 00:21:02.293 "bdev_auto_examine": true, 00:21:02.293 "iobuf_small_cache_size": 128, 00:21:02.293 "iobuf_large_cache_size": 16 00:21:02.293 } 00:21:02.293 }, 00:21:02.293 { 00:21:02.293 "method": "bdev_raid_set_options", 00:21:02.293 "params": { 00:21:02.293 "process_window_size_kb": 1024, 00:21:02.293 "process_max_bandwidth_mb_sec": 0 00:21:02.293 } 00:21:02.293 }, 00:21:02.293 { 00:21:02.293 "method": "bdev_iscsi_set_options", 00:21:02.293 "params": { 00:21:02.293 "timeout_sec": 30 00:21:02.293 } 00:21:02.293 }, 00:21:02.293 { 00:21:02.293 "method": "bdev_nvme_set_options", 00:21:02.293 "params": { 00:21:02.293 "action_on_timeout": "none", 00:21:02.293 "timeout_us": 0, 00:21:02.293 "timeout_admin_us": 0, 00:21:02.293 "keep_alive_timeout_ms": 10000, 00:21:02.293 "arbitration_burst": 0, 00:21:02.293 "low_priority_weight": 0, 00:21:02.293 "medium_priority_weight": 0, 00:21:02.293 
"high_priority_weight": 0, 00:21:02.293 "nvme_adminq_poll_period_us": 10000, 00:21:02.293 "nvme_ioq_poll_period_us": 0, 00:21:02.293 "io_queue_requests": 512, 00:21:02.293 "delay_cmd_submit": true, 00:21:02.293 "transport_retry_count": 4, 00:21:02.293 "bdev_retry_count": 3, 00:21:02.293 "transport_ack_timeout": 0, 00:21:02.293 "ctrlr_loss_timeout_sec": 0, 00:21:02.293 "reconnect_delay_sec": 0, 00:21:02.293 "fast_io_fail_timeout_sec": 0, 00:21:02.293 "disable_auto_failback": false, 00:21:02.293 "generate_uuids": false, 00:21:02.293 "transport_tos": 0, 00:21:02.293 "nvme_error_stat": false, 00:21:02.293 "rdma_srq_size": 0, 00:21:02.294 "io_path_stat": false, 00:21:02.294 "allow_accel_sequence": false, 00:21:02.294 "rdma_max_cq_size": 0, 00:21:02.294 "rdma_cm_event_timeout_ms": 0, 00:21:02.294 "dhchap_digests": [ 00:21:02.294 "sha256", 00:21:02.294 "sha384", 00:21:02.294 "sha512" 00:21:02.294 ], 00:21:02.294 "dhchap_dhgroups": [ 00:21:02.294 "null", 00:21:02.294 "ffdhe2048", 00:21:02.294 "ffdhe3072", 00:21:02.294 "ffdhe4096", 00:21:02.294 "ffdhe6144", 00:21:02.294 "ffdhe8192" 00:21:02.294 ] 00:21:02.294 } 00:21:02.294 }, 00:21:02.294 { 00:21:02.294 "method": "bdev_nvme_attach_controller", 00:21:02.294 "params": { 00:21:02.294 "name": "nvme0", 00:21:02.294 "trtype": "TCP", 00:21:02.294 "adrfam": "IPv4", 00:21:02.294 "traddr": "10.0.0.2", 00:21:02.294 "trsvcid": "4420", 00:21:02.294 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:02.294 "prchk_reftag": false, 00:21:02.294 "prchk_guard": false, 00:21:02.294 "ctrlr_loss_timeout_sec": 0, 00:21:02.294 "reconnect_delay_sec": 0, 00:21:02.294 "fast_io_fail_timeout_sec": 0, 00:21:02.294 "psk": "key0", 00:21:02.294 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:02.294 "hdgst": false, 00:21:02.294 "ddgst": false 00:21:02.294 } 00:21:02.294 }, 00:21:02.294 { 00:21:02.294 "method": "bdev_nvme_set_hotplug", 00:21:02.294 "params": { 00:21:02.294 "period_us": 100000, 00:21:02.294 "enable": false 00:21:02.294 } 00:21:02.294 }, 
00:21:02.294 { 00:21:02.294 "method": "bdev_enable_histogram", 00:21:02.294 "params": { 00:21:02.294 "name": "nvme0n1", 00:21:02.294 "enable": true 00:21:02.294 } 00:21:02.294 }, 00:21:02.294 { 00:21:02.294 "method": "bdev_wait_for_examine" 00:21:02.294 } 00:21:02.294 ] 00:21:02.294 }, 00:21:02.294 { 00:21:02.294 "subsystem": "nbd", 00:21:02.294 "config": [] 00:21:02.294 } 00:21:02.294 ] 00:21:02.294 }' 00:21:02.294 15:16:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # killprocess 290139 00:21:02.294 15:16:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 290139 ']' 00:21:02.294 15:16:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 290139 00:21:02.294 15:16:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:02.294 15:16:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:02.294 15:16:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 290139 00:21:02.294 15:16:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:02.294 15:16:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:02.294 15:16:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 290139' 00:21:02.294 killing process with pid 290139 00:21:02.294 15:16:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 290139 00:21:02.294 Received shutdown signal, test time was about 1.000000 seconds 00:21:02.294 00:21:02.294 Latency(us) 00:21:02.294 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:02.294 =================================================================================================================== 00:21:02.294 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 
00:21:02.294 15:16:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 290139 00:21:02.294 15:16:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@269 -- # killprocess 290096 00:21:02.294 15:16:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 290096 ']' 00:21:02.294 15:16:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 290096 00:21:02.294 15:16:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:02.294 15:16:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:02.294 15:16:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 290096 00:21:02.294 15:16:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:02.294 15:16:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:02.294 15:16:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 290096' 00:21:02.294 killing process with pid 290096 00:21:02.294 15:16:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 290096 00:21:02.294 15:16:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 290096 00:21:02.556 15:16:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # nvmfappstart -c /dev/fd/62 00:21:02.556 15:16:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:02.556 15:16:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:02.556 15:16:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # echo '{ 00:21:02.556 "subsystems": [ 00:21:02.556 { 00:21:02.556 "subsystem": "keyring", 00:21:02.556 "config": [ 00:21:02.556 { 00:21:02.556 "method": 
"keyring_file_add_key", 00:21:02.556 "params": { 00:21:02.556 "name": "key0", 00:21:02.556 "path": "/tmp/tmp.DcLMvI30P1" 00:21:02.556 } 00:21:02.556 } 00:21:02.556 ] 00:21:02.556 }, 00:21:02.556 { 00:21:02.556 "subsystem": "iobuf", 00:21:02.556 "config": [ 00:21:02.556 { 00:21:02.556 "method": "iobuf_set_options", 00:21:02.556 "params": { 00:21:02.556 "small_pool_count": 8192, 00:21:02.556 "large_pool_count": 1024, 00:21:02.556 "small_bufsize": 8192, 00:21:02.556 "large_bufsize": 135168 00:21:02.556 } 00:21:02.556 } 00:21:02.556 ] 00:21:02.556 }, 00:21:02.556 { 00:21:02.556 "subsystem": "sock", 00:21:02.556 "config": [ 00:21:02.556 { 00:21:02.556 "method": "sock_set_default_impl", 00:21:02.556 "params": { 00:21:02.556 "impl_name": "posix" 00:21:02.556 } 00:21:02.556 }, 00:21:02.556 { 00:21:02.556 "method": "sock_impl_set_options", 00:21:02.556 "params": { 00:21:02.556 "impl_name": "ssl", 00:21:02.556 "recv_buf_size": 4096, 00:21:02.556 "send_buf_size": 4096, 00:21:02.556 "enable_recv_pipe": true, 00:21:02.556 "enable_quickack": false, 00:21:02.556 "enable_placement_id": 0, 00:21:02.556 "enable_zerocopy_send_server": true, 00:21:02.556 "enable_zerocopy_send_client": false, 00:21:02.556 "zerocopy_threshold": 0, 00:21:02.556 "tls_version": 0, 00:21:02.556 "enable_ktls": false 00:21:02.556 } 00:21:02.556 }, 00:21:02.556 { 00:21:02.556 "method": "sock_impl_set_options", 00:21:02.557 "params": { 00:21:02.557 "impl_name": "posix", 00:21:02.557 "recv_buf_size": 2097152, 00:21:02.557 "send_buf_size": 2097152, 00:21:02.557 "enable_recv_pipe": true, 00:21:02.557 "enable_quickack": false, 00:21:02.557 "enable_placement_id": 0, 00:21:02.557 "enable_zerocopy_send_server": true, 00:21:02.557 "enable_zerocopy_send_client": false, 00:21:02.557 "zerocopy_threshold": 0, 00:21:02.557 "tls_version": 0, 00:21:02.557 "enable_ktls": false 00:21:02.557 } 00:21:02.557 } 00:21:02.557 ] 00:21:02.557 }, 00:21:02.557 { 00:21:02.557 "subsystem": "vmd", 00:21:02.557 "config": [] 00:21:02.557 }, 
00:21:02.557 { 00:21:02.557 "subsystem": "accel", 00:21:02.557 "config": [ 00:21:02.557 { 00:21:02.557 "method": "accel_set_options", 00:21:02.557 "params": { 00:21:02.557 "small_cache_size": 128, 00:21:02.557 "large_cache_size": 16, 00:21:02.557 "task_count": 2048, 00:21:02.557 "sequence_count": 2048, 00:21:02.557 "buf_count": 2048 00:21:02.557 } 00:21:02.557 } 00:21:02.557 ] 00:21:02.557 }, 00:21:02.557 { 00:21:02.557 "subsystem": "bdev", 00:21:02.557 "config": [ 00:21:02.557 { 00:21:02.557 "method": "bdev_set_options", 00:21:02.557 "params": { 00:21:02.557 "bdev_io_pool_size": 65535, 00:21:02.557 "bdev_io_cache_size": 256, 00:21:02.557 "bdev_auto_examine": true, 00:21:02.557 "iobuf_small_cache_size": 128, 00:21:02.557 "iobuf_large_cache_size": 16 00:21:02.557 } 00:21:02.557 }, 00:21:02.557 { 00:21:02.557 "method": "bdev_raid_set_options", 00:21:02.557 "params": { 00:21:02.557 "process_window_size_kb": 1024, 00:21:02.557 "process_max_bandwidth_mb_sec": 0 00:21:02.557 } 00:21:02.557 }, 00:21:02.557 { 00:21:02.557 "method": "bdev_iscsi_set_options", 00:21:02.557 "params": { 00:21:02.557 "timeout_sec": 30 00:21:02.557 } 00:21:02.557 }, 00:21:02.557 { 00:21:02.557 "method": "bdev_nvme_set_options", 00:21:02.557 "params": { 00:21:02.557 "action_on_timeout": "none", 00:21:02.557 "timeout_us": 0, 00:21:02.557 "timeout_admin_us": 0, 00:21:02.557 "keep_alive_timeout_ms": 10000, 00:21:02.557 "arbitration_burst": 0, 00:21:02.557 "low_priority_weight": 0, 00:21:02.557 "medium_priority_weight": 0, 00:21:02.557 "high_priority_weight": 0, 00:21:02.557 "nvme_adminq_poll_period_us": 10000, 00:21:02.557 "nvme_ioq_poll_period_us": 0, 00:21:02.557 "io_queue_requests": 0, 00:21:02.557 "delay_cmd_submit": true, 00:21:02.557 "transport_retry_count": 4, 00:21:02.557 "bdev_retry_count": 3, 00:21:02.557 "transport_ack_timeout": 0, 00:21:02.557 "ctrlr_loss_timeout_sec": 0, 00:21:02.557 "reconnect_delay_sec": 0, 00:21:02.557 "fast_io_fail_timeout_sec": 0, 00:21:02.557 
"disable_auto_failback": false, 00:21:02.557 "generate_uuids": false, 00:21:02.557 "transport_tos": 0, 00:21:02.557 "nvme_error_stat": false, 00:21:02.557 "rdma_srq_size": 0, 00:21:02.557 "io_path_stat": false, 00:21:02.557 "allow_accel_sequence": false, 00:21:02.557 "rdma_max_cq_size": 0, 00:21:02.557 "rdma_cm_event_timeout_ms": 0, 00:21:02.557 "dhchap_digests": [ 00:21:02.557 "sha256", 00:21:02.557 "sha384", 00:21:02.557 "sha512" 00:21:02.557 ], 00:21:02.557 "dhchap_dhgroups": [ 00:21:02.557 "null", 00:21:02.557 "ffdhe2048", 00:21:02.557 "ffdhe3072", 00:21:02.557 "ffdhe4096", 00:21:02.557 "ffdhe6144", 00:21:02.557 "ffdhe8192" 00:21:02.557 ] 00:21:02.557 } 00:21:02.557 }, 00:21:02.557 { 00:21:02.557 "method": "bdev_nvme_set_hotplug", 00:21:02.557 "params": { 00:21:02.557 "period_us": 100000, 00:21:02.557 "enable": false 00:21:02.557 } 00:21:02.557 }, 00:21:02.557 { 00:21:02.557 "method": "bdev_malloc_create", 00:21:02.557 "params": { 00:21:02.557 "name": "malloc0", 00:21:02.557 "num_blocks": 8192, 00:21:02.557 "block_size": 4096, 00:21:02.557 "physical_block_size": 4096, 00:21:02.557 "uuid": "d7c8dbea-55b2-4d53-acd7-a94bfd01ab5c", 00:21:02.557 "optimal_io_boundary": 0, 00:21:02.557 "md_size": 0, 00:21:02.557 "dif_type": 0, 00:21:02.557 "dif_is_head_of_md": false, 00:21:02.557 "dif_pi_format": 0 00:21:02.557 } 00:21:02.557 }, 00:21:02.557 { 00:21:02.557 "method": "bdev_wait_for_examine" 00:21:02.557 } 00:21:02.557 ] 00:21:02.557 }, 00:21:02.557 { 00:21:02.557 "subsystem": "nbd", 00:21:02.557 "config": [] 00:21:02.557 }, 00:21:02.557 { 00:21:02.557 "subsystem": "scheduler", 00:21:02.557 "config": [ 00:21:02.557 { 00:21:02.557 "method": "framework_set_scheduler", 00:21:02.557 "params": { 00:21:02.557 "name": "static" 00:21:02.557 } 00:21:02.557 } 00:21:02.557 ] 00:21:02.557 }, 00:21:02.557 { 00:21:02.557 "subsystem": "nvmf", 00:21:02.557 "config": [ 00:21:02.557 { 00:21:02.557 "method": "nvmf_set_config", 00:21:02.557 "params": { 00:21:02.557 "discovery_filter": 
"match_any", 00:21:02.557 "admin_cmd_passthru": { 00:21:02.557 "identify_ctrlr": false 00:21:02.557 } 00:21:02.557 } 00:21:02.557 }, 00:21:02.557 { 00:21:02.557 "method": "nvmf_set_max_subsystems", 00:21:02.557 "params": { 00:21:02.557 "max_subsystems": 1024 00:21:02.557 } 00:21:02.557 }, 00:21:02.557 { 00:21:02.557 "method": "nvmf_set_crdt", 00:21:02.557 "params": { 00:21:02.557 "crdt1": 0, 00:21:02.557 "crdt2": 0, 00:21:02.557 "crdt3": 0 00:21:02.557 } 00:21:02.557 }, 00:21:02.557 { 00:21:02.557 "method": "nvmf_create_transport", 00:21:02.557 "params": { 00:21:02.557 "trtype": "TCP", 00:21:02.557 "max_queue_depth": 128, 00:21:02.557 "max_io_qpairs_per_ctrlr": 127, 00:21:02.557 "in_capsule_data_size": 4096, 00:21:02.557 "max_io_size": 131072, 00:21:02.557 "io_unit_size": 131072, 00:21:02.557 "max_aq_depth": 128, 00:21:02.557 "num_shared_buffers": 511, 00:21:02.557 "buf_cache_size": 4294967295, 00:21:02.557 "dif_insert_or_strip": false, 00:21:02.557 "zcopy": false, 00:21:02.557 "c2h_success": false, 00:21:02.557 "sock_priority": 0, 00:21:02.557 "abort_timeout_sec": 1, 00:21:02.557 "ack_timeout": 0, 00:21:02.557 "data_wr_pool_size": 0 00:21:02.557 } 00:21:02.557 }, 00:21:02.557 { 00:21:02.557 "method": "nvmf_create_subsystem", 00:21:02.557 "params": { 00:21:02.557 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:02.557 "allow_any_host": false, 00:21:02.557 "serial_number": "00000000000000000000", 00:21:02.557 "model_number": "SPDK bdev Controller", 00:21:02.557 "max_namespaces": 32, 00:21:02.557 "min_cntlid": 1, 00:21:02.557 "max_cntlid": 65519, 00:21:02.557 "ana_reporting": false 00:21:02.557 } 00:21:02.557 }, 00:21:02.557 { 00:21:02.557 "method": "nvmf_subsystem_add_host", 00:21:02.557 "params": { 00:21:02.557 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:02.557 "host": "nqn.2016-06.io.spdk:host1", 00:21:02.557 "psk": "key0" 00:21:02.557 } 00:21:02.557 }, 00:21:02.557 { 00:21:02.557 "method": "nvmf_subsystem_add_ns", 00:21:02.557 "params": { 00:21:02.557 "nqn": 
"nqn.2016-06.io.spdk:cnode1", 00:21:02.557 "namespace": { 00:21:02.557 "nsid": 1, 00:21:02.557 "bdev_name": "malloc0", 00:21:02.557 "nguid": "D7C8DBEA55B24D53ACD7A94BFD01AB5C", 00:21:02.557 "uuid": "d7c8dbea-55b2-4d53-acd7-a94bfd01ab5c", 00:21:02.557 "no_auto_visible": false 00:21:02.557 } 00:21:02.557 } 00:21:02.557 }, 00:21:02.557 { 00:21:02.557 "method": "nvmf_subsystem_add_listener", 00:21:02.557 "params": { 00:21:02.557 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:02.557 "listen_address": { 00:21:02.557 "trtype": "TCP", 00:21:02.557 "adrfam": "IPv4", 00:21:02.557 "traddr": "10.0.0.2", 00:21:02.557 "trsvcid": "4420" 00:21:02.557 }, 00:21:02.557 "secure_channel": false, 00:21:02.557 "sock_impl": "ssl" 00:21:02.557 } 00:21:02.557 } 00:21:02.557 ] 00:21:02.557 } 00:21:02.557 ] 00:21:02.557 }' 00:21:02.557 15:16:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:02.557 15:16:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=290809 00:21:02.557 15:16:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 290809 00:21:02.557 15:16:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:21:02.557 15:16:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 290809 ']' 00:21:02.557 15:16:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:02.557 15:16:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:02.557 15:16:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:02.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:21:02.558 15:16:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:02.558 15:16:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:02.558 [2024-07-25 15:16:54.651130] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:21:02.558 [2024-07-25 15:16:54.651185] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:02.558 EAL: No free 2048 kB hugepages reported on node 1 00:21:02.558 [2024-07-25 15:16:54.716020] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:02.819 [2024-07-25 15:16:54.778447] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:02.819 [2024-07-25 15:16:54.778488] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:02.819 [2024-07-25 15:16:54.778496] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:02.819 [2024-07-25 15:16:54.778502] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:02.819 [2024-07-25 15:16:54.778507] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:02.819 [2024-07-25 15:16:54.778559] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:02.819 [2024-07-25 15:16:54.975970] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:03.080 [2024-07-25 15:16:55.014776] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:03.080 [2024-07-25 15:16:55.014994] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:03.342 15:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:03.342 15:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:03.342 15:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:03.342 15:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:03.342 15:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:03.342 15:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:03.342 15:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # bdevperf_pid=291093 00:21:03.342 15:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@275 -- # waitforlisten 291093 /var/tmp/bdevperf.sock 00:21:03.342 15:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 291093 ']' 00:21:03.342 15:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:03.342 15:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:03.342 15:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:21:03.342 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:03.342 15:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:21:03.342 15:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:03.342 15:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:03.342 15:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # echo '{ 00:21:03.342 "subsystems": [ 00:21:03.342 { 00:21:03.342 "subsystem": "keyring", 00:21:03.342 "config": [ 00:21:03.342 { 00:21:03.342 "method": "keyring_file_add_key", 00:21:03.342 "params": { 00:21:03.342 "name": "key0", 00:21:03.342 "path": "/tmp/tmp.DcLMvI30P1" 00:21:03.342 } 00:21:03.342 } 00:21:03.342 ] 00:21:03.342 }, 00:21:03.342 { 00:21:03.342 "subsystem": "iobuf", 00:21:03.342 "config": [ 00:21:03.342 { 00:21:03.342 "method": "iobuf_set_options", 00:21:03.342 "params": { 00:21:03.342 "small_pool_count": 8192, 00:21:03.342 "large_pool_count": 1024, 00:21:03.343 "small_bufsize": 8192, 00:21:03.343 "large_bufsize": 135168 00:21:03.343 } 00:21:03.343 } 00:21:03.343 ] 00:21:03.343 }, 00:21:03.343 { 00:21:03.343 "subsystem": "sock", 00:21:03.343 "config": [ 00:21:03.343 { 00:21:03.343 "method": "sock_set_default_impl", 00:21:03.343 "params": { 00:21:03.343 "impl_name": "posix" 00:21:03.343 } 00:21:03.343 }, 00:21:03.343 { 00:21:03.343 "method": "sock_impl_set_options", 00:21:03.343 "params": { 00:21:03.343 "impl_name": "ssl", 00:21:03.343 "recv_buf_size": 4096, 00:21:03.343 "send_buf_size": 4096, 00:21:03.343 "enable_recv_pipe": true, 00:21:03.343 "enable_quickack": false, 00:21:03.343 "enable_placement_id": 0, 00:21:03.343 "enable_zerocopy_send_server": true, 00:21:03.343 "enable_zerocopy_send_client": false, 00:21:03.343 
"zerocopy_threshold": 0, 00:21:03.343 "tls_version": 0, 00:21:03.343 "enable_ktls": false 00:21:03.343 } 00:21:03.343 }, 00:21:03.343 { 00:21:03.343 "method": "sock_impl_set_options", 00:21:03.343 "params": { 00:21:03.343 "impl_name": "posix", 00:21:03.343 "recv_buf_size": 2097152, 00:21:03.343 "send_buf_size": 2097152, 00:21:03.343 "enable_recv_pipe": true, 00:21:03.343 "enable_quickack": false, 00:21:03.343 "enable_placement_id": 0, 00:21:03.343 "enable_zerocopy_send_server": true, 00:21:03.343 "enable_zerocopy_send_client": false, 00:21:03.343 "zerocopy_threshold": 0, 00:21:03.343 "tls_version": 0, 00:21:03.343 "enable_ktls": false 00:21:03.343 } 00:21:03.343 } 00:21:03.343 ] 00:21:03.343 }, 00:21:03.343 { 00:21:03.343 "subsystem": "vmd", 00:21:03.343 "config": [] 00:21:03.343 }, 00:21:03.343 { 00:21:03.343 "subsystem": "accel", 00:21:03.343 "config": [ 00:21:03.343 { 00:21:03.343 "method": "accel_set_options", 00:21:03.343 "params": { 00:21:03.343 "small_cache_size": 128, 00:21:03.343 "large_cache_size": 16, 00:21:03.343 "task_count": 2048, 00:21:03.343 "sequence_count": 2048, 00:21:03.343 "buf_count": 2048 00:21:03.343 } 00:21:03.343 } 00:21:03.343 ] 00:21:03.343 }, 00:21:03.343 { 00:21:03.343 "subsystem": "bdev", 00:21:03.343 "config": [ 00:21:03.343 { 00:21:03.343 "method": "bdev_set_options", 00:21:03.343 "params": { 00:21:03.343 "bdev_io_pool_size": 65535, 00:21:03.343 "bdev_io_cache_size": 256, 00:21:03.343 "bdev_auto_examine": true, 00:21:03.343 "iobuf_small_cache_size": 128, 00:21:03.343 "iobuf_large_cache_size": 16 00:21:03.343 } 00:21:03.343 }, 00:21:03.343 { 00:21:03.343 "method": "bdev_raid_set_options", 00:21:03.343 "params": { 00:21:03.343 "process_window_size_kb": 1024, 00:21:03.343 "process_max_bandwidth_mb_sec": 0 00:21:03.343 } 00:21:03.343 }, 00:21:03.343 { 00:21:03.343 "method": "bdev_iscsi_set_options", 00:21:03.343 "params": { 00:21:03.343 "timeout_sec": 30 00:21:03.343 } 00:21:03.343 }, 00:21:03.343 { 00:21:03.343 "method": 
"bdev_nvme_set_options", 00:21:03.343 "params": { 00:21:03.343 "action_on_timeout": "none", 00:21:03.343 "timeout_us": 0, 00:21:03.343 "timeout_admin_us": 0, 00:21:03.343 "keep_alive_timeout_ms": 10000, 00:21:03.343 "arbitration_burst": 0, 00:21:03.343 "low_priority_weight": 0, 00:21:03.343 "medium_priority_weight": 0, 00:21:03.343 "high_priority_weight": 0, 00:21:03.343 "nvme_adminq_poll_period_us": 10000, 00:21:03.343 "nvme_ioq_poll_period_us": 0, 00:21:03.343 "io_queue_requests": 512, 00:21:03.343 "delay_cmd_submit": true, 00:21:03.343 "transport_retry_count": 4, 00:21:03.343 "bdev_retry_count": 3, 00:21:03.343 "transport_ack_timeout": 0, 00:21:03.343 "ctrlr_loss_timeout_sec": 0, 00:21:03.343 "reconnect_delay_sec": 0, 00:21:03.343 "fast_io_fail_timeout_sec": 0, 00:21:03.343 "disable_auto_failback": false, 00:21:03.343 "generate_uuids": false, 00:21:03.343 "transport_tos": 0, 00:21:03.343 "nvme_error_stat": false, 00:21:03.343 "rdma_srq_size": 0, 00:21:03.343 "io_path_stat": false, 00:21:03.343 "allow_accel_sequence": false, 00:21:03.343 "rdma_max_cq_size": 0, 00:21:03.343 "rdma_cm_event_timeout_ms": 0, 00:21:03.343 "dhchap_digests": [ 00:21:03.343 "sha256", 00:21:03.343 "sha384", 00:21:03.343 "sha512" 00:21:03.343 ], 00:21:03.343 "dhchap_dhgroups": [ 00:21:03.343 "null", 00:21:03.343 "ffdhe2048", 00:21:03.343 "ffdhe3072", 00:21:03.343 "ffdhe4096", 00:21:03.343 "ffdhe6144", 00:21:03.343 "ffdhe8192" 00:21:03.343 ] 00:21:03.343 } 00:21:03.343 }, 00:21:03.343 { 00:21:03.343 "method": "bdev_nvme_attach_controller", 00:21:03.343 "params": { 00:21:03.343 "name": "nvme0", 00:21:03.343 "trtype": "TCP", 00:21:03.343 "adrfam": "IPv4", 00:21:03.343 "traddr": "10.0.0.2", 00:21:03.343 "trsvcid": "4420", 00:21:03.343 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:03.343 "prchk_reftag": false, 00:21:03.343 "prchk_guard": false, 00:21:03.343 "ctrlr_loss_timeout_sec": 0, 00:21:03.343 "reconnect_delay_sec": 0, 00:21:03.343 "fast_io_fail_timeout_sec": 0, 00:21:03.343 "psk": "key0", 
00:21:03.343 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:03.343 "hdgst": false, 00:21:03.343 "ddgst": false 00:21:03.343 } 00:21:03.343 }, 00:21:03.343 { 00:21:03.343 "method": "bdev_nvme_set_hotplug", 00:21:03.343 "params": { 00:21:03.343 "period_us": 100000, 00:21:03.343 "enable": false 00:21:03.343 } 00:21:03.343 }, 00:21:03.343 { 00:21:03.343 "method": "bdev_enable_histogram", 00:21:03.343 "params": { 00:21:03.343 "name": "nvme0n1", 00:21:03.343 "enable": true 00:21:03.343 } 00:21:03.343 }, 00:21:03.343 { 00:21:03.343 "method": "bdev_wait_for_examine" 00:21:03.343 } 00:21:03.343 ] 00:21:03.343 }, 00:21:03.343 { 00:21:03.343 "subsystem": "nbd", 00:21:03.343 "config": [] 00:21:03.343 } 00:21:03.343 ] 00:21:03.343 }' 00:21:03.343 [2024-07-25 15:16:55.521157] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:21:03.343 [2024-07-25 15:16:55.521223] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid291093 ] 00:21:03.605 EAL: No free 2048 kB hugepages reported on node 1 00:21:03.605 [2024-07-25 15:16:55.599184] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:03.605 [2024-07-25 15:16:55.652789] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:03.605 [2024-07-25 15:16:55.786125] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:04.177 15:16:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:04.177 15:16:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:04.177 15:16:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:04.177 15:16:56 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # jq -r '.[].name' 00:21:04.439 15:16:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:04.439 15:16:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@278 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:04.439 Running I/O for 1 seconds... 00:21:05.830 00:21:05.830 Latency(us) 00:21:05.830 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:05.830 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:05.830 Verification LBA range: start 0x0 length 0x2000 00:21:05.830 nvme0n1 : 1.07 1511.30 5.90 0.00 0.00 82221.21 6144.00 142431.57 00:21:05.830 =================================================================================================================== 00:21:05.830 Total : 1511.30 5.90 0.00 0.00 82221.21 6144.00 142431.57 00:21:05.830 0 00:21:05.830 15:16:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # trap - SIGINT SIGTERM EXIT 00:21:05.830 15:16:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@281 -- # cleanup 00:21:05.830 15:16:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:21:05.830 15:16:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@808 -- # type=--id 00:21:05.830 15:16:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@809 -- # id=0 00:21:05.830 15:16:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:21:05.830 15:16:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:05.830 15:16:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:21:05.830 15:16:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # [[ -z 
nvmf_trace.0 ]] 00:21:05.830 15:16:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # for n in $shm_files 00:21:05.830 15:16:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:05.830 nvmf_trace.0 00:21:05.830 15:16:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # return 0 00:21:05.830 15:16:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 291093 00:21:05.830 15:16:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 291093 ']' 00:21:05.830 15:16:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 291093 00:21:05.830 15:16:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:05.830 15:16:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:05.830 15:16:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 291093 00:21:05.830 15:16:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:05.830 15:16:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:05.830 15:16:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 291093' 00:21:05.830 killing process with pid 291093 00:21:05.830 15:16:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 291093 00:21:05.830 Received shutdown signal, test time was about 1.000000 seconds 00:21:05.830 00:21:05.830 Latency(us) 00:21:05.830 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:05.830 
=================================================================================================================== 00:21:05.830 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:05.830 15:16:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 291093 00:21:05.830 15:16:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:21:05.830 15:16:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:05.830 15:16:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:21:05.830 15:16:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:05.830 15:16:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:21:05.830 15:16:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:05.830 15:16:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:05.830 rmmod nvme_tcp 00:21:05.830 rmmod nvme_fabrics 00:21:05.830 rmmod nvme_keyring 00:21:05.830 15:16:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:05.830 15:16:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:21:05.830 15:16:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:21:05.830 15:16:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 290809 ']' 00:21:05.830 15:16:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 290809 00:21:05.830 15:16:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 290809 ']' 00:21:05.830 15:16:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 290809 00:21:05.830 15:16:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:05.830 15:16:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' 
Linux = Linux ']' 00:21:05.830 15:16:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 290809 00:21:05.830 15:16:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:05.830 15:16:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:05.830 15:16:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 290809' 00:21:05.830 killing process with pid 290809 00:21:05.830 15:16:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 290809 00:21:05.830 15:16:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 290809 00:21:06.092 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:06.092 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:06.092 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:06.092 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:06.092 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:06.092 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:06.092 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:06.092 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:08.010 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:08.010 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.1xLjDCxZB2 /tmp/tmp.GNXv84ST0t /tmp/tmp.DcLMvI30P1 00:21:08.272 00:21:08.272 real 1m23.954s 
00:21:08.272 user 2m6.100s 00:21:08.272 sys 0m30.013s 00:21:08.272 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:08.272 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:08.272 ************************************ 00:21:08.272 END TEST nvmf_tls 00:21:08.272 ************************************ 00:21:08.272 15:17:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:21:08.272 15:17:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:08.272 15:17:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:08.272 15:17:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:08.272 ************************************ 00:21:08.272 START TEST nvmf_fips 00:21:08.272 ************************************ 00:21:08.272 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:21:08.272 * Looking for test storage... 
00:21:08.272 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:21:08.272 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:08.272 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:21:08.272 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:08.272 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:08.272 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:08.272 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:08.272 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:08.272 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:08.272 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:08.272 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:08.272 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:08.272 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:08.272 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:08.272 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:08.272 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:08.273 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 
00:21:08.273 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:08.273 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:08.273 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:08.273 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:08.273 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:08.273 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:08.273 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:08.273 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:08.273 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:08.273 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:21:08.273 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:08.273 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:21:08.273 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:08.273 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:08.273 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:08.273 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:08.273 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:08.273 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:08.273 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:08.273 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:08.273 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:08.273 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:21:08.273 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:21:08.273 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # 
openssl version 00:21:08.273 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:21:08.273 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:21:08.273 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:21:08.273 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:21:08.273 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:21:08.273 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:21:08.273 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:21:08.273 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:21:08.273 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:21:08.273 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:21:08.273 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:21:08.273 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:21:08.273 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:21:08.273 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:21:08.273 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:21:08.273 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:21:08.273 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:08.273 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:21:08.273 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:21:08.273 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:21:08.273 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:21:08.273 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:21:08.273 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:21:08.273 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:21:08.273 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:21:08.273 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:21:08.273 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:21:08.273 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:21:08.273 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:21:08.273 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:21:08.273 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:08.273 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:21:08.535 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:21:08.535 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:08.535 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:21:08.535 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:21:08.535 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:21:08.535 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:21:08.535 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:08.535 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:21:08.535 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:21:08.535 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:21:08.535 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:21:08.535 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:21:08.535 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:08.535 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:21:08.535 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:21:08.535 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:21:08.535 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:21:08.535 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:21:08.535 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:21:08.535 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:21:08.535 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:08.535 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:21:08.535 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:21:08.535 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:21:08.535 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:21:08.535 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:21:08.535 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:21:08.535 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:21:08.535 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:21:08.535 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:21:08.535 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:21:08.535 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:21:08.535 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:21:08.535 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@37 -- # cat 00:21:08.535 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:21:08.535 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:21:08.535 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:21:08.535 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:21:08.535 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:21:08.535 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:21:08.535 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:21:08.535 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:21:08.535 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:21:08.535 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:21:08.535 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:21:08.535 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:21:08.535 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:21:08.535 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@127 -- # : 00:21:08.535 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:21:08.535 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:08.535 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:21:08.535 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:08.535 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@644 -- # type -P openssl 00:21:08.535 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:08.535 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:21:08.535 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:21:08.535 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:21:08.535 Error setting digest 00:21:08.535 00A237970E7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:21:08.535 00A237970E7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:21:08.535 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:21:08.535 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:08.535 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:08.535 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:08.535 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:21:08.535 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:08.535 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:08.535 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:08.535 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:08.535 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:08.535 15:17:00 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:08.535 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:08.535 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:08.535 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:08.535 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:08.535 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:21:08.535 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:16.686 15:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:16.686 15:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:21:16.687 15:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:16.687 15:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:16.687 15:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:16.687 15:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:16.687 15:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:16.687 15:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:21:16.687 15:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:16.687 15:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:21:16.687 15:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:21:16.687 15:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@297 -- # x722=() 00:21:16.687 15:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:21:16.687 15:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:21:16.687 15:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:21:16.687 15:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:16.687 15:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:16.687 15:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:16.687 15:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:16.687 15:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:16.687 15:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:16.687 15:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:16.687 15:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:16.687 15:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:16.687 15:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:16.687 15:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:16.687 15:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:16.687 15:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:16.687 15:17:07 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:16.687 15:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:16.687 15:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:16.687 15:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:16.687 15:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:16.687 15:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:16.687 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:16.687 15:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:16.687 15:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:16.687 15:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:16.687 15:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:16.687 15:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:16.687 15:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:16.687 15:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:16.687 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:16.687 15:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:16.687 15:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:16.687 15:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:16.687 15:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:16.687 15:17:07 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:16.687 15:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:16.687 15:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:16.687 15:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:16.687 15:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:16.687 15:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:16.687 15:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:16.687 15:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:16.687 15:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:16.687 15:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:16.687 15:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:16.687 15:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:16.687 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:16.687 15:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:16.687 15:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:16.687 15:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:16.687 15:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:16.687 15:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:16.687 
15:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:16.687 15:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:16.687 15:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:16.687 15:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:16.687 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:16.687 15:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:16.687 15:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:16.687 15:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:21:16.687 15:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:16.687 15:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:16.687 15:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:16.687 15:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:16.687 15:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:16.687 15:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:16.687 15:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:16.687 15:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:16.687 15:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:16.687 15:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:16.687 15:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:16.687 15:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:16.687 15:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:16.687 15:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:16.687 15:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:16.687 15:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:16.687 15:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:16.687 15:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:16.687 15:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:16.687 15:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:16.687 15:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:16.687 15:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:16.687 15:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:16.687 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:16.687 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.738 ms 00:21:16.687 00:21:16.687 --- 10.0.0.2 ping statistics --- 00:21:16.687 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:16.687 rtt min/avg/max/mdev = 0.738/0.738/0.738/0.000 ms 00:21:16.687 15:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:16.687 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:16.687 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.517 ms 00:21:16.687 00:21:16.687 --- 10.0.0.1 ping statistics --- 00:21:16.687 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:16.687 rtt min/avg/max/mdev = 0.517/0.517/0.517/0.000 ms 00:21:16.687 15:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:16.687 15:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:21:16.687 15:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:16.687 15:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:16.687 15:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:16.687 15:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:16.688 15:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:16.688 15:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:16.688 15:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:16.688 15:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:21:16.688 15:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:16.688 15:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@724 -- # xtrace_disable 00:21:16.688 15:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:16.688 15:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=295772 00:21:16.688 15:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 295772 00:21:16.688 15:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:16.688 15:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 295772 ']' 00:21:16.688 15:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:16.688 15:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:16.688 15:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:16.688 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:16.688 15:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:16.688 15:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:16.688 [2024-07-25 15:17:07.955561] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:21:16.688 [2024-07-25 15:17:07.955633] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:16.688 EAL: No free 2048 kB hugepages reported on node 1 00:21:16.688 [2024-07-25 15:17:08.045315] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:16.688 [2024-07-25 15:17:08.137038] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:16.688 [2024-07-25 15:17:08.137103] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:16.688 [2024-07-25 15:17:08.137111] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:16.688 [2024-07-25 15:17:08.137118] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:16.688 [2024-07-25 15:17:08.137125] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:16.688 [2024-07-25 15:17:08.137150] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:16.688 15:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:16.688 15:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:21:16.688 15:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:16.688 15:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:16.688 15:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:16.688 15:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:16.688 15:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:21:16.688 15:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:21:16.688 15:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:16.688 15:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:21:16.688 15:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:16.688 15:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:16.688 15:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:16.688 15:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:16.950 [2024-07-25 15:17:08.925281] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:16.950 [2024-07-25 15:17:08.941283] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:16.950 [2024-07-25 15:17:08.941553] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:16.950 [2024-07-25 15:17:08.971324] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:16.950 malloc0 00:21:16.950 15:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:16.950 15:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=295888 00:21:16.950 15:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 295888 /var/tmp/bdevperf.sock 00:21:16.950 15:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:16.950 15:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 295888 ']' 00:21:16.950 15:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:16.950 15:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:16.950 15:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:16.950 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
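The key-file handling the trace performs just above (`fips.sh@138`/`fips.sh@139`) can be sketched in isolation: the NVMe/TCP TLS PSK interchange key is written verbatim to a file with no trailing newline, and the file is made private (mode 0600) before the RPC scripts consume it. This is a minimal sketch only; the temp-file path is an example, not the path the test uses, and `stat -c` assumes GNU coreutils.

```shell
#!/usr/bin/env bash
# Sketch of the PSK setup steps from the trace: write the configured
# NVMe TLS PSK to a file byte-for-byte, then restrict it to the owner.
key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'

key_path=$(mktemp)              # example location; the test uses fips/key.txt
printf '%s' "$key" > "$key_path"  # equivalent of 'echo -n': no trailing newline
chmod 0600 "$key_path"          # PSK files must not be group/world readable

stat -c '%a' "$key_path"        # expect: 600
```

A key file with looser permissions is typically rejected by TLS tooling, which is why the trace runs `chmod 0600` before passing `--psk` to `bdev_nvme_attach_controller`.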
00:21:16.950 15:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:16.950 15:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:16.950 [2024-07-25 15:17:09.065323] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:21:16.950 [2024-07-25 15:17:09.065401] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid295888 ] 00:21:16.950 EAL: No free 2048 kB hugepages reported on node 1 00:21:16.950 [2024-07-25 15:17:09.123715] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:17.212 [2024-07-25 15:17:09.189847] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:17.785 15:17:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:17.785 15:17:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:21:17.785 15:17:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:18.046 [2024-07-25 15:17:09.978140] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:18.046 [2024-07-25 15:17:09.978212] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:18.046 TLSTESTn1 00:21:18.046 15:17:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@154 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:18.046 Running I/O for 10 seconds... 00:21:30.284 00:21:30.284 Latency(us) 00:21:30.284 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:30.284 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:30.284 Verification LBA range: start 0x0 length 0x2000 00:21:30.284 TLSTESTn1 : 10.07 2000.66 7.82 0.00 0.00 63764.79 6116.69 148548.27 00:21:30.284 =================================================================================================================== 00:21:30.284 Total : 2000.66 7.82 0.00 0.00 63764.79 6116.69 148548.27 00:21:30.284 0 00:21:30.284 15:17:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:21:30.284 15:17:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:21:30.284 15:17:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@808 -- # type=--id 00:21:30.284 15:17:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@809 -- # id=0 00:21:30.284 15:17:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:21:30.284 15:17:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:30.284 15:17:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:21:30.284 15:17:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:21:30.284 15:17:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # for n in $shm_files 00:21:30.284 15:17:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:30.284 nvmf_trace.0 
00:21:30.284 15:17:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # return 0 00:21:30.284 15:17:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 295888 00:21:30.284 15:17:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 295888 ']' 00:21:30.284 15:17:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 295888 00:21:30.284 15:17:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:21:30.284 15:17:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:30.284 15:17:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 295888 00:21:30.284 15:17:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:21:30.284 15:17:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:21:30.284 15:17:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 295888' 00:21:30.284 killing process with pid 295888 00:21:30.284 15:17:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 295888 00:21:30.284 Received shutdown signal, test time was about 10.000000 seconds 00:21:30.284 00:21:30.284 Latency(us) 00:21:30.284 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:30.284 =================================================================================================================== 00:21:30.284 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:30.284 [2024-07-25 15:17:20.450137] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:30.284 15:17:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 295888 
00:21:30.284 15:17:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:21:30.284 15:17:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:30.284 15:17:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:21:30.284 15:17:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:30.284 15:17:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:21:30.284 15:17:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:30.284 15:17:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:30.284 rmmod nvme_tcp 00:21:30.284 rmmod nvme_fabrics 00:21:30.284 rmmod nvme_keyring 00:21:30.284 15:17:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:30.284 15:17:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:21:30.284 15:17:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:21:30.284 15:17:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 295772 ']' 00:21:30.284 15:17:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 295772 00:21:30.284 15:17:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 295772 ']' 00:21:30.284 15:17:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 295772 00:21:30.284 15:17:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:21:30.284 15:17:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:30.284 15:17:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 295772 00:21:30.284 15:17:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_1 
00:21:30.284 15:17:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:30.284 15:17:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 295772' 00:21:30.284 killing process with pid 295772 00:21:30.284 15:17:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 295772 00:21:30.284 [2024-07-25 15:17:20.677896] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:30.284 15:17:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 295772 00:21:30.284 15:17:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:30.284 15:17:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:30.284 15:17:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:30.284 15:17:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:30.284 15:17:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:30.284 15:17:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:30.284 15:17:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:30.285 15:17:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:30.859 15:17:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:30.859 15:17:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:30.859 00:21:30.859 real 0m22.583s 00:21:30.859 user 0m22.819s 00:21:30.859 sys 0m10.478s 00:21:30.859 15:17:22 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:30.859 15:17:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:30.859 ************************************ 00:21:30.859 END TEST nvmf_fips 00:21:30.859 ************************************ 00:21:30.859 15:17:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@45 -- # '[' 0 -eq 1 ']' 00:21:30.859 15:17:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@51 -- # [[ phy == phy ]] 00:21:30.859 15:17:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@52 -- # '[' tcp = tcp ']' 00:21:30.859 15:17:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # gather_supported_nvmf_pci_devs 00:21:30.859 15:17:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@285 -- # xtrace_disable 00:21:30.859 15:17:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:39.003 15:17:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:39.003 15:17:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@291 -- # pci_devs=() 00:21:39.003 15:17:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:39.003 15:17:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:39.003 15:17:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:39.003 15:17:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:39.003 15:17:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:39.003 15:17:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@295 -- # net_devs=() 00:21:39.003 15:17:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:39.003 15:17:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@296 -- # e810=() 00:21:39.003 15:17:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@296 -- # local -ga e810 00:21:39.003 15:17:29 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@297 -- # x722=() 00:21:39.003 15:17:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@297 -- # local -ga x722 00:21:39.003 15:17:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@298 -- # mlx=() 00:21:39.003 15:17:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@298 -- # local -ga mlx 00:21:39.003 15:17:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:39.003 15:17:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:39.003 15:17:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:39.003 15:17:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:39.003 15:17:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:39.003 15:17:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:39.003 15:17:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:39.003 15:17:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:39.003 15:17:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:39.003 15:17:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:39.003 15:17:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:39.003 15:17:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:39.003 15:17:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:39.003 15:17:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:39.003 15:17:29 nvmf_tcp.nvmf_target_extra -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:39.003 15:17:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:39.003 15:17:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:39.003 15:17:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:39.003 15:17:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:39.003 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:39.003 15:17:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:39.003 15:17:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:39.003 15:17:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:39.003 15:17:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:39.003 15:17:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:39.003 15:17:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:39.003 15:17:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:39.003 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:39.003 15:17:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:39.003 15:17:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:39.003 15:17:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:39.003 15:17:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:39.003 15:17:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:39.003 15:17:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:39.003 15:17:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:39.003 15:17:29 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:39.003 15:17:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:39.003 15:17:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:39.003 15:17:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:39.003 15:17:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:39.003 15:17:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:39.003 15:17:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:39.003 15:17:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:39.003 15:17:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:39.003 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:39.003 15:17:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:39.003 15:17:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:39.003 15:17:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:39.003 15:17:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:39.003 15:17:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:39.003 15:17:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:39.003 15:17:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:39.003 15:17:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:39.003 15:17:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:39.003 Found net devices under 
0000:4b:00.1: cvl_0_1 00:21:39.003 15:17:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:39.003 15:17:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:39.003 15:17:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:39.003 15:17:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # (( 2 > 0 )) 00:21:39.004 15:17:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:21:39.004 15:17:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:39.004 15:17:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:39.004 15:17:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:39.004 ************************************ 00:21:39.004 START TEST nvmf_perf_adq 00:21:39.004 ************************************ 00:21:39.004 15:17:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:21:39.004 * Looking for test storage... 
00:21:39.004 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:39.004 15:17:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:39.004 15:17:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:21:39.004 15:17:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:39.004 15:17:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:39.004 15:17:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:39.004 15:17:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:39.004 15:17:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:39.004 15:17:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:39.004 15:17:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:39.004 15:17:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:39.004 15:17:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:39.004 15:17:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:39.004 15:17:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:39.004 15:17:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:39.004 15:17:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:39.004 15:17:29 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:39.004 15:17:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:39.004 15:17:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:39.004 15:17:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:39.004 15:17:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:39.004 15:17:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:39.004 15:17:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:39.004 15:17:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:39.004 15:17:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:39.004 15:17:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:39.004 15:17:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:21:39.004 15:17:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:39.004 15:17:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:21:39.004 15:17:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:39.004 15:17:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:39.004 15:17:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:39.004 15:17:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:39.004 15:17:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:39.004 15:17:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:39.004 15:17:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:39.004 15:17:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:39.004 15:17:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:21:39.004 15:17:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:21:39.004 15:17:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:45.662 15:17:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:45.662 15:17:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:21:45.662 15:17:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:45.662 15:17:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:45.662 15:17:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:45.662 15:17:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:45.662 15:17:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:45.662 15:17:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:21:45.662 15:17:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:45.662 15:17:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:21:45.662 15:17:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:21:45.662 15:17:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:21:45.662 15:17:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:21:45.662 15:17:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:21:45.662 15:17:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:21:45.662 15:17:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:45.662 15:17:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:45.662 15:17:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:45.662 15:17:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:45.662 15:17:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:45.662 15:17:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:45.662 15:17:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:45.662 15:17:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:45.662 15:17:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:45.662 15:17:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:45.662 15:17:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:45.662 15:17:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:45.662 15:17:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:45.662 15:17:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:45.662 15:17:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:45.662 15:17:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:45.662 15:17:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:45.662 15:17:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:45.662 15:17:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:45.662 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:45.662 15:17:36 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:45.662 15:17:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:45.662 15:17:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:45.662 15:17:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:45.662 15:17:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:45.662 15:17:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:45.662 15:17:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:45.662 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:45.662 15:17:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:45.662 15:17:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:45.662 15:17:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:45.662 15:17:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:45.663 15:17:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:45.663 15:17:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:45.663 15:17:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:45.663 15:17:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:45.663 15:17:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:45.663 15:17:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:21:45.663 15:17:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:45.663 15:17:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:45.663 15:17:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:45.663 15:17:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:45.663 15:17:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:45.663 15:17:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:45.663 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:45.663 15:17:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:45.663 15:17:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:45.663 15:17:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:45.663 15:17:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:45.663 15:17:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:45.663 15:17:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:45.663 15:17:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:45.663 15:17:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:45.663 15:17:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:45.663 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:45.663 15:17:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:45.663 15:17:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:45.663 15:17:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:45.663 15:17:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:21:45.663 15:17:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:21:45.663 15:17:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:21:45.663 15:17:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:21:46.235 15:17:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:21:48.150 15:17:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:21:53.445 15:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:21:53.445 15:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:53.445 15:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:53.445 15:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:53.445 15:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:53.445 15:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:53.445 15:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:53.445 15:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:53.445 15:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
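The `adq_reload_driver` step traced above (perf_adq.sh@53-55) unloads and reloads the `ice` driver before any ADQ configuration. A define-only shell sketch of that sequence (the function is not invoked here, since `rmmod`/`modprobe` require root and an Intel E810 NIC bound to `ice`):

```shell
# Define-only sketch of the traced adq_reload_driver sequence; actually
# running it needs root and E810 hardware using the ice driver.
adq_reload_driver_sketch() {
    rmmod ice      # unload the driver to clear queue/channel state
    modprobe ice   # reload it fresh
    sleep 5        # the trace waits 5s for the interfaces to reappear
}
```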
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:53.445 15:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:53.445 15:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:53.445 15:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:21:53.445 15:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:53.445 15:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:53.445 15:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:21:53.445 15:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:53.445 15:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:53.445 15:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:53.445 15:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:53.445 15:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:53.445 15:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:21:53.445 15:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:53.445 15:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:21:53.445 15:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:21:53.445 15:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:21:53.445 15:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:21:53.445 15:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:21:53.445 
15:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:21:53.445 15:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:53.445 15:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:53.445 15:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:53.445 15:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:53.445 15:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:53.445 15:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:53.445 15:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:53.445 15:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:53.445 15:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:53.445 15:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:53.445 15:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:53.445 15:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:53.445 15:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:53.445 15:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:53.445 15:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 
00:21:53.445 15:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:53.445 15:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:53.445 15:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:53.445 15:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:53.445 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:53.445 15:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:53.445 15:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:53.445 15:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:53.445 15:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:53.445 15:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:53.445 15:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:53.445 15:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:53.445 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:53.445 15:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:53.445 15:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:53.445 15:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:53.445 15:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:53.445 15:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:53.445 15:17:45 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:53.445 15:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:53.445 15:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:53.445 15:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:53.445 15:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:53.445 15:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:53.445 15:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:53.445 15:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:53.445 15:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:53.445 15:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:53.446 15:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:53.446 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:53.446 15:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:53.446 15:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:53.446 15:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:53.446 15:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:53.446 15:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:53.446 15:17:45 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:53.446 15:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:53.446 15:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:53.446 15:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:53.446 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:53.446 15:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:53.446 15:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:53.446 15:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:21:53.446 15:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:53.446 15:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:53.446 15:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:53.446 15:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:53.446 15:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:53.446 15:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:53.446 15:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:53.446 15:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:53.446 15:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:53.446 15:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:53.446 
15:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:53.446 15:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:53.446 15:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:53.446 15:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:53.446 15:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:53.446 15:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:53.446 15:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:53.446 15:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:53.446 15:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:53.446 15:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:53.446 15:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:53.446 15:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:53.446 15:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:53.446 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:53.446 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.634 ms 00:21:53.446 00:21:53.446 --- 10.0.0.2 ping statistics --- 00:21:53.446 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:53.446 rtt min/avg/max/mdev = 0.634/0.634/0.634/0.000 ms 00:21:53.446 15:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:53.446 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:53.446 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.433 ms 00:21:53.446 00:21:53.446 --- 10.0.0.1 ping statistics --- 00:21:53.446 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:53.446 rtt min/avg/max/mdev = 0.433/0.433/0.433/0.000 ms 00:21:53.446 15:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:53.446 15:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:21:53.446 15:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:53.446 15:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:53.446 15:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:53.446 15:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:53.446 15:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:53.446 15:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:53.446 15:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:53.446 15:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:21:53.446 15:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter 
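The `nvmf_tcp_init` trace above (common.sh@229-268) splits the two E810 ports across a network namespace boundary and verifies connectivity with `ping` in both directions. A define-only sketch of those steps; the interface names `cvl_0_0`/`cvl_0_1` and the 10.0.0.0/24 addresses are the values from this run, and executing the function requires root:

```shell
# Define-only sketch of the traced nvmf_tcp_init steps: the target port is
# moved into its own namespace so target and initiator traffic really
# traverses the NIC instead of the loopback path.
nvmf_tcp_init_sketch() {
    ip -4 addr flush cvl_0_0                     # start from clean addresses
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk                 # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk    # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                           # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
}
```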
start_nvmf_tgt 00:21:53.446 15:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:53.446 15:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:53.446 15:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=307769 00:21:53.446 15:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 307769 00:21:53.446 15:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:21:53.446 15:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 307769 ']' 00:21:53.446 15:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:53.446 15:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:53.446 15:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:53.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:53.446 15:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:53.446 15:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:53.708 [2024-07-25 15:17:45.675287] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:21:53.708 [2024-07-25 15:17:45.675335] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:53.708 EAL: No free 2048 kB hugepages reported on node 1 00:21:53.708 [2024-07-25 15:17:45.741033] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:53.708 [2024-07-25 15:17:45.807012] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:53.708 [2024-07-25 15:17:45.807050] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:53.708 [2024-07-25 15:17:45.807059] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:53.708 [2024-07-25 15:17:45.807065] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:53.708 [2024-07-25 15:17:45.807071] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:53.708 [2024-07-25 15:17:45.807270] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:53.708 [2024-07-25 15:17:45.807491] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:53.708 [2024-07-25 15:17:45.807490] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:53.708 [2024-07-25 15:17:45.807323] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:54.281 15:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:54.281 15:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:21:54.281 15:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:54.281 15:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:54.281 15:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:54.542 15:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:54.542 15:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:21:54.542 15:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:21:54.542 15:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:21:54.542 15:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.542 15:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:54.542 15:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.542 15:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:21:54.542 15:17:46 
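The `nvmfappstart` trace above launches `nvmf_tgt` inside the target namespace with `--wait-for-rpc` and then blocks in `waitforlisten` until the RPC socket is up (pid 307769, four reactors for `-m 0xF`). A define-only sketch; the relative binary/script paths are assumptions for a local SPDK checkout, and the polling loop is a simplified stand-in for the repo's `waitforlisten` helper:

```shell
# Define-only sketch of the traced nvmfappstart step. --wait-for-rpc makes
# the app pause before framework init so sock options can be set first.
nvmfappstart_sketch() {
    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
    nvmfpid=$!
    # Simplified waitforlisten: poll until /var/tmp/spdk.sock answers RPCs.
    while ! ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods \
            > /dev/null 2>&1; do
        sleep 0.5
    done
}
```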
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:21:54.543 15:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.543 15:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:54.543 15:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.543 15:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:21:54.543 15:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.543 15:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:54.543 15:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.543 15:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:21:54.543 15:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.543 15:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:54.543 [2024-07-25 15:17:46.627580] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:54.543 15:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.543 15:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:54.543 15:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.543 15:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:54.543 Malloc1 00:21:54.543 15:17:46 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.543 15:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:54.543 15:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.543 15:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:54.543 15:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.543 15:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:54.543 15:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.543 15:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:54.543 15:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.543 15:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:54.543 15:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.543 15:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:54.543 [2024-07-25 15:17:46.686962] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:54.543 15:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.543 15:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=307995 00:21:54.543 15:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:21:54.543 15:17:46 
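The `adq_configure_nvmf_target` trace above (perf_adq.sh@42-49) drives the paused target over RPC: query the default sock impl (posix here), set placement-id/zerocopy options, finish framework init, then build the TCP transport, a Malloc bdev, and the subsystem with its listener. A define-only sketch of the same call sequence via `scripts/rpc.py` (which `rpc_cmd` wraps in these tests); the `rpc` path is an assumption for a local checkout, the placement id is the `0` passed in this run:

```shell
# Define-only sketch of the traced adq_configure_nvmf_target RPC sequence.
adq_configure_nvmf_target_sketch() {
    local rpc=./scripts/rpc.py
    local impl
    impl=$("$rpc" sock_get_default_impl | jq -r .impl_name)   # "posix" here
    "$rpc" sock_impl_set_options --enable-placement-id 0 \
        --enable-zerocopy-send-server -i "$impl"
    "$rpc" framework_start_init
    "$rpc" nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0
    "$rpc" bdev_malloc_create 64 512 -b Malloc1
    "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420
}
```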
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:21:54.543 EAL: No free 2048 kB hugepages reported on node 1 00:21:57.091 15:17:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:21:57.091 15:17:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.091 15:17:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:57.091 15:17:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.091 15:17:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:21:57.091 "tick_rate": 2400000000, 00:21:57.091 "poll_groups": [ 00:21:57.091 { 00:21:57.091 "name": "nvmf_tgt_poll_group_000", 00:21:57.091 "admin_qpairs": 1, 00:21:57.091 "io_qpairs": 1, 00:21:57.091 "current_admin_qpairs": 1, 00:21:57.091 "current_io_qpairs": 1, 00:21:57.091 "pending_bdev_io": 0, 00:21:57.091 "completed_nvme_io": 19724, 00:21:57.091 "transports": [ 00:21:57.091 { 00:21:57.091 "trtype": "TCP" 00:21:57.091 } 00:21:57.091 ] 00:21:57.091 }, 00:21:57.091 { 00:21:57.091 "name": "nvmf_tgt_poll_group_001", 00:21:57.091 "admin_qpairs": 0, 00:21:57.091 "io_qpairs": 1, 00:21:57.091 "current_admin_qpairs": 0, 00:21:57.091 "current_io_qpairs": 1, 00:21:57.091 "pending_bdev_io": 0, 00:21:57.091 "completed_nvme_io": 29441, 00:21:57.091 "transports": [ 00:21:57.091 { 00:21:57.091 "trtype": "TCP" 00:21:57.091 } 00:21:57.091 ] 00:21:57.091 }, 00:21:57.091 { 00:21:57.091 "name": "nvmf_tgt_poll_group_002", 00:21:57.091 "admin_qpairs": 0, 00:21:57.091 "io_qpairs": 1, 00:21:57.091 "current_admin_qpairs": 0, 00:21:57.091 "current_io_qpairs": 1, 00:21:57.091 "pending_bdev_io": 0, 
00:21:57.091 "completed_nvme_io": 20042, 00:21:57.091 "transports": [ 00:21:57.091 { 00:21:57.091 "trtype": "TCP" 00:21:57.091 } 00:21:57.091 ] 00:21:57.091 }, 00:21:57.091 { 00:21:57.091 "name": "nvmf_tgt_poll_group_003", 00:21:57.091 "admin_qpairs": 0, 00:21:57.091 "io_qpairs": 1, 00:21:57.091 "current_admin_qpairs": 0, 00:21:57.091 "current_io_qpairs": 1, 00:21:57.091 "pending_bdev_io": 0, 00:21:57.091 "completed_nvme_io": 19867, 00:21:57.091 "transports": [ 00:21:57.091 { 00:21:57.091 "trtype": "TCP" 00:21:57.091 } 00:21:57.091 ] 00:21:57.091 } 00:21:57.091 ] 00:21:57.091 }' 00:21:57.091 15:17:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:21:57.091 15:17:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:21:57.091 15:17:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:21:57.091 15:17:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:21:57.091 15:17:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 307995 00:22:05.236 Initializing NVMe Controllers 00:22:05.236 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:05.236 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:22:05.236 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:22:05.236 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:22:05.236 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:22:05.236 Initialization complete. Launching workers. 
00:22:05.236 ======================================================== 00:22:05.236 Latency(us) 00:22:05.236 Device Information : IOPS MiB/s Average min max 00:22:05.236 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 11220.90 43.83 5704.92 1787.61 9332.74 00:22:05.236 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 14848.00 58.00 4310.33 1433.47 45735.54 00:22:05.236 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 13999.20 54.68 4571.36 1196.04 11460.92 00:22:05.236 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 13180.20 51.49 4855.58 1269.69 11328.67 00:22:05.236 ======================================================== 00:22:05.236 Total : 53248.29 208.00 4807.80 1196.04 45735.54 00:22:05.236 00:22:05.236 15:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:22:05.236 15:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:05.236 15:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:22:05.236 15:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:05.236 15:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:22:05.236 15:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:05.236 15:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:05.236 rmmod nvme_tcp 00:22:05.236 rmmod nvme_fabrics 00:22:05.236 rmmod nvme_keyring 00:22:05.236 15:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:05.236 15:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:22:05.236 15:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:22:05.236 15:17:56 
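The Total row in the `spdk_nvme_perf` table above aggregates the four per-core rows; a quick `awk` check of the IOPS column (values copied verbatim from this run) reproduces the reported 53248.29 up to per-row rounding:

```shell
# Sum the per-core IOPS (cores 4-7) from the latency table above. The table
# reports Total 53248.29; summing the already-rounded rows gives 53248.30.
total=$(awk 'BEGIN { printf "%.2f", 11220.90 + 14848.00 + 13999.20 + 13180.20 }')
echo "$total"   # 53248.30
```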
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 307769 ']' 00:22:05.236 15:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 307769 00:22:05.236 15:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 307769 ']' 00:22:05.236 15:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 307769 00:22:05.236 15:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:22:05.236 15:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:05.236 15:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 307769 00:22:05.236 15:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:05.236 15:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:05.236 15:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 307769' 00:22:05.236 killing process with pid 307769 00:22:05.236 15:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 307769 00:22:05.236 15:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 307769 00:22:05.236 15:17:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:05.236 15:17:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:05.236 15:17:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:05.237 15:17:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:05.237 15:17:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 
-- # remove_spdk_ns 00:22:05.237 15:17:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:05.237 15:17:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:05.237 15:17:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:07.152 15:17:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:07.152 15:17:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:22:07.152 15:17:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:22:09.070 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:22:10.985 15:18:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:22:16.285 15:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:22:16.285 15:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:16.285 15:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:16.285 15:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:16.285 15:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:16.285 15:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:16.285 15:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:16.285 15:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:16.285 15:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:16.285 
15:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:16.285 15:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:16.285 15:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:22:16.285 15:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:16.285 15:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:16.285 15:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:22:16.285 15:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:16.285 15:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:16.285 15:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:16.285 15:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:16.285 15:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:16.285 15:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:22:16.285 15:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:16.285 15:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:22:16.285 15:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:22:16.285 15:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:22:16.285 15:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:22:16.285 15:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:22:16.285 15:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@298 -- # local -ga mlx 00:22:16.285 15:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:16.285 15:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:16.285 15:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:16.285 15:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:16.285 15:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:16.285 15:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:16.285 15:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:16.285 15:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:16.285 15:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:16.285 15:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:16.285 15:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:16.285 15:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:16.285 15:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:16.285 15:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:16.285 15:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:16.285 15:18:07 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:16.285 15:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:16.285 15:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:16.285 15:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:16.285 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:16.285 15:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:16.285 15:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:16.285 15:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:16.285 15:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:16.285 15:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:16.285 15:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:16.285 15:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:16.285 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:16.285 15:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:16.285 15:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:16.285 15:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:16.285 15:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:16.285 15:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:16.285 15:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
-- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:16.285 15:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:16.285 15:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:16.285 15:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:16.285 15:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:16.285 15:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:16.285 15:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:16.285 15:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:16.285 15:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:16.285 15:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:16.285 15:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:16.285 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:16.285 15:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:16.285 15:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:16.285 15:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:16.285 15:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:16.285 15:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:16.285 15:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == 
up ]] 00:22:16.285 15:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:16.285 15:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:16.285 15:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:16.285 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:16.285 15:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:16.285 15:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:16.285 15:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:22:16.285 15:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:16.285 15:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:16.285 15:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:16.285 15:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:16.285 15:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:16.285 15:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:16.285 15:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:16.285 15:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:16.285 15:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:16.285 15:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:16.285 15:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@242 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:16.285 15:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:16.285 15:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:16.285 15:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:16.285 15:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:16.285 15:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:16.285 15:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:16.286 15:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:16.286 15:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:16.286 15:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:16.286 15:18:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:16.286 15:18:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:16.286 15:18:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:16.286 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:16.286 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.653 ms 00:22:16.286 00:22:16.286 --- 10.0.0.2 ping statistics --- 00:22:16.286 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:16.286 rtt min/avg/max/mdev = 0.653/0.653/0.653/0.000 ms 00:22:16.286 15:18:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:16.286 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:16.286 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.453 ms 00:22:16.286 00:22:16.286 --- 10.0.0.1 ping statistics --- 00:22:16.286 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:16.286 rtt min/avg/max/mdev = 0.453/0.453/0.453/0.000 ms 00:22:16.286 15:18:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:16.286 15:18:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:22:16.286 15:18:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:16.286 15:18:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:16.286 15:18:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:16.286 15:18:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:16.286 15:18:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:16.286 15:18:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:16.286 15:18:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:16.286 15:18:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:22:16.286 15:18:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk 
ethtool --offload cvl_0_0 hw-tc-offload on 00:22:16.286 15:18:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:22:16.286 15:18:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:22:16.286 net.core.busy_poll = 1 00:22:16.286 15:18:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:22:16.286 net.core.busy_read = 1 00:22:16.286 15:18:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:22:16.286 15:18:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:22:16.286 15:18:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:22:16.286 15:18:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:22:16.286 15:18:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:22:16.286 15:18:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:22:16.286 15:18:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:16.286 15:18:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:16.286 15:18:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 
00:22:16.286 15:18:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=312690 00:22:16.286 15:18:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 312690 00:22:16.286 15:18:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:22:16.286 15:18:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 312690 ']' 00:22:16.286 15:18:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:16.286 15:18:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:16.286 15:18:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:16.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:16.286 15:18:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:16.286 15:18:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:16.286 [2024-07-25 15:18:08.451322] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:22:16.286 [2024-07-25 15:18:08.451373] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:16.556 EAL: No free 2048 kB hugepages reported on node 1 00:22:16.556 [2024-07-25 15:18:08.519661] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:16.556 [2024-07-25 15:18:08.584978] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:16.556 [2024-07-25 15:18:08.585016] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:16.556 [2024-07-25 15:18:08.585023] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:16.556 [2024-07-25 15:18:08.585030] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:16.556 [2024-07-25 15:18:08.585036] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:16.556 [2024-07-25 15:18:08.588222] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:16.556 [2024-07-25 15:18:08.588280] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:16.556 [2024-07-25 15:18:08.588551] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:16.556 [2024-07-25 15:18:08.588552] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:16.556 15:18:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:16.556 15:18:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:22:16.556 15:18:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:16.556 15:18:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:16.556 15:18:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:16.556 15:18:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:16.556 15:18:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:22:16.556 15:18:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:22:16.556 15:18:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:22:16.556 15:18:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:16.556 15:18:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:16.557 15:18:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:16.557 15:18:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:22:16.557 15:18:08 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:22:16.557 15:18:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:16.557 15:18:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:16.557 15:18:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:16.557 15:18:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:22:16.557 15:18:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:16.557 15:18:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:16.818 15:18:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:16.818 15:18:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:22:16.818 15:18:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:16.818 15:18:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:16.818 [2024-07-25 15:18:08.800572] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:16.818 15:18:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:16.818 15:18:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:16.818 15:18:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:16.818 15:18:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:16.818 Malloc1 00:22:16.818 15:18:08 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:16.818 15:18:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:16.818 15:18:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:16.818 15:18:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:16.818 15:18:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:16.818 15:18:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:16.818 15:18:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:16.818 15:18:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:16.818 15:18:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:16.818 15:18:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:16.818 15:18:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:16.818 15:18:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:16.818 [2024-07-25 15:18:08.859873] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:16.818 15:18:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:16.818 15:18:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=312734 00:22:16.818 15:18:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:22:16.818 15:18:08 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:22:16.818 EAL: No free 2048 kB hugepages reported on node 1 00:22:18.731 15:18:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:22:18.731 15:18:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.731 15:18:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:18.731 15:18:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.731 15:18:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:22:18.731 "tick_rate": 2400000000, 00:22:18.731 "poll_groups": [ 00:22:18.731 { 00:22:18.731 "name": "nvmf_tgt_poll_group_000", 00:22:18.731 "admin_qpairs": 1, 00:22:18.731 "io_qpairs": 3, 00:22:18.731 "current_admin_qpairs": 1, 00:22:18.731 "current_io_qpairs": 3, 00:22:18.731 "pending_bdev_io": 0, 00:22:18.731 "completed_nvme_io": 29970, 00:22:18.731 "transports": [ 00:22:18.731 { 00:22:18.731 "trtype": "TCP" 00:22:18.731 } 00:22:18.731 ] 00:22:18.731 }, 00:22:18.731 { 00:22:18.731 "name": "nvmf_tgt_poll_group_001", 00:22:18.731 "admin_qpairs": 0, 00:22:18.731 "io_qpairs": 1, 00:22:18.731 "current_admin_qpairs": 0, 00:22:18.731 "current_io_qpairs": 1, 00:22:18.731 "pending_bdev_io": 0, 00:22:18.731 "completed_nvme_io": 35852, 00:22:18.731 "transports": [ 00:22:18.731 { 00:22:18.731 "trtype": "TCP" 00:22:18.731 } 00:22:18.731 ] 00:22:18.731 }, 00:22:18.731 { 00:22:18.731 "name": "nvmf_tgt_poll_group_002", 00:22:18.731 "admin_qpairs": 0, 00:22:18.731 "io_qpairs": 0, 00:22:18.731 "current_admin_qpairs": 0, 00:22:18.731 "current_io_qpairs": 0, 00:22:18.731 "pending_bdev_io": 0, 
00:22:18.731 "completed_nvme_io": 0, 00:22:18.731 "transports": [ 00:22:18.731 { 00:22:18.731 "trtype": "TCP" 00:22:18.731 } 00:22:18.731 ] 00:22:18.731 }, 00:22:18.731 { 00:22:18.731 "name": "nvmf_tgt_poll_group_003", 00:22:18.731 "admin_qpairs": 0, 00:22:18.731 "io_qpairs": 0, 00:22:18.731 "current_admin_qpairs": 0, 00:22:18.731 "current_io_qpairs": 0, 00:22:18.731 "pending_bdev_io": 0, 00:22:18.731 "completed_nvme_io": 0, 00:22:18.731 "transports": [ 00:22:18.731 { 00:22:18.731 "trtype": "TCP" 00:22:18.731 } 00:22:18.731 ] 00:22:18.731 } 00:22:18.731 ] 00:22:18.731 }' 00:22:18.731 15:18:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:22:18.731 15:18:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:22:18.992 15:18:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:22:18.992 15:18:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:22:18.992 15:18:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 312734 00:22:27.136 Initializing NVMe Controllers 00:22:27.136 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:27.136 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:22:27.136 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:22:27.136 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:22:27.136 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:22:27.136 Initialization complete. Launching workers. 
00:22:27.136 ======================================================== 00:22:27.136 Latency(us) 00:22:27.137 Device Information : IOPS MiB/s Average min max 00:22:27.137 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 17138.68 66.95 3745.57 1445.11 44966.67 00:22:27.137 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 8742.79 34.15 7343.82 1030.17 56805.15 00:22:27.137 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 6095.99 23.81 10500.93 1478.25 60628.88 00:22:27.137 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 5827.19 22.76 10983.51 1580.31 55274.99 00:22:27.137 ======================================================== 00:22:27.137 Total : 37804.66 147.67 6782.66 1030.17 60628.88 00:22:27.137 00:22:27.137 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:22:27.137 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:27.137 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:22:27.137 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:27.137 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:22:27.137 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:27.137 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:27.137 rmmod nvme_tcp 00:22:27.137 rmmod nvme_fabrics 00:22:27.137 rmmod nvme_keyring 00:22:27.137 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:27.137 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:22:27.137 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:22:27.137 15:18:19 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 312690 ']' 00:22:27.137 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 312690 00:22:27.137 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 312690 ']' 00:22:27.137 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 312690 00:22:27.137 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:22:27.137 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:27.137 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 312690 00:22:27.137 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:27.137 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:27.137 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 312690' 00:22:27.137 killing process with pid 312690 00:22:27.137 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 312690 00:22:27.137 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 312690 00:22:27.399 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:27.399 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:27.399 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:27.399 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:27.399 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 
-- # remove_spdk_ns 00:22:27.399 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:27.399 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:27.399 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:30.706 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:30.706 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:22:30.706 00:22:30.706 real 0m52.691s 00:22:30.706 user 2m47.086s 00:22:30.706 sys 0m10.625s 00:22:30.706 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:30.706 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:30.706 ************************************ 00:22:30.706 END TEST nvmf_perf_adq 00:22:30.706 ************************************ 00:22:30.706 15:18:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@63 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:22:30.706 15:18:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:30.706 15:18:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:30.706 15:18:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:30.706 ************************************ 00:22:30.706 START TEST nvmf_shutdown 00:22:30.706 ************************************ 00:22:30.706 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:22:30.706 * Looking for test storage... 
00:22:30.706 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:30.706 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:30.706 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:22:30.706 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:30.706 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:30.706 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:30.706 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:30.706 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:30.706 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:30.706 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:30.706 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:30.706 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:30.706 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:30.706 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:30.706 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:30.706 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:30.706 15:18:22 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:30.706 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:30.706 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:30.706 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:30.706 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:30.706 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:30.706 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:30.706 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:30.706 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:30.706 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:30.706 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:22:30.706 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:30.706 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:22:30.706 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:30.706 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:30.706 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:30.706 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:30.706 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:30.706 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:30.706 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:30.706 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:30.706 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:30.706 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:30.706 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:22:30.706 15:18:22 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:22:30.706 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:30.706 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:30.706 ************************************ 00:22:30.706 START TEST nvmf_shutdown_tc1 00:22:30.706 ************************************ 00:22:30.706 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc1 00:22:30.706 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:22:30.706 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:22:30.706 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:30.706 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:30.706 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:30.706 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:30.706 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:30.706 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:30.706 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:30.706 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:30.706 15:18:22 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:30.706 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:30.706 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:22:30.706 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:38.854 15:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:38.854 15:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:22:38.854 15:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:38.854 15:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:38.854 15:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:38.854 15:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:38.854 15:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:38.854 15:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:22:38.854 15:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:38.854 15:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:22:38.854 15:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:22:38.854 15:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 
00:22:38.854 15:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:22:38.854 15:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:22:38.854 15:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:22:38.854 15:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:38.854 15:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:38.854 15:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:38.854 15:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:38.854 15:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:38.854 15:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:38.854 15:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:38.854 15:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:38.854 15:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:38.854 15:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:38.854 15:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:38.854 15:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:38.854 15:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:38.854 15:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:38.854 15:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:38.854 15:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:38.854 15:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:38.854 15:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:38.854 15:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:38.854 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:38.854 15:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:38.854 15:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:38.854 15:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:38.854 15:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:38.854 15:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:38.854 15:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:38.854 15:18:29 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:38.854 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:38.854 15:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:38.854 15:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:38.854 15:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:38.854 15:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:38.854 15:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:38.854 15:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:38.854 15:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:38.854 15:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:38.854 15:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:38.854 15:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:38.854 15:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:38.854 15:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:38.854 15:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:38.854 15:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 
-- # (( 1 == 0 )) 00:22:38.854 15:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:38.854 15:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:38.854 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:38.854 15:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:38.855 15:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:38.855 15:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:38.855 15:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:38.855 15:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:38.855 15:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:38.855 15:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:38.855 15:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:38.855 15:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:38.855 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:38.855 15:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:38.855 15:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:38.855 15:18:29 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:22:38.855 15:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:38.855 15:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:38.855 15:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:38.855 15:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:38.855 15:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:38.855 15:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:38.855 15:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:38.855 15:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:38.855 15:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:38.855 15:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:38.855 15:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:38.855 15:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:38.855 15:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:38.855 15:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 
00:22:38.855 15:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:38.855 15:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:38.855 15:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:38.855 15:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:38.855 15:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:38.855 15:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:38.855 15:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:38.855 15:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:38.855 15:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:38.855 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:38.855 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.591 ms 00:22:38.855 00:22:38.855 --- 10.0.0.2 ping statistics --- 00:22:38.855 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:38.855 rtt min/avg/max/mdev = 0.591/0.591/0.591/0.000 ms 00:22:38.855 15:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:38.855 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:38.855 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.417 ms 00:22:38.855 00:22:38.855 --- 10.0.0.1 ping statistics --- 00:22:38.855 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:38.855 rtt min/avg/max/mdev = 0.417/0.417/0.417/0.000 ms 00:22:38.855 15:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:38.855 15:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:22:38.855 15:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:38.855 15:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:38.855 15:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:38.855 15:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:38.855 15:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:38.855 15:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:38.855 15:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:38.855 15:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:22:38.855 15:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:38.855 15:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:38.855 15:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:38.855 
15:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=319635 00:22:38.855 15:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 319635 00:22:38.855 15:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 319635 ']' 00:22:38.855 15:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:38.855 15:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:38.855 15:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:38.855 15:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:38.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:38.855 15:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:38.855 15:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:38.855 [2024-07-25 15:18:29.960865] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:22:38.855 [2024-07-25 15:18:29.960956] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:38.855 EAL: No free 2048 kB hugepages reported on node 1 00:22:38.855 [2024-07-25 15:18:30.050116] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:38.855 [2024-07-25 15:18:30.151970] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:38.855 [2024-07-25 15:18:30.152035] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:38.855 [2024-07-25 15:18:30.152044] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:38.855 [2024-07-25 15:18:30.152051] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:38.856 [2024-07-25 15:18:30.152057] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:38.856 [2024-07-25 15:18:30.152243] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:38.856 [2024-07-25 15:18:30.152415] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:38.856 [2024-07-25 15:18:30.152579] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:38.856 [2024-07-25 15:18:30.152580] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:22:38.856 15:18:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:38.856 15:18:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:22:38.856 15:18:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:38.856 15:18:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:38.856 15:18:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:38.856 15:18:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:38.856 15:18:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:38.856 15:18:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.856 15:18:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:38.856 [2024-07-25 15:18:30.790071] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:38.856 15:18:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.856 15:18:30 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:22:38.856 15:18:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:22:38.856 15:18:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:38.856 15:18:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:38.856 15:18:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:38.856 15:18:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:38.856 15:18:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:38.856 15:18:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:38.856 15:18:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:38.856 15:18:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:38.856 15:18:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:38.856 15:18:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:38.856 15:18:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:38.856 15:18:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:38.856 15:18:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 
00:22:38.856 15:18:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:38.856 15:18:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:38.856 15:18:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:38.856 15:18:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:38.856 15:18:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:38.856 15:18:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:38.856 15:18:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:38.856 15:18:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:38.856 15:18:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:38.856 15:18:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:38.856 15:18:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:22:38.856 15:18:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.856 15:18:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:38.856 Malloc1 00:22:38.856 [2024-07-25 15:18:30.893522] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:38.856 Malloc2 00:22:38.856 Malloc3 00:22:38.856 Malloc4 00:22:38.856 Malloc5 00:22:39.117 Malloc6 00:22:39.117 Malloc7 00:22:39.117 Malloc8 00:22:39.117 Malloc9 
00:22:39.117 Malloc10 00:22:39.117 15:18:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.117 15:18:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:22:39.117 15:18:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:39.117 15:18:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:39.117 15:18:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=320025 00:22:39.117 15:18:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 320025 /var/tmp/bdevperf.sock 00:22:39.117 15:18:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 320025 ']' 00:22:39.117 15:18:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:39.117 15:18:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:39.118 15:18:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:22:39.118 15:18:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:39.118 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:22:39.118 15:18:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:39.118 15:18:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:39.118 15:18:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:39.118 15:18:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:22:39.118 15:18:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:22:39.118 15:18:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:39.118 15:18:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:39.118 { 00:22:39.118 "params": { 00:22:39.118 "name": "Nvme$subsystem", 00:22:39.118 "trtype": "$TEST_TRANSPORT", 00:22:39.118 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:39.118 "adrfam": "ipv4", 00:22:39.118 "trsvcid": "$NVMF_PORT", 00:22:39.118 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:39.118 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:39.118 "hdgst": ${hdgst:-false}, 00:22:39.118 "ddgst": ${ddgst:-false} 00:22:39.118 }, 00:22:39.118 "method": "bdev_nvme_attach_controller" 00:22:39.118 } 00:22:39.118 EOF 00:22:39.118 )") 00:22:39.118 15:18:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:39.118 15:18:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:39.118 15:18:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:39.118 { 00:22:39.118 "params": { 00:22:39.118 "name": "Nvme$subsystem", 00:22:39.118 "trtype": "$TEST_TRANSPORT", 00:22:39.118 
"traddr": "$NVMF_FIRST_TARGET_IP", 00:22:39.118 "adrfam": "ipv4", 00:22:39.118 "trsvcid": "$NVMF_PORT", 00:22:39.118 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:39.118 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:39.118 "hdgst": ${hdgst:-false}, 00:22:39.118 "ddgst": ${ddgst:-false} 00:22:39.118 }, 00:22:39.118 "method": "bdev_nvme_attach_controller" 00:22:39.118 } 00:22:39.118 EOF 00:22:39.118 )") 00:22:39.118 15:18:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:39.379 15:18:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:39.379 15:18:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:39.379 { 00:22:39.379 "params": { 00:22:39.379 "name": "Nvme$subsystem", 00:22:39.379 "trtype": "$TEST_TRANSPORT", 00:22:39.379 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:39.379 "adrfam": "ipv4", 00:22:39.379 "trsvcid": "$NVMF_PORT", 00:22:39.379 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:39.379 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:39.379 "hdgst": ${hdgst:-false}, 00:22:39.379 "ddgst": ${ddgst:-false} 00:22:39.379 }, 00:22:39.379 "method": "bdev_nvme_attach_controller" 00:22:39.379 } 00:22:39.379 EOF 00:22:39.379 )") 00:22:39.379 15:18:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:39.379 15:18:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:39.379 15:18:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:39.379 { 00:22:39.379 "params": { 00:22:39.379 "name": "Nvme$subsystem", 00:22:39.379 "trtype": "$TEST_TRANSPORT", 00:22:39.379 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:39.379 "adrfam": "ipv4", 00:22:39.379 "trsvcid": "$NVMF_PORT", 00:22:39.379 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:22:39.379 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:39.379 "hdgst": ${hdgst:-false}, 00:22:39.379 "ddgst": ${ddgst:-false} 00:22:39.379 }, 00:22:39.379 "method": "bdev_nvme_attach_controller" 00:22:39.379 } 00:22:39.379 EOF 00:22:39.379 )") 00:22:39.379 15:18:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:39.379 15:18:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:39.379 15:18:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:39.379 { 00:22:39.379 "params": { 00:22:39.379 "name": "Nvme$subsystem", 00:22:39.379 "trtype": "$TEST_TRANSPORT", 00:22:39.379 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:39.379 "adrfam": "ipv4", 00:22:39.379 "trsvcid": "$NVMF_PORT", 00:22:39.379 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:39.379 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:39.379 "hdgst": ${hdgst:-false}, 00:22:39.379 "ddgst": ${ddgst:-false} 00:22:39.379 }, 00:22:39.379 "method": "bdev_nvme_attach_controller" 00:22:39.379 } 00:22:39.379 EOF 00:22:39.379 )") 00:22:39.379 15:18:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:39.379 15:18:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:39.379 15:18:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:39.379 { 00:22:39.379 "params": { 00:22:39.379 "name": "Nvme$subsystem", 00:22:39.379 "trtype": "$TEST_TRANSPORT", 00:22:39.379 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:39.379 "adrfam": "ipv4", 00:22:39.379 "trsvcid": "$NVMF_PORT", 00:22:39.379 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:39.379 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:39.379 "hdgst": 
${hdgst:-false}, 00:22:39.379 "ddgst": ${ddgst:-false} 00:22:39.379 }, 00:22:39.379 "method": "bdev_nvme_attach_controller" 00:22:39.379 } 00:22:39.379 EOF 00:22:39.379 )") 00:22:39.379 [2024-07-25 15:18:31.335549] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:22:39.379 [2024-07-25 15:18:31.335601] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:22:39.379 15:18:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:39.379 15:18:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:39.379 15:18:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:39.379 { 00:22:39.379 "params": { 00:22:39.379 "name": "Nvme$subsystem", 00:22:39.379 "trtype": "$TEST_TRANSPORT", 00:22:39.379 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:39.379 "adrfam": "ipv4", 00:22:39.379 "trsvcid": "$NVMF_PORT", 00:22:39.379 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:39.379 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:39.379 "hdgst": ${hdgst:-false}, 00:22:39.379 "ddgst": ${ddgst:-false} 00:22:39.379 }, 00:22:39.379 "method": "bdev_nvme_attach_controller" 00:22:39.379 } 00:22:39.379 EOF 00:22:39.379 )") 00:22:39.379 15:18:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:39.379 15:18:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:39.379 15:18:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:39.379 { 00:22:39.379 "params": { 00:22:39.379 "name": "Nvme$subsystem", 00:22:39.379 "trtype": 
"$TEST_TRANSPORT", 00:22:39.379 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:39.379 "adrfam": "ipv4", 00:22:39.379 "trsvcid": "$NVMF_PORT", 00:22:39.379 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:39.379 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:39.379 "hdgst": ${hdgst:-false}, 00:22:39.379 "ddgst": ${ddgst:-false} 00:22:39.379 }, 00:22:39.379 "method": "bdev_nvme_attach_controller" 00:22:39.379 } 00:22:39.379 EOF 00:22:39.379 )") 00:22:39.379 15:18:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:39.379 15:18:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:39.379 15:18:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:39.379 { 00:22:39.379 "params": { 00:22:39.379 "name": "Nvme$subsystem", 00:22:39.379 "trtype": "$TEST_TRANSPORT", 00:22:39.379 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:39.379 "adrfam": "ipv4", 00:22:39.379 "trsvcid": "$NVMF_PORT", 00:22:39.379 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:39.379 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:39.379 "hdgst": ${hdgst:-false}, 00:22:39.379 "ddgst": ${ddgst:-false} 00:22:39.379 }, 00:22:39.379 "method": "bdev_nvme_attach_controller" 00:22:39.379 } 00:22:39.379 EOF 00:22:39.379 )") 00:22:39.379 15:18:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:39.379 EAL: No free 2048 kB hugepages reported on node 1 00:22:39.379 15:18:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:39.379 15:18:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:39.379 { 00:22:39.379 "params": { 00:22:39.379 "name": "Nvme$subsystem", 00:22:39.379 "trtype": "$TEST_TRANSPORT", 00:22:39.379 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:22:39.379 "adrfam": "ipv4", 00:22:39.379 "trsvcid": "$NVMF_PORT", 00:22:39.379 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:39.379 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:39.379 "hdgst": ${hdgst:-false}, 00:22:39.379 "ddgst": ${ddgst:-false} 00:22:39.379 }, 00:22:39.379 "method": "bdev_nvme_attach_controller" 00:22:39.379 } 00:22:39.379 EOF 00:22:39.379 )") 00:22:39.379 15:18:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:39.379 15:18:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:22:39.379 15:18:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:22:39.379 15:18:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:39.379 "params": { 00:22:39.379 "name": "Nvme1", 00:22:39.379 "trtype": "tcp", 00:22:39.379 "traddr": "10.0.0.2", 00:22:39.379 "adrfam": "ipv4", 00:22:39.379 "trsvcid": "4420", 00:22:39.379 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:39.379 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:39.379 "hdgst": false, 00:22:39.379 "ddgst": false 00:22:39.379 }, 00:22:39.379 "method": "bdev_nvme_attach_controller" 00:22:39.379 },{ 00:22:39.379 "params": { 00:22:39.379 "name": "Nvme2", 00:22:39.379 "trtype": "tcp", 00:22:39.379 "traddr": "10.0.0.2", 00:22:39.379 "adrfam": "ipv4", 00:22:39.379 "trsvcid": "4420", 00:22:39.379 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:39.379 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:39.379 "hdgst": false, 00:22:39.379 "ddgst": false 00:22:39.379 }, 00:22:39.380 "method": "bdev_nvme_attach_controller" 00:22:39.380 },{ 00:22:39.380 "params": { 00:22:39.380 "name": "Nvme3", 00:22:39.380 "trtype": "tcp", 00:22:39.380 "traddr": "10.0.0.2", 00:22:39.380 "adrfam": "ipv4", 00:22:39.380 "trsvcid": "4420", 00:22:39.380 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:39.380 "hostnqn": 
"nqn.2016-06.io.spdk:host3", 00:22:39.380 "hdgst": false, 00:22:39.380 "ddgst": false 00:22:39.380 }, 00:22:39.380 "method": "bdev_nvme_attach_controller" 00:22:39.380 },{ 00:22:39.380 "params": { 00:22:39.380 "name": "Nvme4", 00:22:39.380 "trtype": "tcp", 00:22:39.380 "traddr": "10.0.0.2", 00:22:39.380 "adrfam": "ipv4", 00:22:39.380 "trsvcid": "4420", 00:22:39.380 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:39.380 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:39.380 "hdgst": false, 00:22:39.380 "ddgst": false 00:22:39.380 }, 00:22:39.380 "method": "bdev_nvme_attach_controller" 00:22:39.380 },{ 00:22:39.380 "params": { 00:22:39.380 "name": "Nvme5", 00:22:39.380 "trtype": "tcp", 00:22:39.380 "traddr": "10.0.0.2", 00:22:39.380 "adrfam": "ipv4", 00:22:39.380 "trsvcid": "4420", 00:22:39.380 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:39.380 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:39.380 "hdgst": false, 00:22:39.380 "ddgst": false 00:22:39.380 }, 00:22:39.380 "method": "bdev_nvme_attach_controller" 00:22:39.380 },{ 00:22:39.380 "params": { 00:22:39.380 "name": "Nvme6", 00:22:39.380 "trtype": "tcp", 00:22:39.380 "traddr": "10.0.0.2", 00:22:39.380 "adrfam": "ipv4", 00:22:39.380 "trsvcid": "4420", 00:22:39.380 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:39.380 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:39.380 "hdgst": false, 00:22:39.380 "ddgst": false 00:22:39.380 }, 00:22:39.380 "method": "bdev_nvme_attach_controller" 00:22:39.380 },{ 00:22:39.380 "params": { 00:22:39.380 "name": "Nvme7", 00:22:39.380 "trtype": "tcp", 00:22:39.380 "traddr": "10.0.0.2", 00:22:39.380 "adrfam": "ipv4", 00:22:39.380 "trsvcid": "4420", 00:22:39.380 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:39.380 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:39.380 "hdgst": false, 00:22:39.380 "ddgst": false 00:22:39.380 }, 00:22:39.380 "method": "bdev_nvme_attach_controller" 00:22:39.380 },{ 00:22:39.380 "params": { 00:22:39.380 "name": "Nvme8", 00:22:39.380 "trtype": "tcp", 00:22:39.380 
"traddr": "10.0.0.2", 00:22:39.380 "adrfam": "ipv4", 00:22:39.380 "trsvcid": "4420", 00:22:39.380 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:39.380 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:39.380 "hdgst": false, 00:22:39.380 "ddgst": false 00:22:39.380 }, 00:22:39.380 "method": "bdev_nvme_attach_controller" 00:22:39.380 },{ 00:22:39.380 "params": { 00:22:39.380 "name": "Nvme9", 00:22:39.380 "trtype": "tcp", 00:22:39.380 "traddr": "10.0.0.2", 00:22:39.380 "adrfam": "ipv4", 00:22:39.380 "trsvcid": "4420", 00:22:39.380 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:39.380 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:39.380 "hdgst": false, 00:22:39.380 "ddgst": false 00:22:39.380 }, 00:22:39.380 "method": "bdev_nvme_attach_controller" 00:22:39.380 },{ 00:22:39.380 "params": { 00:22:39.380 "name": "Nvme10", 00:22:39.380 "trtype": "tcp", 00:22:39.380 "traddr": "10.0.0.2", 00:22:39.380 "adrfam": "ipv4", 00:22:39.380 "trsvcid": "4420", 00:22:39.380 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:39.380 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:39.380 "hdgst": false, 00:22:39.380 "ddgst": false 00:22:39.380 }, 00:22:39.380 "method": "bdev_nvme_attach_controller" 00:22:39.380 }' 00:22:39.380 [2024-07-25 15:18:31.395981] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:39.380 [2024-07-25 15:18:31.460423] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:40.764 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:40.764 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:22:40.764 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:40.764 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:40.764 15:18:32 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:40.765 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:40.765 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 320025 00:22:40.765 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:22:40.765 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 320025 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:22:40.765 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:22:41.708 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 319635 00:22:41.708 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:22:41.708 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:41.708 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:22:41.708 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:22:41.708 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:41.708 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:41.708 { 00:22:41.708 "params": { 00:22:41.708 "name": "Nvme$subsystem", 00:22:41.708 "trtype": 
"$TEST_TRANSPORT", 00:22:41.708 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:41.708 "adrfam": "ipv4", 00:22:41.708 "trsvcid": "$NVMF_PORT", 00:22:41.708 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:41.708 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:41.708 "hdgst": ${hdgst:-false}, 00:22:41.708 "ddgst": ${ddgst:-false} 00:22:41.708 }, 00:22:41.708 "method": "bdev_nvme_attach_controller" 00:22:41.708 } 00:22:41.708 EOF 00:22:41.708 )") 00:22:41.708 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:41.708 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:41.708 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:41.708 { 00:22:41.708 "params": { 00:22:41.708 "name": "Nvme$subsystem", 00:22:41.708 "trtype": "$TEST_TRANSPORT", 00:22:41.708 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:41.708 "adrfam": "ipv4", 00:22:41.708 "trsvcid": "$NVMF_PORT", 00:22:41.708 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:41.708 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:41.708 "hdgst": ${hdgst:-false}, 00:22:41.708 "ddgst": ${ddgst:-false} 00:22:41.708 }, 00:22:41.708 "method": "bdev_nvme_attach_controller" 00:22:41.708 } 00:22:41.708 EOF 00:22:41.708 )") 00:22:41.708 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:41.708 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:41.708 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:41.708 { 00:22:41.708 "params": { 00:22:41.708 "name": "Nvme$subsystem", 00:22:41.708 "trtype": "$TEST_TRANSPORT", 00:22:41.708 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:41.708 "adrfam": "ipv4", 00:22:41.708 "trsvcid": 
"$NVMF_PORT", 00:22:41.708 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:41.708 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:41.708 "hdgst": ${hdgst:-false}, 00:22:41.708 "ddgst": ${ddgst:-false} 00:22:41.708 }, 00:22:41.708 "method": "bdev_nvme_attach_controller" 00:22:41.708 } 00:22:41.708 EOF 00:22:41.708 )") 00:22:41.708 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:41.708 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:41.708 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:41.708 { 00:22:41.708 "params": { 00:22:41.708 "name": "Nvme$subsystem", 00:22:41.708 "trtype": "$TEST_TRANSPORT", 00:22:41.708 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:41.708 "adrfam": "ipv4", 00:22:41.708 "trsvcid": "$NVMF_PORT", 00:22:41.708 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:41.708 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:41.708 "hdgst": ${hdgst:-false}, 00:22:41.708 "ddgst": ${ddgst:-false} 00:22:41.708 }, 00:22:41.708 "method": "bdev_nvme_attach_controller" 00:22:41.708 } 00:22:41.708 EOF 00:22:41.708 )") 00:22:41.708 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:41.708 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:41.708 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:41.708 { 00:22:41.708 "params": { 00:22:41.708 "name": "Nvme$subsystem", 00:22:41.708 "trtype": "$TEST_TRANSPORT", 00:22:41.708 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:41.708 "adrfam": "ipv4", 00:22:41.708 "trsvcid": "$NVMF_PORT", 00:22:41.708 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:41.708 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:22:41.708 "hdgst": ${hdgst:-false}, 00:22:41.708 "ddgst": ${ddgst:-false} 00:22:41.708 }, 00:22:41.708 "method": "bdev_nvme_attach_controller" 00:22:41.708 } 00:22:41.708 EOF 00:22:41.708 )") 00:22:41.708 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:41.708 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:41.708 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:41.708 { 00:22:41.708 "params": { 00:22:41.708 "name": "Nvme$subsystem", 00:22:41.708 "trtype": "$TEST_TRANSPORT", 00:22:41.708 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:41.708 "adrfam": "ipv4", 00:22:41.708 "trsvcid": "$NVMF_PORT", 00:22:41.708 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:41.708 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:41.708 "hdgst": ${hdgst:-false}, 00:22:41.708 "ddgst": ${ddgst:-false} 00:22:41.708 }, 00:22:41.708 "method": "bdev_nvme_attach_controller" 00:22:41.708 } 00:22:41.708 EOF 00:22:41.708 )") 00:22:41.708 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:41.708 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:41.708 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:41.708 { 00:22:41.708 "params": { 00:22:41.708 "name": "Nvme$subsystem", 00:22:41.708 "trtype": "$TEST_TRANSPORT", 00:22:41.708 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:41.708 "adrfam": "ipv4", 00:22:41.708 "trsvcid": "$NVMF_PORT", 00:22:41.708 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:41.708 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:41.708 "hdgst": ${hdgst:-false}, 00:22:41.708 "ddgst": ${ddgst:-false} 00:22:41.708 
}, 00:22:41.708 "method": "bdev_nvme_attach_controller" 00:22:41.708 } 00:22:41.708 EOF 00:22:41.708 )") 00:22:41.708 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:41.708 [2024-07-25 15:18:33.708376] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:22:41.708 [2024-07-25 15:18:33.708432] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid320395 ] 00:22:41.708 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:41.708 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:41.708 { 00:22:41.708 "params": { 00:22:41.708 "name": "Nvme$subsystem", 00:22:41.708 "trtype": "$TEST_TRANSPORT", 00:22:41.708 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:41.708 "adrfam": "ipv4", 00:22:41.708 "trsvcid": "$NVMF_PORT", 00:22:41.708 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:41.708 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:41.708 "hdgst": ${hdgst:-false}, 00:22:41.708 "ddgst": ${ddgst:-false} 00:22:41.708 }, 00:22:41.708 "method": "bdev_nvme_attach_controller" 00:22:41.708 } 00:22:41.708 EOF 00:22:41.708 )") 00:22:41.708 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:41.709 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:41.709 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:41.709 { 00:22:41.709 "params": { 00:22:41.709 "name": "Nvme$subsystem", 00:22:41.709 "trtype": "$TEST_TRANSPORT", 00:22:41.709 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:22:41.709 "adrfam": "ipv4", 00:22:41.709 "trsvcid": "$NVMF_PORT", 00:22:41.709 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:41.709 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:41.709 "hdgst": ${hdgst:-false}, 00:22:41.709 "ddgst": ${ddgst:-false} 00:22:41.709 }, 00:22:41.709 "method": "bdev_nvme_attach_controller" 00:22:41.709 } 00:22:41.709 EOF 00:22:41.709 )") 00:22:41.709 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:41.709 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:41.709 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:41.709 { 00:22:41.709 "params": { 00:22:41.709 "name": "Nvme$subsystem", 00:22:41.709 "trtype": "$TEST_TRANSPORT", 00:22:41.709 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:41.709 "adrfam": "ipv4", 00:22:41.709 "trsvcid": "$NVMF_PORT", 00:22:41.709 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:41.709 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:41.709 "hdgst": ${hdgst:-false}, 00:22:41.709 "ddgst": ${ddgst:-false} 00:22:41.709 }, 00:22:41.709 "method": "bdev_nvme_attach_controller" 00:22:41.709 } 00:22:41.709 EOF 00:22:41.709 )") 00:22:41.709 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:41.709 EAL: No free 2048 kB hugepages reported on node 1 00:22:41.709 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
00:22:41.709 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:22:41.709 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:41.709 "params": { 00:22:41.709 "name": "Nvme1", 00:22:41.709 "trtype": "tcp", 00:22:41.709 "traddr": "10.0.0.2", 00:22:41.709 "adrfam": "ipv4", 00:22:41.709 "trsvcid": "4420", 00:22:41.709 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:41.709 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:41.709 "hdgst": false, 00:22:41.709 "ddgst": false 00:22:41.709 }, 00:22:41.709 "method": "bdev_nvme_attach_controller" 00:22:41.709 },{ 00:22:41.709 "params": { 00:22:41.709 "name": "Nvme2", 00:22:41.709 "trtype": "tcp", 00:22:41.709 "traddr": "10.0.0.2", 00:22:41.709 "adrfam": "ipv4", 00:22:41.709 "trsvcid": "4420", 00:22:41.709 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:41.709 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:41.709 "hdgst": false, 00:22:41.709 "ddgst": false 00:22:41.709 }, 00:22:41.709 "method": "bdev_nvme_attach_controller" 00:22:41.709 },{ 00:22:41.709 "params": { 00:22:41.709 "name": "Nvme3", 00:22:41.709 "trtype": "tcp", 00:22:41.709 "traddr": "10.0.0.2", 00:22:41.709 "adrfam": "ipv4", 00:22:41.709 "trsvcid": "4420", 00:22:41.709 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:41.709 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:41.709 "hdgst": false, 00:22:41.709 "ddgst": false 00:22:41.709 }, 00:22:41.709 "method": "bdev_nvme_attach_controller" 00:22:41.709 },{ 00:22:41.709 "params": { 00:22:41.709 "name": "Nvme4", 00:22:41.709 "trtype": "tcp", 00:22:41.709 "traddr": "10.0.0.2", 00:22:41.709 "adrfam": "ipv4", 00:22:41.709 "trsvcid": "4420", 00:22:41.709 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:41.709 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:41.709 "hdgst": false, 00:22:41.709 "ddgst": false 00:22:41.709 }, 00:22:41.709 "method": "bdev_nvme_attach_controller" 00:22:41.709 },{ 00:22:41.709 "params": { 
00:22:41.709 "name": "Nvme5", 00:22:41.709 "trtype": "tcp", 00:22:41.709 "traddr": "10.0.0.2", 00:22:41.709 "adrfam": "ipv4", 00:22:41.709 "trsvcid": "4420", 00:22:41.709 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:41.709 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:41.709 "hdgst": false, 00:22:41.709 "ddgst": false 00:22:41.709 }, 00:22:41.709 "method": "bdev_nvme_attach_controller" 00:22:41.709 },{ 00:22:41.709 "params": { 00:22:41.709 "name": "Nvme6", 00:22:41.709 "trtype": "tcp", 00:22:41.709 "traddr": "10.0.0.2", 00:22:41.709 "adrfam": "ipv4", 00:22:41.709 "trsvcid": "4420", 00:22:41.709 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:41.709 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:41.709 "hdgst": false, 00:22:41.709 "ddgst": false 00:22:41.709 }, 00:22:41.709 "method": "bdev_nvme_attach_controller" 00:22:41.709 },{ 00:22:41.709 "params": { 00:22:41.709 "name": "Nvme7", 00:22:41.709 "trtype": "tcp", 00:22:41.709 "traddr": "10.0.0.2", 00:22:41.709 "adrfam": "ipv4", 00:22:41.709 "trsvcid": "4420", 00:22:41.709 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:41.709 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:41.709 "hdgst": false, 00:22:41.709 "ddgst": false 00:22:41.709 }, 00:22:41.709 "method": "bdev_nvme_attach_controller" 00:22:41.709 },{ 00:22:41.709 "params": { 00:22:41.709 "name": "Nvme8", 00:22:41.709 "trtype": "tcp", 00:22:41.709 "traddr": "10.0.0.2", 00:22:41.709 "adrfam": "ipv4", 00:22:41.709 "trsvcid": "4420", 00:22:41.709 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:41.709 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:41.709 "hdgst": false, 00:22:41.709 "ddgst": false 00:22:41.709 }, 00:22:41.709 "method": "bdev_nvme_attach_controller" 00:22:41.709 },{ 00:22:41.709 "params": { 00:22:41.709 "name": "Nvme9", 00:22:41.709 "trtype": "tcp", 00:22:41.709 "traddr": "10.0.0.2", 00:22:41.709 "adrfam": "ipv4", 00:22:41.709 "trsvcid": "4420", 00:22:41.709 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:41.709 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:22:41.709 "hdgst": false, 00:22:41.709 "ddgst": false 00:22:41.709 }, 00:22:41.709 "method": "bdev_nvme_attach_controller" 00:22:41.709 },{ 00:22:41.709 "params": { 00:22:41.709 "name": "Nvme10", 00:22:41.709 "trtype": "tcp", 00:22:41.709 "traddr": "10.0.0.2", 00:22:41.709 "adrfam": "ipv4", 00:22:41.709 "trsvcid": "4420", 00:22:41.709 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:41.709 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:41.709 "hdgst": false, 00:22:41.709 "ddgst": false 00:22:41.709 }, 00:22:41.709 "method": "bdev_nvme_attach_controller" 00:22:41.709 }' 00:22:41.709 [2024-07-25 15:18:33.769240] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:41.709 [2024-07-25 15:18:33.833980] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:43.094 Running I/O for 1 seconds... 00:22:44.479 00:22:44.479 Latency(us) 00:22:44.479 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:44.479 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:44.479 Verification LBA range: start 0x0 length 0x400 00:22:44.479 Nvme1n1 : 1.02 188.20 11.76 0.00 0.00 336347.59 43035.31 283115.52 00:22:44.479 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:44.479 Verification LBA range: start 0x0 length 0x400 00:22:44.479 Nvme2n1 : 1.14 224.78 14.05 0.00 0.00 277109.97 23592.96 263891.63 00:22:44.479 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:44.479 Verification LBA range: start 0x0 length 0x400 00:22:44.479 Nvme3n1 : 1.11 231.57 14.47 0.00 0.00 263954.56 23592.96 230686.72 00:22:44.479 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:44.479 Verification LBA range: start 0x0 length 0x400 00:22:44.479 Nvme4n1 : 1.10 174.59 10.91 0.00 0.00 343880.82 24248.32 339039.57 00:22:44.479 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:44.479 Verification LBA range: start 0x0 length 
0x400 00:22:44.479 Nvme5n1 : 1.19 323.33 20.21 0.00 0.00 183247.36 24357.55 205346.13 00:22:44.479 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:44.479 Verification LBA range: start 0x0 length 0x400 00:22:44.479 Nvme6n1 : 1.11 288.58 18.04 0.00 0.00 200569.34 22719.15 201850.88 00:22:44.479 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:44.479 Verification LBA range: start 0x0 length 0x400 00:22:44.479 Nvme7n1 : 1.22 209.76 13.11 0.00 0.00 263935.79 22719.15 316320.43 00:22:44.479 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:44.479 Verification LBA range: start 0x0 length 0x400 00:22:44.479 Nvme8n1 : 1.14 168.23 10.51 0.00 0.00 332623.64 31238.83 365253.97 00:22:44.479 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:44.479 Verification LBA range: start 0x0 length 0x400 00:22:44.479 Nvme9n1 : 1.18 216.66 13.54 0.00 0.00 254717.23 25012.91 270882.13 00:22:44.479 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:44.479 Verification LBA range: start 0x0 length 0x400 00:22:44.479 Nvme10n1 : 1.21 211.44 13.21 0.00 0.00 257443.95 13107.20 370496.85 00:22:44.479 =================================================================================================================== 00:22:44.479 Total : 2237.15 139.82 0.00 0.00 260238.33 13107.20 370496.85 00:22:44.479 15:18:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:22:44.479 15:18:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:22:44.479 15:18:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:44.480 15:18:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 
-- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:44.480 15:18:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:22:44.480 15:18:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:44.480 15:18:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:22:44.480 15:18:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:44.480 15:18:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:22:44.480 15:18:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:44.480 15:18:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:44.480 rmmod nvme_tcp 00:22:44.480 rmmod nvme_fabrics 00:22:44.741 rmmod nvme_keyring 00:22:44.741 15:18:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:44.741 15:18:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:22:44.741 15:18:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:22:44.741 15:18:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 319635 ']' 00:22:44.741 15:18:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 319635 00:22:44.741 15:18:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@950 -- # '[' -z 319635 ']' 00:22:44.741 15:18:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # kill -0 319635 00:22:44.741 15:18:36 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # uname 00:22:44.741 15:18:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:44.741 15:18:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 319635 00:22:44.741 15:18:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:44.741 15:18:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:44.741 15:18:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 319635' 00:22:44.741 killing process with pid 319635 00:22:44.741 15:18:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@969 -- # kill 319635 00:22:44.741 15:18:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@974 -- # wait 319635 00:22:45.002 15:18:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:45.002 15:18:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:45.002 15:18:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:45.002 15:18:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:45.002 15:18:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:45.002 15:18:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:45.002 15:18:36 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:45.002 15:18:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:46.917 15:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:46.917 00:22:46.917 real 0m16.397s 00:22:46.917 user 0m33.183s 00:22:46.917 sys 0m6.655s 00:22:46.917 15:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:46.917 15:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:46.917 ************************************ 00:22:46.917 END TEST nvmf_shutdown_tc1 00:22:46.917 ************************************ 00:22:46.917 15:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:22:46.917 15:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:22:46.917 15:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:46.917 15:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:47.233 ************************************ 00:22:47.233 START TEST nvmf_shutdown_tc2 00:22:47.233 ************************************ 00:22:47.233 15:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc2 00:22:47.233 15:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:22:47.233 15:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:22:47.233 15:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:47.233 15:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:47.233 15:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:47.233 15:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:47.233 15:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:47.233 15:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:47.233 15:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:47.233 15:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:47.233 15:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:47.233 15:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:47.233 15:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:22:47.233 15:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:47.233 15:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:47.233 15:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:22:47.233 15:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:47.233 15:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@292 -- # pci_net_devs=() 00:22:47.233 15:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:47.233 15:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:47.233 15:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:47.233 15:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:22:47.233 15:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:47.233 15:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:22:47.233 15:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:22:47.233 15:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:22:47.233 15:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:22:47.233 15:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:22:47.233 15:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:22:47.233 15:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:47.233 15:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:47.233 15:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:47.233 15:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:47.233 
15:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:47.233 15:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:47.233 15:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:47.233 15:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:47.233 15:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:47.233 15:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:47.233 15:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:47.233 15:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:47.233 15:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:47.233 15:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:47.233 15:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:47.233 15:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:47.233 15:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:47.233 15:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:47.233 15:18:39 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:47.233 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:47.233 15:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:47.233 15:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:47.233 15:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:47.233 15:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:47.233 15:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:47.233 15:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:47.233 15:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:47.233 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:47.233 15:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:47.233 15:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:47.233 15:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:47.233 15:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:47.233 15:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:47.233 15:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:47.233 15:18:39 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:47.233 15:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:47.233 15:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:47.233 15:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:47.234 15:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:47.234 15:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:47.234 15:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:47.234 15:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:47.234 15:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:47.234 15:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:47.234 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:47.234 15:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:47.234 15:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:47.234 15:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:47.234 15:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:47.234 15:18:39 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:47.234 15:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:47.234 15:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:47.234 15:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:47.234 15:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:47.234 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:47.234 15:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:47.234 15:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:47.234 15:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:22:47.234 15:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:47.234 15:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:47.234 15:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:47.234 15:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:47.234 15:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:47.234 15:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:47.234 15:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:47.234 15:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:47.234 15:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:47.234 15:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:47.234 15:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:47.234 15:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:47.234 15:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:47.234 15:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:47.234 15:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:47.234 15:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:47.234 15:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:47.234 15:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:47.234 15:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:47.234 15:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:47.495 15:18:39 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:47.495 15:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:47.495 15:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:47.495 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:47.495 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.652 ms 00:22:47.495 00:22:47.495 --- 10.0.0.2 ping statistics --- 00:22:47.495 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:47.495 rtt min/avg/max/mdev = 0.652/0.652/0.652/0.000 ms 00:22:47.495 15:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:47.495 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:47.495 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.407 ms 00:22:47.495 00:22:47.496 --- 10.0.0.1 ping statistics --- 00:22:47.496 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:47.496 rtt min/avg/max/mdev = 0.407/0.407/0.407/0.000 ms 00:22:47.496 15:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:47.496 15:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:22:47.496 15:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:47.496 15:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:47.496 15:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:47.496 15:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:47.496 15:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:47.496 15:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:47.496 15:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:47.496 15:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:22:47.496 15:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:47.496 15:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:47.496 15:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:47.496 15:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=321808 00:22:47.496 15:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 321808 00:22:47.496 15:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:47.496 15:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 321808 ']' 00:22:47.496 15:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:47.496 15:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:47.496 15:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:47.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:47.496 15:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:47.496 15:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:47.496 [2024-07-25 15:18:39.598969] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:22:47.496 [2024-07-25 15:18:39.599034] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:47.496 EAL: No free 2048 kB hugepages reported on node 1 00:22:47.757 [2024-07-25 15:18:39.686692] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:47.757 [2024-07-25 15:18:39.755364] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:47.757 [2024-07-25 15:18:39.755405] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:47.757 [2024-07-25 15:18:39.755411] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:47.757 [2024-07-25 15:18:39.755416] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:47.757 [2024-07-25 15:18:39.755420] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:47.757 [2024-07-25 15:18:39.755531] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:47.757 [2024-07-25 15:18:39.755693] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:47.757 [2024-07-25 15:18:39.755849] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:47.757 [2024-07-25 15:18:39.755851] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:22:48.328 15:18:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:48.329 15:18:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:22:48.329 15:18:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:48.329 15:18:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:48.329 15:18:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:48.329 15:18:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:48.329 15:18:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:48.329 15:18:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:48.329 15:18:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:48.329 [2024-07-25 15:18:40.427752] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:48.329 15:18:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:48.329 15:18:40 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:22:48.329 15:18:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:22:48.329 15:18:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:48.329 15:18:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:48.329 15:18:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:48.329 15:18:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:48.329 15:18:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:48.329 15:18:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:48.329 15:18:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:48.329 15:18:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:48.329 15:18:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:48.329 15:18:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:48.329 15:18:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:48.329 15:18:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:48.329 15:18:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 
00:22:48.329 15:18:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:48.329 15:18:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:48.329 15:18:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:48.329 15:18:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:48.329 15:18:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:48.329 15:18:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:48.329 15:18:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:48.329 15:18:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:48.329 15:18:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:48.329 15:18:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:48.329 15:18:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:22:48.329 15:18:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:48.329 15:18:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:48.329 Malloc1 00:22:48.589 [2024-07-25 15:18:40.526307] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:48.589 Malloc2 00:22:48.589 Malloc3 00:22:48.589 Malloc4 00:22:48.589 Malloc5 00:22:48.589 Malloc6 00:22:48.589 Malloc7 00:22:48.850 Malloc8 00:22:48.850 Malloc9 
00:22:48.850 Malloc10 00:22:48.850 15:18:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:48.850 15:18:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:22:48.850 15:18:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:48.850 15:18:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:48.850 15:18:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=322015 00:22:48.850 15:18:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 322015 /var/tmp/bdevperf.sock 00:22:48.850 15:18:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 322015 ']' 00:22:48.850 15:18:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:48.850 15:18:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:48.850 15:18:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:48.850 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:22:48.850 15:18:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:22:48.850 15:18:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:48.850 15:18:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:48.851 15:18:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:48.851 15:18:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:22:48.851 15:18:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:22:48.851 15:18:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:48.851 15:18:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:48.851 { 00:22:48.851 "params": { 00:22:48.851 "name": "Nvme$subsystem", 00:22:48.851 "trtype": "$TEST_TRANSPORT", 00:22:48.851 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:48.851 "adrfam": "ipv4", 00:22:48.851 "trsvcid": "$NVMF_PORT", 00:22:48.851 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:48.851 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:48.851 "hdgst": ${hdgst:-false}, 00:22:48.851 "ddgst": ${ddgst:-false} 00:22:48.851 }, 00:22:48.851 "method": "bdev_nvme_attach_controller" 00:22:48.851 } 00:22:48.851 EOF 00:22:48.851 )") 00:22:48.851 15:18:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:48.851 15:18:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 
00:22:48.851 15:18:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:48.851 { 00:22:48.851 "params": { 00:22:48.851 "name": "Nvme$subsystem", 00:22:48.851 "trtype": "$TEST_TRANSPORT", 00:22:48.851 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:48.851 "adrfam": "ipv4", 00:22:48.851 "trsvcid": "$NVMF_PORT", 00:22:48.851 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:48.851 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:48.851 "hdgst": ${hdgst:-false}, 00:22:48.851 "ddgst": ${ddgst:-false} 00:22:48.851 }, 00:22:48.851 "method": "bdev_nvme_attach_controller" 00:22:48.851 } 00:22:48.851 EOF 00:22:48.851 )") 00:22:48.851 15:18:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:48.851 15:18:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:48.851 15:18:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:48.851 { 00:22:48.851 "params": { 00:22:48.851 "name": "Nvme$subsystem", 00:22:48.851 "trtype": "$TEST_TRANSPORT", 00:22:48.851 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:48.851 "adrfam": "ipv4", 00:22:48.851 "trsvcid": "$NVMF_PORT", 00:22:48.851 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:48.851 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:48.851 "hdgst": ${hdgst:-false}, 00:22:48.851 "ddgst": ${ddgst:-false} 00:22:48.851 }, 00:22:48.851 "method": "bdev_nvme_attach_controller" 00:22:48.851 } 00:22:48.851 EOF 00:22:48.851 )") 00:22:48.851 15:18:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:48.851 15:18:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:48.851 15:18:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat 
<<-EOF 00:22:48.851 { 00:22:48.851 "params": { 00:22:48.851 "name": "Nvme$subsystem", 00:22:48.851 "trtype": "$TEST_TRANSPORT", 00:22:48.851 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:48.851 "adrfam": "ipv4", 00:22:48.851 "trsvcid": "$NVMF_PORT", 00:22:48.851 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:48.851 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:48.851 "hdgst": ${hdgst:-false}, 00:22:48.851 "ddgst": ${ddgst:-false} 00:22:48.851 }, 00:22:48.851 "method": "bdev_nvme_attach_controller" 00:22:48.851 } 00:22:48.851 EOF 00:22:48.851 )") 00:22:48.851 15:18:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:48.851 15:18:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:48.851 15:18:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:48.851 { 00:22:48.851 "params": { 00:22:48.851 "name": "Nvme$subsystem", 00:22:48.851 "trtype": "$TEST_TRANSPORT", 00:22:48.851 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:48.851 "adrfam": "ipv4", 00:22:48.851 "trsvcid": "$NVMF_PORT", 00:22:48.851 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:48.851 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:48.851 "hdgst": ${hdgst:-false}, 00:22:48.851 "ddgst": ${ddgst:-false} 00:22:48.851 }, 00:22:48.851 "method": "bdev_nvme_attach_controller" 00:22:48.851 } 00:22:48.851 EOF 00:22:48.851 )") 00:22:48.851 15:18:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:48.851 15:18:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:48.851 15:18:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:48.851 { 00:22:48.851 "params": { 00:22:48.851 "name": "Nvme$subsystem", 00:22:48.851 "trtype": "$TEST_TRANSPORT", 
00:22:48.851 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:48.851 "adrfam": "ipv4", 00:22:48.851 "trsvcid": "$NVMF_PORT", 00:22:48.851 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:48.851 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:48.851 "hdgst": ${hdgst:-false}, 00:22:48.851 "ddgst": ${ddgst:-false} 00:22:48.851 }, 00:22:48.851 "method": "bdev_nvme_attach_controller" 00:22:48.851 } 00:22:48.851 EOF 00:22:48.851 )") 00:22:48.851 15:18:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:48.851 [2024-07-25 15:18:40.969327] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:22:48.851 [2024-07-25 15:18:40.969382] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid322015 ] 00:22:48.851 15:18:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:48.851 15:18:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:48.851 { 00:22:48.851 "params": { 00:22:48.851 "name": "Nvme$subsystem", 00:22:48.851 "trtype": "$TEST_TRANSPORT", 00:22:48.851 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:48.851 "adrfam": "ipv4", 00:22:48.851 "trsvcid": "$NVMF_PORT", 00:22:48.851 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:48.851 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:48.851 "hdgst": ${hdgst:-false}, 00:22:48.851 "ddgst": ${ddgst:-false} 00:22:48.851 }, 00:22:48.851 "method": "bdev_nvme_attach_controller" 00:22:48.851 } 00:22:48.851 EOF 00:22:48.851 )") 00:22:48.851 15:18:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:48.851 15:18:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:48.851 15:18:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:48.851 { 00:22:48.851 "params": { 00:22:48.851 "name": "Nvme$subsystem", 00:22:48.851 "trtype": "$TEST_TRANSPORT", 00:22:48.851 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:48.851 "adrfam": "ipv4", 00:22:48.851 "trsvcid": "$NVMF_PORT", 00:22:48.851 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:48.851 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:48.851 "hdgst": ${hdgst:-false}, 00:22:48.851 "ddgst": ${ddgst:-false} 00:22:48.851 }, 00:22:48.851 "method": "bdev_nvme_attach_controller" 00:22:48.851 } 00:22:48.851 EOF 00:22:48.851 )") 00:22:48.851 15:18:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:48.851 15:18:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:48.851 15:18:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:48.851 { 00:22:48.851 "params": { 00:22:48.851 "name": "Nvme$subsystem", 00:22:48.851 "trtype": "$TEST_TRANSPORT", 00:22:48.851 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:48.851 "adrfam": "ipv4", 00:22:48.851 "trsvcid": "$NVMF_PORT", 00:22:48.851 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:48.851 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:48.851 "hdgst": ${hdgst:-false}, 00:22:48.851 "ddgst": ${ddgst:-false} 00:22:48.851 }, 00:22:48.851 "method": "bdev_nvme_attach_controller" 00:22:48.851 } 00:22:48.851 EOF 00:22:48.851 )") 00:22:48.851 15:18:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:48.851 15:18:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:48.851 15:18:40 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:48.851 { 00:22:48.851 "params": { 00:22:48.851 "name": "Nvme$subsystem", 00:22:48.851 "trtype": "$TEST_TRANSPORT", 00:22:48.851 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:48.851 "adrfam": "ipv4", 00:22:48.851 "trsvcid": "$NVMF_PORT", 00:22:48.851 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:48.851 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:48.851 "hdgst": ${hdgst:-false}, 00:22:48.852 "ddgst": ${ddgst:-false} 00:22:48.852 }, 00:22:48.852 "method": "bdev_nvme_attach_controller" 00:22:48.852 } 00:22:48.852 EOF 00:22:48.852 )") 00:22:48.852 EAL: No free 2048 kB hugepages reported on node 1 00:22:48.852 15:18:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:48.852 15:18:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 00:22:48.852 15:18:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:22:48.852 15:18:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:48.852 "params": { 00:22:48.852 "name": "Nvme1", 00:22:48.852 "trtype": "tcp", 00:22:48.852 "traddr": "10.0.0.2", 00:22:48.852 "adrfam": "ipv4", 00:22:48.852 "trsvcid": "4420", 00:22:48.852 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:48.852 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:48.852 "hdgst": false, 00:22:48.852 "ddgst": false 00:22:48.852 }, 00:22:48.852 "method": "bdev_nvme_attach_controller" 00:22:48.852 },{ 00:22:48.852 "params": { 00:22:48.852 "name": "Nvme2", 00:22:48.852 "trtype": "tcp", 00:22:48.852 "traddr": "10.0.0.2", 00:22:48.852 "adrfam": "ipv4", 00:22:48.852 "trsvcid": "4420", 00:22:48.852 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:48.852 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:48.852 "hdgst": false, 00:22:48.852 "ddgst": false 00:22:48.852 }, 00:22:48.852 
"method": "bdev_nvme_attach_controller" 00:22:48.852 },{ 00:22:48.852 "params": { 00:22:48.852 "name": "Nvme3", 00:22:48.852 "trtype": "tcp", 00:22:48.852 "traddr": "10.0.0.2", 00:22:48.852 "adrfam": "ipv4", 00:22:48.852 "trsvcid": "4420", 00:22:48.852 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:48.852 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:48.852 "hdgst": false, 00:22:48.852 "ddgst": false 00:22:48.852 }, 00:22:48.852 "method": "bdev_nvme_attach_controller" 00:22:48.852 },{ 00:22:48.852 "params": { 00:22:48.852 "name": "Nvme4", 00:22:48.852 "trtype": "tcp", 00:22:48.852 "traddr": "10.0.0.2", 00:22:48.852 "adrfam": "ipv4", 00:22:48.852 "trsvcid": "4420", 00:22:48.852 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:48.852 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:48.852 "hdgst": false, 00:22:48.852 "ddgst": false 00:22:48.852 }, 00:22:48.852 "method": "bdev_nvme_attach_controller" 00:22:48.852 },{ 00:22:48.852 "params": { 00:22:48.852 "name": "Nvme5", 00:22:48.852 "trtype": "tcp", 00:22:48.852 "traddr": "10.0.0.2", 00:22:48.852 "adrfam": "ipv4", 00:22:48.852 "trsvcid": "4420", 00:22:48.852 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:48.852 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:48.852 "hdgst": false, 00:22:48.852 "ddgst": false 00:22:48.852 }, 00:22:48.852 "method": "bdev_nvme_attach_controller" 00:22:48.852 },{ 00:22:48.852 "params": { 00:22:48.852 "name": "Nvme6", 00:22:48.852 "trtype": "tcp", 00:22:48.852 "traddr": "10.0.0.2", 00:22:48.852 "adrfam": "ipv4", 00:22:48.852 "trsvcid": "4420", 00:22:48.852 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:48.852 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:48.852 "hdgst": false, 00:22:48.852 "ddgst": false 00:22:48.852 }, 00:22:48.852 "method": "bdev_nvme_attach_controller" 00:22:48.852 },{ 00:22:48.852 "params": { 00:22:48.852 "name": "Nvme7", 00:22:48.852 "trtype": "tcp", 00:22:48.852 "traddr": "10.0.0.2", 00:22:48.852 "adrfam": "ipv4", 00:22:48.852 "trsvcid": "4420", 00:22:48.852 "subnqn": 
"nqn.2016-06.io.spdk:cnode7", 00:22:48.852 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:48.852 "hdgst": false, 00:22:48.852 "ddgst": false 00:22:48.852 }, 00:22:48.852 "method": "bdev_nvme_attach_controller" 00:22:48.852 },{ 00:22:48.852 "params": { 00:22:48.852 "name": "Nvme8", 00:22:48.852 "trtype": "tcp", 00:22:48.852 "traddr": "10.0.0.2", 00:22:48.852 "adrfam": "ipv4", 00:22:48.852 "trsvcid": "4420", 00:22:48.852 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:48.852 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:48.852 "hdgst": false, 00:22:48.852 "ddgst": false 00:22:48.852 }, 00:22:48.852 "method": "bdev_nvme_attach_controller" 00:22:48.852 },{ 00:22:48.852 "params": { 00:22:48.852 "name": "Nvme9", 00:22:48.852 "trtype": "tcp", 00:22:48.852 "traddr": "10.0.0.2", 00:22:48.852 "adrfam": "ipv4", 00:22:48.852 "trsvcid": "4420", 00:22:48.852 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:48.852 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:48.852 "hdgst": false, 00:22:48.852 "ddgst": false 00:22:48.852 }, 00:22:48.852 "method": "bdev_nvme_attach_controller" 00:22:48.852 },{ 00:22:48.852 "params": { 00:22:48.852 "name": "Nvme10", 00:22:48.852 "trtype": "tcp", 00:22:48.852 "traddr": "10.0.0.2", 00:22:48.852 "adrfam": "ipv4", 00:22:48.852 "trsvcid": "4420", 00:22:48.852 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:48.852 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:48.852 "hdgst": false, 00:22:48.852 "ddgst": false 00:22:48.852 }, 00:22:48.852 "method": "bdev_nvme_attach_controller" 00:22:48.852 }' 00:22:48.852 [2024-07-25 15:18:41.029896] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:49.113 [2024-07-25 15:18:41.094635] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:50.500 Running I/O for 10 seconds... 
00:22:50.500 15:18:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:50.500 15:18:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:22:50.500 15:18:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:50.500 15:18:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:50.500 15:18:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:50.762 15:18:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:50.762 15:18:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:22:50.762 15:18:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:22:50.762 15:18:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:22:50.762 15:18:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:22:50.762 15:18:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:22:50.762 15:18:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:22:50.762 15:18:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:22:50.762 15:18:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:50.762 15:18:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:22:50.762 15:18:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:50.762 15:18:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:50.762 15:18:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:50.762 15:18:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:22:50.762 15:18:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:22:50.762 15:18:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:22:51.024 15:18:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:22:51.024 15:18:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:22:51.024 15:18:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:51.024 15:18:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:22:51.024 15:18:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.024 15:18:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:51.024 15:18:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.024 15:18:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:22:51.024 15:18:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:22:51.024 15:18:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:22:51.285 15:18:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:22:51.285 15:18:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:22:51.285 15:18:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:51.285 15:18:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:22:51.285 15:18:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.285 15:18:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:51.285 15:18:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.285 15:18:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131 00:22:51.285 15:18:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:22:51.285 15:18:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:22:51.285 15:18:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:22:51.286 15:18:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:22:51.286 15:18:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 322015 00:22:51.286 15:18:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 322015 ']' 
00:22:51.286 15:18:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 322015 00:22:51.547 15:18:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:22:51.547 15:18:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:51.547 15:18:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 322015 00:22:51.547 15:18:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:51.547 15:18:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:51.547 15:18:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 322015' 00:22:51.547 killing process with pid 322015 00:22:51.547 15:18:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 322015 00:22:51.547 15:18:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 322015 00:22:51.547 Received shutdown signal, test time was about 1.077850 seconds 00:22:51.547 00:22:51.547 Latency(us) 00:22:51.547 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:51.547 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:51.547 Verification LBA range: start 0x0 length 0x400 00:22:51.547 Nvme1n1 : 1.05 183.20 11.45 0.00 0.00 332817.07 28180.48 311077.55 00:22:51.547 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:51.547 Verification LBA range: start 0x0 length 0x400 00:22:51.547 Nvme2n1 : 0.98 264.07 16.50 0.00 0.00 233940.91 3549.87 218453.33 00:22:51.547 Job: Nvme3n1 (Core Mask 0x1, 
workload: verify, depth: 64, IO size: 65536) 00:22:51.547 Verification LBA range: start 0x0 length 0x400 00:22:51.547 Nvme3n1 : 1.02 251.61 15.73 0.00 0.00 242110.08 23811.41 253405.87 00:22:51.547 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:51.547 Verification LBA range: start 0x0 length 0x400 00:22:51.547 Nvme4n1 : 1.08 237.71 14.86 0.00 0.00 242510.08 23265.28 270882.13 00:22:51.547 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:51.547 Verification LBA range: start 0x0 length 0x400 00:22:51.547 Nvme5n1 : 0.99 387.63 24.23 0.00 0.00 150323.56 10158.08 172141.23 00:22:51.547 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:51.547 Verification LBA range: start 0x0 length 0x400 00:22:51.547 Nvme6n1 : 1.00 191.09 11.94 0.00 0.00 299623.82 23374.51 283115.52 00:22:51.547 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:51.547 Verification LBA range: start 0x0 length 0x400 00:22:51.547 Nvme7n1 : 1.07 239.48 14.97 0.00 0.00 225890.99 27962.03 249910.61 00:22:51.547 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:51.547 Verification LBA range: start 0x0 length 0x400 00:22:51.547 Nvme8n1 : 1.03 186.09 11.63 0.00 0.00 295920.07 31675.73 311077.55 00:22:51.547 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:51.547 Verification LBA range: start 0x0 length 0x400 00:22:51.547 Nvme9n1 : 1.02 189.11 11.82 0.00 0.00 284088.89 25668.27 332049.07 00:22:51.547 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:51.547 Verification LBA range: start 0x0 length 0x400 00:22:51.547 Nvme10n1 : 0.98 199.23 12.45 0.00 0.00 260616.64 4642.13 267386.88 00:22:51.547 =================================================================================================================== 00:22:51.547 Total : 2329.22 145.58 0.00 0.00 245927.17 3549.87 332049.07 00:22:51.808 15:18:43 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:22:52.751 15:18:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 321808 00:22:52.752 15:18:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:22:52.752 15:18:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:22:52.752 15:18:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:52.752 15:18:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:52.752 15:18:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:22:52.752 15:18:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:52.752 15:18:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:22:52.752 15:18:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:52.752 15:18:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:22:52.752 15:18:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:52.752 15:18:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:52.752 rmmod nvme_tcp 00:22:52.752 rmmod nvme_fabrics 00:22:52.752 rmmod nvme_keyring 00:22:52.752 15:18:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:52.752 15:18:44 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:22:52.752 15:18:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:22:52.752 15:18:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 321808 ']' 00:22:52.752 15:18:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 321808 00:22:52.752 15:18:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 321808 ']' 00:22:52.752 15:18:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 321808 00:22:52.752 15:18:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:22:52.752 15:18:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:52.752 15:18:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 321808 00:22:53.013 15:18:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:53.013 15:18:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:53.013 15:18:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 321808' 00:22:53.013 killing process with pid 321808 00:22:53.013 15:18:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 321808 00:22:53.013 15:18:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 321808 00:22:53.013 15:18:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:53.013 15:18:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:53.013 15:18:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:53.013 15:18:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:53.013 15:18:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:53.013 15:18:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:53.013 15:18:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:53.013 15:18:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:55.562 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:55.562 00:22:55.562 real 0m8.134s 00:22:55.562 user 0m24.578s 00:22:55.562 sys 0m1.385s 00:22:55.562 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:55.562 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:55.562 ************************************ 00:22:55.562 END TEST nvmf_shutdown_tc2 00:22:55.562 ************************************ 00:22:55.562 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:22:55.562 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:22:55.562 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # 
xtrace_disable 00:22:55.562 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:55.562 ************************************ 00:22:55.562 START TEST nvmf_shutdown_tc3 00:22:55.562 ************************************ 00:22:55.562 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc3 00:22:55.562 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:22:55.562 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:22:55.562 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:55.562 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:55.562 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:55.562 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:55.562 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:55.562 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:55.562 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:55.562 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:55.562 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:55.562 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 
00:22:55.562 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:22:55.563 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:55.563 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:55.563 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:22:55.563 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:55.563 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:55.563 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:55.563 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:55.563 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:55.563 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:22:55.563 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:55.563 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:22:55.563 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:22:55.563 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:22:55.563 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:22:55.563 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 
00:22:55.563 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:22:55.563 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:55.563 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:55.563 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:55.563 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:55.563 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:55.563 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:55.563 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:55.563 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:55.563 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:55.563 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:55.563 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:55.563 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:55.563 15:18:47 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:55.563 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:55.563 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:55.563 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:55.563 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:55.563 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:55.563 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:55.563 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:55.563 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:55.563 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:55.563 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:55.563 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:55.563 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:55.563 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:55.563 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:55.563 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:55.563 15:18:47 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:55.563 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:55.563 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:55.563 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:55.563 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:55.563 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:55.563 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:55.563 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:55.563 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:55.563 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:55.563 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:55.563 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:55.563 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:55.563 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:55.563 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:55.563 15:18:47 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:55.563 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:55.563 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:55.563 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:55.563 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:55.563 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:55.563 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:55.563 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:55.563 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:55.563 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:55.563 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:55.563 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:55.563 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:55.563 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:55.563 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:22:55.563 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- 
# [[ yes == yes ]] 00:22:55.563 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:55.563 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:55.563 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:55.563 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:55.563 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:55.563 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:55.563 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:55.563 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:55.563 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:55.563 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:55.563 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:55.563 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:55.563 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:55.563 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:55.563 15:18:47 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:55.563 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:55.563 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:55.563 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:55.563 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:55.563 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:55.564 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:55.564 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:55.564 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:55.564 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.332 ms 00:22:55.564 00:22:55.564 --- 10.0.0.2 ping statistics --- 00:22:55.564 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:55.564 rtt min/avg/max/mdev = 0.332/0.332/0.332/0.000 ms 00:22:55.564 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:55.564 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:55.564 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.221 ms 00:22:55.564 00:22:55.564 --- 10.0.0.1 ping statistics --- 00:22:55.564 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:55.564 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:22:55.564 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:55.564 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:22:55.564 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:55.564 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:55.564 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:55.564 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:55.564 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:55.564 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:55.564 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:55.564 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:22:55.564 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:55.564 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:55.564 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:55.825 
15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=323455 00:22:55.825 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 323455 00:22:55.825 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 323455 ']' 00:22:55.825 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:55.825 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:55.825 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:55.825 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:55.825 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:55.825 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:55.825 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:55.825 [2024-07-25 15:18:47.824267] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:22:55.825 [2024-07-25 15:18:47.824336] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:55.825 EAL: No free 2048 kB hugepages reported on node 1 00:22:55.825 [2024-07-25 15:18:47.912585] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:55.825 [2024-07-25 15:18:47.973682] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:55.826 [2024-07-25 15:18:47.973717] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:55.826 [2024-07-25 15:18:47.973723] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:55.826 [2024-07-25 15:18:47.973727] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:55.826 [2024-07-25 15:18:47.973731] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:55.826 [2024-07-25 15:18:47.973860] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:55.826 [2024-07-25 15:18:47.974017] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:55.826 [2024-07-25 15:18:47.974171] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:55.826 [2024-07-25 15:18:47.974173] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:22:56.769 15:18:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:56.769 15:18:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:22:56.769 15:18:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:56.769 15:18:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:56.769 15:18:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:56.769 15:18:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:56.769 15:18:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:56.769 15:18:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:56.769 15:18:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:56.769 [2024-07-25 15:18:48.642713] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:56.769 15:18:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:56.769 15:18:48 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:22:56.769 15:18:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:22:56.769 15:18:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:56.769 15:18:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:56.769 15:18:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:56.769 15:18:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:56.769 15:18:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:56.769 15:18:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:56.769 15:18:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:56.769 15:18:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:56.769 15:18:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:56.769 15:18:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:56.769 15:18:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:56.769 15:18:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:56.769 15:18:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 
00:22:56.769 15:18:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:56.769 15:18:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:56.769 15:18:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:56.769 15:18:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:56.769 15:18:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:56.769 15:18:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:56.769 15:18:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:56.769 15:18:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:56.769 15:18:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:56.770 15:18:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:56.770 15:18:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:22:56.770 15:18:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:56.770 15:18:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:56.770 Malloc1 00:22:56.770 [2024-07-25 15:18:48.741358] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:56.770 Malloc2 00:22:56.770 Malloc3 00:22:56.770 Malloc4 00:22:56.770 Malloc5 00:22:56.770 Malloc6 00:22:56.770 Malloc7 00:22:57.031 Malloc8 00:22:57.031 Malloc9 
00:22:57.031 Malloc10 00:22:57.031 15:18:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:57.031 15:18:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:22:57.031 15:18:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:57.031 15:18:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:57.031 15:18:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=323738 00:22:57.031 15:18:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 323738 /var/tmp/bdevperf.sock 00:22:57.031 15:18:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 323738 ']' 00:22:57.031 15:18:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:57.031 15:18:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:57.031 15:18:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:57.032 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
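The create_subsystems phase traced above batches its setup: shutdown.sh loops over num_subsystems=({1..10}), each `cat` appends one subsystem's RPC commands to rpcs.txt, and a single rpc_cmd call then submits the whole file (the Malloc1..Malloc10 lines are those bdevs being created). A minimal sketch of that batching idiom; the bdev_malloc_create body is an illustrative stand-in, not the exact RPC list shutdown.sh emits:

```shell
#!/usr/bin/env bash
# Sketch of the rpcs.txt batching idiom: build one RPC batch file in a
# loop, then submit it with a single rpc_cmd invocation at the end.
num_subsystems=({1..10})
rpcs=$(mktemp)
for i in "${num_subsystems[@]}"; do
  # illustrative RPC body; the real script emits several RPCs per subsystem
  cat <<EOF >> "$rpcs"
bdev_malloc_create -b Malloc$i 64 512
EOF
done
# real script: rpc_cmd < "$rpcs"
wc -l < "$rpcs"   # one batched line per subsystem
```

Batching all ten subsystems into one rpc_cmd round-trip keeps the JSON-RPC overhead to a single submission instead of one call per subsystem.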
00:22:57.032 15:18:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:22:57.032 15:18:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:57.032 15:18:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:57.032 15:18:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:57.032 15:18:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:22:57.032 15:18:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:22:57.032 15:18:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:57.032 15:18:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:57.032 { 00:22:57.032 "params": { 00:22:57.032 "name": "Nvme$subsystem", 00:22:57.032 "trtype": "$TEST_TRANSPORT", 00:22:57.032 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:57.032 "adrfam": "ipv4", 00:22:57.032 "trsvcid": "$NVMF_PORT", 00:22:57.032 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:57.032 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:57.032 "hdgst": ${hdgst:-false}, 00:22:57.032 "ddgst": ${ddgst:-false} 00:22:57.032 }, 00:22:57.032 "method": "bdev_nvme_attach_controller" 00:22:57.032 } 00:22:57.032 EOF 00:22:57.032 )") 00:22:57.032 15:18:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:57.032 15:18:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 
00:22:57.032 15:18:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:57.032 { 00:22:57.032 "params": { 00:22:57.032 "name": "Nvme$subsystem", 00:22:57.032 "trtype": "$TEST_TRANSPORT", 00:22:57.032 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:57.032 "adrfam": "ipv4", 00:22:57.032 "trsvcid": "$NVMF_PORT", 00:22:57.032 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:57.032 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:57.032 "hdgst": ${hdgst:-false}, 00:22:57.032 "ddgst": ${ddgst:-false} 00:22:57.032 }, 00:22:57.032 "method": "bdev_nvme_attach_controller" 00:22:57.032 } 00:22:57.032 EOF 00:22:57.032 )") 00:22:57.032 15:18:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:57.032 15:18:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:57.032 15:18:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:57.032 { 00:22:57.032 "params": { 00:22:57.032 "name": "Nvme$subsystem", 00:22:57.032 "trtype": "$TEST_TRANSPORT", 00:22:57.032 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:57.032 "adrfam": "ipv4", 00:22:57.032 "trsvcid": "$NVMF_PORT", 00:22:57.032 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:57.032 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:57.032 "hdgst": ${hdgst:-false}, 00:22:57.032 "ddgst": ${ddgst:-false} 00:22:57.032 }, 00:22:57.032 "method": "bdev_nvme_attach_controller" 00:22:57.032 } 00:22:57.032 EOF 00:22:57.032 )") 00:22:57.032 15:18:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:57.032 15:18:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:57.032 15:18:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat 
<<-EOF 00:22:57.032 { 00:22:57.032 "params": { 00:22:57.032 "name": "Nvme$subsystem", 00:22:57.032 "trtype": "$TEST_TRANSPORT", 00:22:57.032 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:57.032 "adrfam": "ipv4", 00:22:57.032 "trsvcid": "$NVMF_PORT", 00:22:57.032 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:57.032 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:57.032 "hdgst": ${hdgst:-false}, 00:22:57.032 "ddgst": ${ddgst:-false} 00:22:57.032 }, 00:22:57.032 "method": "bdev_nvme_attach_controller" 00:22:57.032 } 00:22:57.032 EOF 00:22:57.032 )") 00:22:57.032 15:18:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:57.032 15:18:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:57.032 15:18:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:57.032 { 00:22:57.032 "params": { 00:22:57.032 "name": "Nvme$subsystem", 00:22:57.032 "trtype": "$TEST_TRANSPORT", 00:22:57.032 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:57.032 "adrfam": "ipv4", 00:22:57.032 "trsvcid": "$NVMF_PORT", 00:22:57.032 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:57.032 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:57.032 "hdgst": ${hdgst:-false}, 00:22:57.032 "ddgst": ${ddgst:-false} 00:22:57.032 }, 00:22:57.032 "method": "bdev_nvme_attach_controller" 00:22:57.032 } 00:22:57.032 EOF 00:22:57.032 )") 00:22:57.032 15:18:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:57.032 15:18:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:57.032 15:18:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:57.032 { 00:22:57.032 "params": { 00:22:57.032 "name": "Nvme$subsystem", 00:22:57.032 "trtype": "$TEST_TRANSPORT", 
00:22:57.032 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:57.032 "adrfam": "ipv4", 00:22:57.032 "trsvcid": "$NVMF_PORT", 00:22:57.032 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:57.032 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:57.032 "hdgst": ${hdgst:-false}, 00:22:57.032 "ddgst": ${ddgst:-false} 00:22:57.032 }, 00:22:57.032 "method": "bdev_nvme_attach_controller" 00:22:57.032 } 00:22:57.032 EOF 00:22:57.032 )") 00:22:57.032 15:18:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:57.032 [2024-07-25 15:18:49.181924] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:22:57.032 [2024-07-25 15:18:49.181977] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid323738 ] 00:22:57.032 15:18:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:57.032 15:18:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:57.032 { 00:22:57.032 "params": { 00:22:57.032 "name": "Nvme$subsystem", 00:22:57.032 "trtype": "$TEST_TRANSPORT", 00:22:57.032 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:57.032 "adrfam": "ipv4", 00:22:57.032 "trsvcid": "$NVMF_PORT", 00:22:57.032 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:57.032 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:57.032 "hdgst": ${hdgst:-false}, 00:22:57.032 "ddgst": ${ddgst:-false} 00:22:57.032 }, 00:22:57.032 "method": "bdev_nvme_attach_controller" 00:22:57.032 } 00:22:57.032 EOF 00:22:57.032 )") 00:22:57.032 15:18:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:57.032 15:18:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:57.032 15:18:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:57.032 { 00:22:57.032 "params": { 00:22:57.032 "name": "Nvme$subsystem", 00:22:57.032 "trtype": "$TEST_TRANSPORT", 00:22:57.032 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:57.032 "adrfam": "ipv4", 00:22:57.032 "trsvcid": "$NVMF_PORT", 00:22:57.032 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:57.032 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:57.032 "hdgst": ${hdgst:-false}, 00:22:57.032 "ddgst": ${ddgst:-false} 00:22:57.032 }, 00:22:57.032 "method": "bdev_nvme_attach_controller" 00:22:57.032 } 00:22:57.032 EOF 00:22:57.032 )") 00:22:57.032 15:18:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:57.032 15:18:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:57.032 15:18:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:57.032 { 00:22:57.032 "params": { 00:22:57.032 "name": "Nvme$subsystem", 00:22:57.032 "trtype": "$TEST_TRANSPORT", 00:22:57.032 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:57.032 "adrfam": "ipv4", 00:22:57.032 "trsvcid": "$NVMF_PORT", 00:22:57.032 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:57.032 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:57.032 "hdgst": ${hdgst:-false}, 00:22:57.032 "ddgst": ${ddgst:-false} 00:22:57.032 }, 00:22:57.032 "method": "bdev_nvme_attach_controller" 00:22:57.032 } 00:22:57.032 EOF 00:22:57.032 )") 00:22:57.032 15:18:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:57.033 15:18:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:57.033 EAL: No free 2048 kB hugepages reported on node 1 00:22:57.033 
15:18:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:57.033 { 00:22:57.033 "params": { 00:22:57.033 "name": "Nvme$subsystem", 00:22:57.033 "trtype": "$TEST_TRANSPORT", 00:22:57.033 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:57.033 "adrfam": "ipv4", 00:22:57.033 "trsvcid": "$NVMF_PORT", 00:22:57.033 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:57.033 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:57.033 "hdgst": ${hdgst:-false}, 00:22:57.033 "ddgst": ${ddgst:-false} 00:22:57.033 }, 00:22:57.033 "method": "bdev_nvme_attach_controller" 00:22:57.033 } 00:22:57.033 EOF 00:22:57.033 )") 00:22:57.033 15:18:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:57.033 15:18:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 00:22:57.033 15:18:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:22:57.033 15:18:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:57.033 "params": { 00:22:57.033 "name": "Nvme1", 00:22:57.033 "trtype": "tcp", 00:22:57.033 "traddr": "10.0.0.2", 00:22:57.033 "adrfam": "ipv4", 00:22:57.033 "trsvcid": "4420", 00:22:57.033 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:57.033 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:57.033 "hdgst": false, 00:22:57.033 "ddgst": false 00:22:57.033 }, 00:22:57.033 "method": "bdev_nvme_attach_controller" 00:22:57.033 },{ 00:22:57.033 "params": { 00:22:57.033 "name": "Nvme2", 00:22:57.033 "trtype": "tcp", 00:22:57.033 "traddr": "10.0.0.2", 00:22:57.033 "adrfam": "ipv4", 00:22:57.033 "trsvcid": "4420", 00:22:57.033 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:57.033 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:57.033 "hdgst": false, 00:22:57.033 "ddgst": false 00:22:57.033 }, 00:22:57.033 "method": "bdev_nvme_attach_controller" 00:22:57.033 
},{ 00:22:57.033 "params": { 00:22:57.033 "name": "Nvme3", 00:22:57.033 "trtype": "tcp", 00:22:57.033 "traddr": "10.0.0.2", 00:22:57.033 "adrfam": "ipv4", 00:22:57.033 "trsvcid": "4420", 00:22:57.033 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:57.033 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:57.033 "hdgst": false, 00:22:57.033 "ddgst": false 00:22:57.033 }, 00:22:57.033 "method": "bdev_nvme_attach_controller" 00:22:57.033 },{ 00:22:57.033 "params": { 00:22:57.033 "name": "Nvme4", 00:22:57.033 "trtype": "tcp", 00:22:57.033 "traddr": "10.0.0.2", 00:22:57.033 "adrfam": "ipv4", 00:22:57.033 "trsvcid": "4420", 00:22:57.033 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:57.033 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:57.033 "hdgst": false, 00:22:57.033 "ddgst": false 00:22:57.033 }, 00:22:57.033 "method": "bdev_nvme_attach_controller" 00:22:57.033 },{ 00:22:57.033 "params": { 00:22:57.033 "name": "Nvme5", 00:22:57.033 "trtype": "tcp", 00:22:57.033 "traddr": "10.0.0.2", 00:22:57.033 "adrfam": "ipv4", 00:22:57.033 "trsvcid": "4420", 00:22:57.033 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:57.033 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:57.033 "hdgst": false, 00:22:57.033 "ddgst": false 00:22:57.033 }, 00:22:57.033 "method": "bdev_nvme_attach_controller" 00:22:57.033 },{ 00:22:57.033 "params": { 00:22:57.033 "name": "Nvme6", 00:22:57.033 "trtype": "tcp", 00:22:57.033 "traddr": "10.0.0.2", 00:22:57.033 "adrfam": "ipv4", 00:22:57.033 "trsvcid": "4420", 00:22:57.033 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:57.033 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:57.033 "hdgst": false, 00:22:57.033 "ddgst": false 00:22:57.033 }, 00:22:57.033 "method": "bdev_nvme_attach_controller" 00:22:57.033 },{ 00:22:57.033 "params": { 00:22:57.033 "name": "Nvme7", 00:22:57.033 "trtype": "tcp", 00:22:57.033 "traddr": "10.0.0.2", 00:22:57.033 "adrfam": "ipv4", 00:22:57.033 "trsvcid": "4420", 00:22:57.033 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:57.033 "hostnqn": 
"nqn.2016-06.io.spdk:host7", 00:22:57.033 "hdgst": false, 00:22:57.033 "ddgst": false 00:22:57.033 }, 00:22:57.033 "method": "bdev_nvme_attach_controller" 00:22:57.033 },{ 00:22:57.033 "params": { 00:22:57.033 "name": "Nvme8", 00:22:57.033 "trtype": "tcp", 00:22:57.033 "traddr": "10.0.0.2", 00:22:57.033 "adrfam": "ipv4", 00:22:57.033 "trsvcid": "4420", 00:22:57.033 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:57.033 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:57.033 "hdgst": false, 00:22:57.033 "ddgst": false 00:22:57.033 }, 00:22:57.033 "method": "bdev_nvme_attach_controller" 00:22:57.033 },{ 00:22:57.033 "params": { 00:22:57.033 "name": "Nvme9", 00:22:57.033 "trtype": "tcp", 00:22:57.033 "traddr": "10.0.0.2", 00:22:57.033 "adrfam": "ipv4", 00:22:57.033 "trsvcid": "4420", 00:22:57.033 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:57.033 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:57.033 "hdgst": false, 00:22:57.033 "ddgst": false 00:22:57.033 }, 00:22:57.033 "method": "bdev_nvme_attach_controller" 00:22:57.033 },{ 00:22:57.033 "params": { 00:22:57.033 "name": "Nvme10", 00:22:57.033 "trtype": "tcp", 00:22:57.033 "traddr": "10.0.0.2", 00:22:57.033 "adrfam": "ipv4", 00:22:57.033 "trsvcid": "4420", 00:22:57.033 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:57.033 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:57.033 "hdgst": false, 00:22:57.033 "ddgst": false 00:22:57.033 }, 00:22:57.033 "method": "bdev_nvme_attach_controller" 00:22:57.033 }' 00:22:57.295 [2024-07-25 15:18:49.241880] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:57.295 [2024-07-25 15:18:49.306574] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:58.680 Running I/O for 10 seconds... 
00:22:58.941 15:18:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:58.941 15:18:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:22:58.941 15:18:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:58.941 15:18:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.941 15:18:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:58.941 15:18:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.941 15:18:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:58.941 15:18:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:22:58.941 15:18:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:22:58.941 15:18:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:22:58.941 15:18:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:22:58.941 15:18:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:22:58.941 15:18:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:22:58.941 15:18:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:22:58.941 15:18:51 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:58.941 15:18:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:22:58.941 15:18:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.941 15:18:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:58.941 15:18:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.941 15:18:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3 00:22:58.941 15:18:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:22:58.941 15:18:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:22:59.202 15:18:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:22:59.202 15:18:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:22:59.202 15:18:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:59.202 15:18:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:22:59.202 15:18:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.202 15:18:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:59.202 15:18:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:22:59.463 15:18:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=67 00:22:59.463 15:18:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:22:59.463 15:18:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:22:59.739 15:18:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:22:59.739 15:18:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:22:59.739 15:18:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:59.739 15:18:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:22:59.739 15:18:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.739 15:18:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:59.739 15:18:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.739 15:18:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=131 00:22:59.739 15:18:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:22:59.739 15:18:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:22:59.739 15:18:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:22:59.739 15:18:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:22:59.739 15:18:51 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 323455 00:22:59.739 15:18:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # '[' -z 323455 ']' 00:22:59.739 15:18:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # kill -0 323455 00:22:59.739 15:18:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # uname 00:22:59.739 15:18:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:59.739 15:18:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 323455 00:22:59.739 15:18:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:59.739 15:18:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:59.739 15:18:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 323455' 00:22:59.739 killing process with pid 323455 00:22:59.739 15:18:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@969 -- # kill 323455 00:22:59.739 15:18:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@974 -- # wait 323455 00:22:59.739 [2024-07-25 15:18:51.762372] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f1fe0 is same with the state(5) to be set 00:22:59.739 [2024-07-25 15:18:51.762419] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f1fe0 is same with the state(5) to be set 00:22:59.739 [2024-07-25 15:18:51.762425] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x20f1fe0 is same with the state(5) to be set 00:22:59.739 [... the same tcp.c:1653:nvmf_tcp_qpair_set_recv_state *ERROR* line repeats with successive timestamps from 15:18:51.762430 through 15:18:51.764099, roughly a dozen times for tqpair=0x20f1fe0 and several dozen more for tqpair=0x20f24a0 ...]
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f24a0 is same with the state(5) to be set 00:22:59.740 [2024-07-25 15:18:51.764104] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f24a0 is same with the state(5) to be set 00:22:59.740 [2024-07-25 15:18:51.765227] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21a90 is same with the state(5) to be set 00:22:59.740 [2024-07-25 15:18:51.765252] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21a90 is same with the state(5) to be set 00:22:59.740 [2024-07-25 15:18:51.765258] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21a90 is same with the state(5) to be set 00:22:59.740 [2024-07-25 15:18:51.765263] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21a90 is same with the state(5) to be set 00:22:59.740 [2024-07-25 15:18:51.765268] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21a90 is same with the state(5) to be set 00:22:59.740 [2024-07-25 15:18:51.765273] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21a90 is same with the state(5) to be set 00:22:59.740 [2024-07-25 15:18:51.765277] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21a90 is same with the state(5) to be set 00:22:59.740 [2024-07-25 15:18:51.765281] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21a90 is same with the state(5) to be set 00:22:59.740 [2024-07-25 15:18:51.765286] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21a90 is same with the state(5) to be set 00:22:59.740 [2024-07-25 15:18:51.765290] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21a90 is same with the state(5) to be set 00:22:59.740 [2024-07-25 15:18:51.765298] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21a90 is same with the state(5) to be set 00:22:59.740 [2024-07-25 15:18:51.765303] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21a90 is same with the state(5) to be set 00:22:59.740 [2024-07-25 15:18:51.765308] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21a90 is same with the state(5) to be set 00:22:59.740 [2024-07-25 15:18:51.765312] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21a90 is same with the state(5) to be set 00:22:59.740 [2024-07-25 15:18:51.765317] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21a90 is same with the state(5) to be set 00:22:59.740 [2024-07-25 15:18:51.765321] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21a90 is same with the state(5) to be set 00:22:59.740 [2024-07-25 15:18:51.765326] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21a90 is same with the state(5) to be set 00:22:59.740 [2024-07-25 15:18:51.765331] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21a90 is same with the state(5) to be set 00:22:59.740 [2024-07-25 15:18:51.765335] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21a90 is same with the state(5) to be set 00:22:59.740 [2024-07-25 15:18:51.765340] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21a90 is same with the state(5) to be set 00:22:59.740 [2024-07-25 15:18:51.765344] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21a90 is same with the state(5) to be set 00:22:59.740 [2024-07-25 15:18:51.765349] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21a90 is same with the state(5) to be set 00:22:59.740 [2024-07-25 15:18:51.765354] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21a90 is same with the state(5) to be set 00:22:59.740 [2024-07-25 15:18:51.765358] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21a90 is same with the state(5) to be set 00:22:59.740 [2024-07-25 15:18:51.765363] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21a90 is same with the state(5) to be set 00:22:59.740 [2024-07-25 15:18:51.765367] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21a90 is same with the state(5) to be set 00:22:59.740 [2024-07-25 15:18:51.765372] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21a90 is same with the state(5) to be set 00:22:59.740 [2024-07-25 15:18:51.765376] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21a90 is same with the state(5) to be set 00:22:59.740 [2024-07-25 15:18:51.765380] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21a90 is same with the state(5) to be set 00:22:59.740 [2024-07-25 15:18:51.765385] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21a90 is same with the state(5) to be set 00:22:59.740 [2024-07-25 15:18:51.765389] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21a90 is same with the state(5) to be set 00:22:59.740 [2024-07-25 15:18:51.765394] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21a90 is same with the state(5) to be set 00:22:59.740 [2024-07-25 15:18:51.765399] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21a90 is same with the state(5) to be set 00:22:59.740 [2024-07-25 15:18:51.765404] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21a90 is same with the state(5) to be set 00:22:59.740 [2024-07-25 15:18:51.765408] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21a90 is same with the state(5) to be set 00:22:59.740 [2024-07-25 15:18:51.765413] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21a90 is same with the state(5) to be set 00:22:59.740 [2024-07-25 15:18:51.765417] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21a90 is same with the state(5) to be set 00:22:59.740 [2024-07-25 15:18:51.765423] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21a90 is same with the state(5) to be set 00:22:59.741 [2024-07-25 15:18:51.765427] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21a90 is same with the state(5) to be set 00:22:59.741 [2024-07-25 15:18:51.765432] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21a90 is same with the state(5) to be set 00:22:59.741 [2024-07-25 15:18:51.765436] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21a90 is same with the state(5) to be set 00:22:59.741 [2024-07-25 15:18:51.765441] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21a90 is same with the state(5) to be set 00:22:59.741 [2024-07-25 15:18:51.765446] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21a90 is same with the state(5) to be set 00:22:59.741 [2024-07-25 15:18:51.765451] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21a90 is same with the state(5) to be set 00:22:59.741 [2024-07-25 15:18:51.765455] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21a90 is same with the state(5) to be set 00:22:59.741 [2024-07-25 15:18:51.765460] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21a90 is same with the state(5) to be set 00:22:59.741 [2024-07-25 15:18:51.765464] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21a90 is same with the state(5) to be set 00:22:59.741 [2024-07-25 15:18:51.765469] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21a90 is same with the state(5) to be set 00:22:59.741 [2024-07-25 15:18:51.765473] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21a90 is same with the state(5) to be set 00:22:59.741 [2024-07-25 15:18:51.765479] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21a90 is same with the state(5) to be set 00:22:59.741 [2024-07-25 15:18:51.765483] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21a90 is same with the state(5) to be set 00:22:59.741 [2024-07-25 15:18:51.765487] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21a90 is same with the state(5) to be set 00:22:59.741 [2024-07-25 15:18:51.765493] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21a90 is same with the state(5) to be set 00:22:59.741 [2024-07-25 15:18:51.765498] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21a90 is same with the state(5) to be set 00:22:59.741 [2024-07-25 15:18:51.765503] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21a90 is same with the state(5) to be set 00:22:59.741 [2024-07-25 15:18:51.765507] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21a90 is same with the state(5) to be set 00:22:59.741 [2024-07-25 15:18:51.765512] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21a90 is same with the state(5) to be set 00:22:59.741 [2024-07-25 15:18:51.765516] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21a90 is same with the state(5) to be set 00:22:59.741 [2024-07-25 15:18:51.765520] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21a90 is same with the state(5) to be set 00:22:59.741 [2024-07-25 15:18:51.765525] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21a90 is same with the state(5) to be set 00:22:59.741 [2024-07-25 15:18:51.765530] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21a90 is same with the state(5) to be set 00:22:59.741 [2024-07-25 15:18:51.765534] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21a90 is same with the state(5) to be set 00:22:59.741 [2024-07-25 15:18:51.765538] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21a90 is same with the state(5) to be set 00:22:59.741 [2024-07-25 15:18:51.766371] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21f50 is same with the state(5) to be set 00:22:59.741 [2024-07-25 15:18:51.766397] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21f50 is same with the state(5) to be set 00:22:59.741 [2024-07-25 15:18:51.766403] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21f50 is same with the state(5) to be set 00:22:59.741 [2024-07-25 15:18:51.766408] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21f50 is same with the state(5) to be set 00:22:59.741 [2024-07-25 15:18:51.766413] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21f50 is same with the state(5) to be set 00:22:59.741 [2024-07-25 15:18:51.766417] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21f50 is same with the state(5) to be set 00:22:59.741 [2024-07-25 15:18:51.766422] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21f50 is same with the state(5) to be set 00:22:59.741 [2024-07-25 15:18:51.766427] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21f50 is same with the state(5) to be set 00:22:59.741 [2024-07-25 15:18:51.766431] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21f50 is same with the state(5) to be set 00:22:59.741 [2024-07-25 15:18:51.766436] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21f50 is same with the state(5) to be set 00:22:59.741 [2024-07-25 15:18:51.766440] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21f50 is same with the state(5) to be set 00:22:59.741 [2024-07-25 15:18:51.766445] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21f50 is same with the state(5) to be set 00:22:59.741 [2024-07-25 15:18:51.766449] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21f50 is same with the state(5) to be set 00:22:59.741 [2024-07-25 15:18:51.766454] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21f50 is same with the state(5) to be set 00:22:59.741 [2024-07-25 15:18:51.766459] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21f50 is same with the state(5) to be set 00:22:59.741 [2024-07-25 15:18:51.766464] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21f50 is same with the state(5) to be set 00:22:59.741 [2024-07-25 15:18:51.766469] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21f50 is same with the state(5) to be set 00:22:59.741 [2024-07-25 15:18:51.766473] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21f50 is same with the state(5) to be set 00:22:59.741 [2024-07-25 15:18:51.766478] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21f50 is same with the state(5) to be set 00:22:59.741 [2024-07-25 15:18:51.766483] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21f50 is same with the state(5) to be set 00:22:59.741 [2024-07-25 15:18:51.766487] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21f50 is same with the state(5) to be set 00:22:59.741 [2024-07-25 15:18:51.766491] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21f50 is same with the state(5) to be set 00:22:59.741 [2024-07-25 15:18:51.766496] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21f50 is same with the state(5) to be set 00:22:59.741 [2024-07-25 15:18:51.766501] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21f50 is same with the state(5) to be set 00:22:59.741 [2024-07-25 15:18:51.766506] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21f50 is same with the state(5) to be set 00:22:59.741 [2024-07-25 15:18:51.766510] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21f50 is same with the state(5) to be set 00:22:59.741 [2024-07-25 15:18:51.766515] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21f50 is same with the state(5) to be set 00:22:59.741 [2024-07-25 15:18:51.766519] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21f50 is same with the state(5) to be set 00:22:59.741 [2024-07-25 15:18:51.766525] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21f50 is same with the state(5) to be set 00:22:59.741 [2024-07-25 15:18:51.766529] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21f50 is same with the state(5) to be set 00:22:59.741 [2024-07-25 15:18:51.766534] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21f50 is same with the state(5) to be set 00:22:59.741 [2024-07-25 15:18:51.766538] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21f50 is same with the state(5) to be set 00:22:59.741 [2024-07-25 15:18:51.766543] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21f50 is same with the state(5) to be set 00:22:59.741 [2024-07-25 15:18:51.766547] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21f50 is same with the state(5) to be set 00:22:59.741 [2024-07-25 15:18:51.766552] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21f50 is same with the state(5) to be set 00:22:59.741 [2024-07-25 15:18:51.766557] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21f50 is same with the state(5) to be set 00:22:59.741 [2024-07-25 15:18:51.766561] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21f50 is same with the state(5) to be set 00:22:59.741 [2024-07-25 15:18:51.766566] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21f50 is same with the state(5) to be set 00:22:59.741 [2024-07-25 15:18:51.766571] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21f50 is same with the state(5) to be set 00:22:59.741 [2024-07-25 15:18:51.766575] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21f50 is same with the state(5) to be set 00:22:59.741 [2024-07-25 15:18:51.766580] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21f50 is same with the state(5) to be set 00:22:59.741 [2024-07-25 15:18:51.766584] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21f50 is same with the state(5) to be set 00:22:59.741 [2024-07-25 15:18:51.766589] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21f50 is same with the state(5) to be set 00:22:59.741 [2024-07-25 15:18:51.766593] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21f50 is same with the state(5) to be set 00:22:59.741 [2024-07-25 15:18:51.766598] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21f50 is same with the state(5) to be set 00:22:59.741 [2024-07-25 15:18:51.766603] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21f50 is same with the state(5) to be set 00:22:59.741 [2024-07-25 15:18:51.766608] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21f50 is same with the state(5) to be set 00:22:59.741 [2024-07-25 15:18:51.766613] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21f50 is same with the state(5) to be set 00:22:59.741 [2024-07-25 15:18:51.766617] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21f50 is same with the state(5) to be set 00:22:59.741 [2024-07-25 15:18:51.766621] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21f50 is same with the state(5) to be set 00:22:59.741 [2024-07-25 15:18:51.766625] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21f50 is same with the state(5) to be set 00:22:59.741 [2024-07-25 15:18:51.766630] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21f50 is same with the state(5) to be set 00:22:59.741 [2024-07-25 15:18:51.766634] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21f50 is same with the state(5) to be set 00:22:59.742 [2024-07-25 15:18:51.766638] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21f50 is same with the state(5) to be set 00:22:59.742 [2024-07-25 15:18:51.766643] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21f50 is same with the state(5) to be set 00:22:59.742 [2024-07-25 15:18:51.766649] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21f50 is same with the state(5) to be set 00:22:59.742 [2024-07-25 15:18:51.766654] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21f50 is same with the state(5) to be set 00:22:59.742 [2024-07-25 15:18:51.766658] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21f50 is same with the state(5) to be set 00:22:59.742 [2024-07-25 15:18:51.766663] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21f50 is same with the state(5) to be set 00:22:59.742 [2024-07-25 15:18:51.766667] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21f50 is same with the state(5) to be set 00:22:59.742 [2024-07-25 15:18:51.766671] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21f50 is same with the state(5) to be set 00:22:59.742 [2024-07-25 15:18:51.766676] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21f50 is same with the state(5) to be set 00:22:59.742 [2024-07-25 15:18:51.766680] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21f50 is same with the state(5) to be set 00:22:59.742 [2024-07-25 15:18:51.767149] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f22410 is same with the state(5) to be set 00:22:59.742 [2024-07-25 15:18:51.767165] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f22410 is same with the state(5) to be set 00:22:59.742 [2024-07-25 15:18:51.767170] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f22410 is same with the state(5) to be set 00:22:59.742 [2024-07-25 15:18:51.767175] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f22410 is same with the state(5) to be set 00:22:59.742 [2024-07-25 15:18:51.767180] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f22410 is same with the state(5) to be set 00:22:59.742 [2024-07-25 15:18:51.767185] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f22410 is same with the state(5) to be set 00:22:59.742 [2024-07-25 15:18:51.767190] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f22410 is same with the state(5) to be set 00:22:59.742 [2024-07-25 15:18:51.767195] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f22410 is same with the state(5) to be set 00:22:59.742 [2024-07-25 15:18:51.767203] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f22410 is same with the state(5) to be set 00:22:59.742 [2024-07-25 15:18:51.767208] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f22410 is same with the state(5) to be set 00:22:59.742 [2024-07-25 15:18:51.767212] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f22410 is same with the state(5) to be set 00:22:59.742 [2024-07-25 15:18:51.767217] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f22410 is same with the state(5) to be set 00:22:59.742 [2024-07-25 15:18:51.767222] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f22410 is same with the state(5) to be set 00:22:59.742 [2024-07-25 15:18:51.767226] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f22410 is same with the state(5) to be set 00:22:59.742 [2024-07-25 15:18:51.767231] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f22410 is same with the state(5) to be set 00:22:59.742 [2024-07-25 15:18:51.767236] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f22410 is same with the state(5) to be set 00:22:59.742 [2024-07-25 15:18:51.767240] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f22410 is same with the state(5) to be set 00:22:59.742 [2024-07-25 15:18:51.767245] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f22410 is same with the state(5) to be set 00:22:59.742 [2024-07-25 15:18:51.767250] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f22410 is same with the state(5) to be set 00:22:59.742 [2024-07-25 15:18:51.767258] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f22410 is same with the state(5) to be set 00:22:59.742 [2024-07-25 15:18:51.767262] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f22410 is same with the state(5) to be set 00:22:59.742 [2024-07-25 15:18:51.767267] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f22410 is same with the state(5) to be set 00:22:59.742 [2024-07-25 15:18:51.767271] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f22410 is same with the state(5) to be set 00:22:59.742 [2024-07-25 15:18:51.767275] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f22410 is same with the state(5) to be set 00:22:59.742 [2024-07-25 15:18:51.767280] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f22410 is same with the state(5) to be set 00:22:59.742 [2024-07-25 15:18:51.767285] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f22410 is same with the state(5) to be set 00:22:59.742 [2024-07-25 15:18:51.767289] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f22410 is same with the state(5) to be set 00:22:59.742 [2024-07-25 15:18:51.767293] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f22410 is same with the state(5) to be set 00:22:59.742 [2024-07-25 15:18:51.767298] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f22410 is same with the state(5) to be set
(message repeated for tqpair=0x1f22410 through 15:18:51.767455)
00:22:59.742 [2024-07-25 15:18:51.768514] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f22d90 is same with the state(5) to be set
(message repeated for tqpair=0x1f22d90 through 15:18:51.768819)
00:22:59.743 [2024-07-25 15:18:51.769711] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f23730 is same with the state(5) to be set
(message repeated for tqpair=0x1f23730 through 15:18:51.770005)
00:22:59.744 [2024-07-25 15:18:51.774639] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:22:59.744 [2024-07-25 15:18:51.774675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:59.744 [2024-07-25 15:18:51.774685] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:22:59.744 [2024-07-25 15:18:51.774694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:59.744 [2024-07-25 15:18:51.774703] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:22:59.744 [2024-07-25 15:18:51.774711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:59.744 [2024-07-25 15:18:51.774719] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:22:59.744 [2024-07-25 15:18:51.774727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:59.744 [2024-07-25 15:18:51.774735] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1836340 is same with the state(5) to be set
(from 15:18:51.774767 to 15:18:51.775476 the same pattern repeats for tqpair=0x1eace70, 0x1d10cb0, 0x1eadbd0, 0x1eb0250, 0x1cf1e30, 0x1d074d0, 0x1ea2f80, and 0x1d22270: four ASYNC EVENT REQUEST (0c) admin commands, cid 0-3, each ABORTED - SQ DELETION (00/08), followed by the nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state recv state *ERROR* for that tqpair)
00:22:59.745 [2024-07-25 15:18:51.775499] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:22:59.745 [2024-07-25 15:18:51.775508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:59.745 [2024-07-25 15:18:51.775516] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:22:59.745 [2024-07-25 15:18:51.775523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:59.745 [2024-07-25 15:18:51.775531] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:22:59.745 [2024-07-25 15:18:51.775538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:59.745 [2024-07-25 15:18:51.775546] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0
cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:59.745 [2024-07-25 15:18:51.775554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.745 [2024-07-25 15:18:51.775561] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce45d0 is same with the state(5) to be set 00:22:59.745 [2024-07-25 15:18:51.776097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.745 [2024-07-25 15:18:51.776118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.745 [2024-07-25 15:18:51.776134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.745 [2024-07-25 15:18:51.776142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.745 [2024-07-25 15:18:51.776153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.745 [2024-07-25 15:18:51.776161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.745 [2024-07-25 15:18:51.776174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.745 [2024-07-25 15:18:51.776182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.745 [2024-07-25 15:18:51.776191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.745 
[2024-07-25 15:18:51.776199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.745 [2024-07-25 15:18:51.776216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.746 [2024-07-25 15:18:51.776224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.746 [2024-07-25 15:18:51.776234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.746 [2024-07-25 15:18:51.776241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.746 [2024-07-25 15:18:51.776251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.746 [2024-07-25 15:18:51.776259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.746 [2024-07-25 15:18:51.776269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.746 [2024-07-25 15:18:51.776276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.746 [2024-07-25 15:18:51.776286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.746 [2024-07-25 15:18:51.776293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.746 [2024-07-25 15:18:51.776303] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.746 [2024-07-25 15:18:51.776310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.746 [2024-07-25 15:18:51.776320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.746 [2024-07-25 15:18:51.776327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.746 [2024-07-25 15:18:51.776336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.746 [2024-07-25 15:18:51.776344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.746 [2024-07-25 15:18:51.776354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.746 [2024-07-25 15:18:51.776361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.746 [2024-07-25 15:18:51.776370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.746 [2024-07-25 15:18:51.776378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.746 [2024-07-25 15:18:51.776387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.746 [2024-07-25 15:18:51.776396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.746 [2024-07-25 15:18:51.776406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.746 [2024-07-25 15:18:51.776413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.746 [2024-07-25 15:18:51.776423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.746 [2024-07-25 15:18:51.776431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.746 [2024-07-25 15:18:51.776440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.746 [2024-07-25 15:18:51.776448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.746 [2024-07-25 15:18:51.776457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.746 [2024-07-25 15:18:51.776465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.746 [2024-07-25 15:18:51.776475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.746 [2024-07-25 15:18:51.776482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.746 [2024-07-25 15:18:51.776492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:59.746 [2024-07-25 15:18:51.776500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.746 [2024-07-25 15:18:51.776509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.746 [2024-07-25 15:18:51.776517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.746 [2024-07-25 15:18:51.776526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.746 [2024-07-25 15:18:51.776534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.746 [2024-07-25 15:18:51.776543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.746 [2024-07-25 15:18:51.776551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.746 [2024-07-25 15:18:51.776560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.746 [2024-07-25 15:18:51.776568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.746 [2024-07-25 15:18:51.776577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.746 [2024-07-25 15:18:51.776585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.746 [2024-07-25 15:18:51.776594] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.746 [2024-07-25 15:18:51.776602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.746 [2024-07-25 15:18:51.776613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.746 [2024-07-25 15:18:51.776620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.746 [2024-07-25 15:18:51.776629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.746 [2024-07-25 15:18:51.776637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.746 [2024-07-25 15:18:51.776646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.746 [2024-07-25 15:18:51.776654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.746 [2024-07-25 15:18:51.776664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.746 [2024-07-25 15:18:51.776671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.746 [2024-07-25 15:18:51.776680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.746 [2024-07-25 15:18:51.776688] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.746 [2024-07-25 15:18:51.776697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.746 [2024-07-25 15:18:51.776705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.746 [2024-07-25 15:18:51.776714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.746 [2024-07-25 15:18:51.776721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.746 [2024-07-25 15:18:51.776731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.746 [2024-07-25 15:18:51.776738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.746 [2024-07-25 15:18:51.776747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.746 [2024-07-25 15:18:51.776754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.746 [2024-07-25 15:18:51.776764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.746 [2024-07-25 15:18:51.776771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.746 [2024-07-25 15:18:51.776780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.746 [2024-07-25 15:18:51.776787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.746 [2024-07-25 15:18:51.776797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.746 [2024-07-25 15:18:51.776805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.746 [2024-07-25 15:18:51.776814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.746 [2024-07-25 15:18:51.776822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.746 [2024-07-25 15:18:51.776833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.746 [2024-07-25 15:18:51.776840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.746 [2024-07-25 15:18:51.776850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.746 [2024-07-25 15:18:51.776857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.747 [2024-07-25 15:18:51.776866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.747 [2024-07-25 15:18:51.776874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.747 [2024-07-25 
15:18:51.776883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.747 [2024-07-25 15:18:51.776891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.747 [2024-07-25 15:18:51.776900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.747 [2024-07-25 15:18:51.776907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.747 [2024-07-25 15:18:51.776917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.747 [2024-07-25 15:18:51.776924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.747 [2024-07-25 15:18:51.776933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.747 [2024-07-25 15:18:51.776940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.747 [2024-07-25 15:18:51.776950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.747 [2024-07-25 15:18:51.776957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.747 [2024-07-25 15:18:51.776966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.747 [2024-07-25 15:18:51.776974] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.747 [2024-07-25 15:18:51.776984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.747 [2024-07-25 15:18:51.776991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.747 [2024-07-25 15:18:51.777000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.747 [2024-07-25 15:18:51.777008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.747 [2024-07-25 15:18:51.777017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.747 [2024-07-25 15:18:51.777025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.747 [2024-07-25 15:18:51.777036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.747 [2024-07-25 15:18:51.777043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.747 [2024-07-25 15:18:51.777053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.747 [2024-07-25 15:18:51.777060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.747 [2024-07-25 15:18:51.777069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 
nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.747 [2024-07-25 15:18:51.777077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.747 [2024-07-25 15:18:51.777086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.747 [2024-07-25 15:18:51.777094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.747 [2024-07-25 15:18:51.777103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.747 [2024-07-25 15:18:51.777110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.747 [2024-07-25 15:18:51.777121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.747 [2024-07-25 15:18:51.777129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.747 [2024-07-25 15:18:51.777138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.747 [2024-07-25 15:18:51.777146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.747 [2024-07-25 15:18:51.777155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.747 [2024-07-25 15:18:51.777162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:59.747 [2024-07-25 15:18:51.777171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.747 [2024-07-25 15:18:51.777179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.747 [2024-07-25 15:18:51.777188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.747 [2024-07-25 15:18:51.777196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.747 [2024-07-25 15:18:51.777208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.747 [2024-07-25 15:18:51.777216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.747 [2024-07-25 15:18:51.777271] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1ce0420 was disconnected and freed. reset controller. 
00:22:59.747 [2024-07-25 15:18:51.777396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.747 [2024-07-25 15:18:51.777406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.747 [2024-07-25 15:18:51.777423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.747 [2024-07-25 15:18:51.777431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.747 [2024-07-25 15:18:51.777441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.747 [2024-07-25 15:18:51.777449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.747 [2024-07-25 15:18:51.777459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.747 [2024-07-25 15:18:51.777466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.747 [2024-07-25 15:18:51.777476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.747 [2024-07-25 15:18:51.777483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.747 [2024-07-25 15:18:51.777492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.747 [2024-07-25 15:18:51.777500] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.747 [2024-07-25 15:18:51.777509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.747 [2024-07-25 15:18:51.777516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.747 [2024-07-25 15:18:51.777525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.747 [2024-07-25 15:18:51.777532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.747 [2024-07-25 15:18:51.777542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.747 [2024-07-25 15:18:51.777550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.747 [2024-07-25 15:18:51.777559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.747 [2024-07-25 15:18:51.777567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.747 [2024-07-25 15:18:51.777577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.747 [2024-07-25 15:18:51.777584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.747 [2024-07-25 15:18:51.777594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.747 [2024-07-25 15:18:51.777601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.747 [2024-07-25 15:18:51.777611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.747 [2024-07-25 15:18:51.777618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.747 [2024-07-25 15:18:51.777627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.747 [2024-07-25 15:18:51.777636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.747 [2024-07-25 15:18:51.777646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.747 [2024-07-25 15:18:51.777653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.747 [2024-07-25 15:18:51.777662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.747 [2024-07-25 15:18:51.777669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.747 [2024-07-25 15:18:51.777679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.748 [2024-07-25 15:18:51.777686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.748 [2024-07-25 15:18:51.777696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.748 [2024-07-25 15:18:51.777703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.748 [2024-07-25 15:18:51.777713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.748 [2024-07-25 15:18:51.777720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.748 [2024-07-25 15:18:51.777731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.748 [2024-07-25 15:18:51.777738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.748 [2024-07-25 15:18:51.777748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.748 [2024-07-25 15:18:51.777755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.748 [2024-07-25 15:18:51.777764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.748 [2024-07-25 15:18:51.777772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.748 [2024-07-25 15:18:51.777782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.748 
[2024-07-25 15:18:51.777789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.748 [2024-07-25 15:18:51.777798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.748 [2024-07-25 15:18:51.777805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.748 [2024-07-25 15:18:51.777815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.748 [2024-07-25 15:18:51.777822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.748 [2024-07-25 15:18:51.777832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.748 [2024-07-25 15:18:51.777839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.748 [2024-07-25 15:18:51.777849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.748 [2024-07-25 15:18:51.777857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.748 [2024-07-25 15:18:51.777866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.748 [2024-07-25 15:18:51.777873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.748 [2024-07-25 15:18:51.777883] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.748 [2024-07-25 15:18:51.777891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.748 [2024-07-25 15:18:51.777900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.748 [2024-07-25 15:18:51.777908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.748 [2024-07-25 15:18:51.787909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.748 [2024-07-25 15:18:51.787947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.748 [2024-07-25 15:18:51.787960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.748 [2024-07-25 15:18:51.787969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.748 [2024-07-25 15:18:51.787979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.748 [2024-07-25 15:18:51.787987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.748 [2024-07-25 15:18:51.787997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.748 [2024-07-25 15:18:51.788005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.748 [2024-07-25 15:18:51.788015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.748 [2024-07-25 15:18:51.788023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.748 [2024-07-25 15:18:51.788033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.748 [2024-07-25 15:18:51.788041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.748 [2024-07-25 15:18:51.788051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.748 [2024-07-25 15:18:51.788060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.748 [2024-07-25 15:18:51.788070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.748 [2024-07-25 15:18:51.788077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.748 [2024-07-25 15:18:51.788087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.748 [2024-07-25 15:18:51.788101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.748 [2024-07-25 15:18:51.788111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:22:59.748 [2024-07-25 15:18:51.788119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.748 [2024-07-25 15:18:51.788128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.748 [2024-07-25 15:18:51.788136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.748 [2024-07-25 15:18:51.788146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.748 [2024-07-25 15:18:51.788154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.748 [2024-07-25 15:18:51.788163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.748 [2024-07-25 15:18:51.788171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.748 [2024-07-25 15:18:51.788180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.748 [2024-07-25 15:18:51.788188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.748 [2024-07-25 15:18:51.788198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.748 [2024-07-25 15:18:51.788237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.748 [2024-07-25 15:18:51.788247] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.748 [2024-07-25 15:18:51.788255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.748 [2024-07-25 15:18:51.788265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.748 [2024-07-25 15:18:51.788272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.748 [2024-07-25 15:18:51.788282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.748 [2024-07-25 15:18:51.788290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.748 [2024-07-25 15:18:51.788300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.748 [2024-07-25 15:18:51.788308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.749 [2024-07-25 15:18:51.788318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.749 [2024-07-25 15:18:51.788326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.749 [2024-07-25 15:18:51.788336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.749 [2024-07-25 15:18:51.788345] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.749 [2024-07-25 15:18:51.788358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.749 [2024-07-25 15:18:51.788365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.749 [2024-07-25 15:18:51.788375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.749 [2024-07-25 15:18:51.788383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.749 [2024-07-25 15:18:51.788393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.749 [2024-07-25 15:18:51.788400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.749 [2024-07-25 15:18:51.788410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.749 [2024-07-25 15:18:51.788418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.749 [2024-07-25 15:18:51.788428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.749 [2024-07-25 15:18:51.788436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.749 [2024-07-25 15:18:51.788446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.749 [2024-07-25 15:18:51.788453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.749 [2024-07-25 15:18:51.788463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.749 [2024-07-25 15:18:51.788471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.749 [2024-07-25 15:18:51.788480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.749 [2024-07-25 15:18:51.788488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.749 [2024-07-25 15:18:51.788498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.749 [2024-07-25 15:18:51.788506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.749 [2024-07-25 15:18:51.788515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.749 [2024-07-25 15:18:51.788523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.749 [2024-07-25 15:18:51.788532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.749 [2024-07-25 15:18:51.788539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.749 [2024-07-25 
15:18:51.788549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.749 [2024-07-25 15:18:51.788556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.749 [2024-07-25 15:18:51.788566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.749 [2024-07-25 15:18:51.788576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.749 [2024-07-25 15:18:51.788655] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x27b97f0 was disconnected and freed. reset controller. 00:22:59.749 [2024-07-25 15:18:51.819081] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1836340 (9): Bad file descriptor 00:22:59.749 [2024-07-25 15:18:51.819126] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eace70 (9): Bad file descriptor 00:22:59.749 [2024-07-25 15:18:51.819140] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d10cb0 (9): Bad file descriptor 00:22:59.749 [2024-07-25 15:18:51.819154] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eadbd0 (9): Bad file descriptor 00:22:59.749 [2024-07-25 15:18:51.819166] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0250 (9): Bad file descriptor 00:22:59.749 [2024-07-25 15:18:51.819182] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf1e30 (9): Bad file descriptor 00:22:59.749 [2024-07-25 15:18:51.819197] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d074d0 (9): Bad file descriptor 00:22:59.749 [2024-07-25 
15:18:51.819217] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ea2f80 (9): Bad file descriptor 00:22:59.749 [2024-07-25 15:18:51.819235] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d22270 (9): Bad file descriptor 00:22:59.749 [2024-07-25 15:18:51.819255] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce45d0 (9): Bad file descriptor 00:22:59.749 [2024-07-25 15:18:51.819325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.749 [2024-07-25 15:18:51.819336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.749 [2024-07-25 15:18:51.819352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.749 [2024-07-25 15:18:51.819360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.749 [2024-07-25 15:18:51.819370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.749 [2024-07-25 15:18:51.819377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.749 [2024-07-25 15:18:51.819387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.749 [2024-07-25 15:18:51.819394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.749 [2024-07-25 15:18:51.819403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 
lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.749 [2024-07-25 15:18:51.819412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.749 [2024-07-25 15:18:51.819421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.749 [2024-07-25 15:18:51.819428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.749 [2024-07-25 15:18:51.819438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.749 [2024-07-25 15:18:51.819445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.749 [2024-07-25 15:18:51.819459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.749 [2024-07-25 15:18:51.819467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.749 [2024-07-25 15:18:51.819477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.749 [2024-07-25 15:18:51.819484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.749 [2024-07-25 15:18:51.819494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.749 [2024-07-25 15:18:51.819501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:59.749 [2024-07-25 15:18:51.819510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.749 [2024-07-25 15:18:51.819517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.749 [2024-07-25 15:18:51.819527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.749 [2024-07-25 15:18:51.819535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.749 [2024-07-25 15:18:51.819544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.749 [2024-07-25 15:18:51.819552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.749 [2024-07-25 15:18:51.819561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.749 [2024-07-25 15:18:51.819568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.749 [2024-07-25 15:18:51.819578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.749 [2024-07-25 15:18:51.819586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.749 [2024-07-25 15:18:51.819596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.749 [2024-07-25 15:18:51.819603] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.749 [2024-07-25 15:18:51.819613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.749 [2024-07-25 15:18:51.819620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.749 [2024-07-25 15:18:51.819630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.750 [2024-07-25 15:18:51.819638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.750 [2024-07-25 15:18:51.819647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.750 [2024-07-25 15:18:51.819655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.750 [2024-07-25 15:18:51.819664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.750 [2024-07-25 15:18:51.819673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.750 [2024-07-25 15:18:51.819682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.750 [2024-07-25 15:18:51.819691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.750 [2024-07-25 15:18:51.819700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.750 [2024-07-25 15:18:51.819707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.750 [2024-07-25 15:18:51.819717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.750 [2024-07-25 15:18:51.819724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.750 [2024-07-25 15:18:51.819733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.750 [2024-07-25 15:18:51.819740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.750 [2024-07-25 15:18:51.819750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.750 [2024-07-25 15:18:51.819757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.750 [2024-07-25 15:18:51.819767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.750 [2024-07-25 15:18:51.819774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.750 [2024-07-25 15:18:51.819783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.750 [2024-07-25 15:18:51.819790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.750 [2024-07-25 15:18:51.819800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.750 [2024-07-25 15:18:51.819808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.750 [2024-07-25 15:18:51.819817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.750 [2024-07-25 15:18:51.819824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.750 [2024-07-25 15:18:51.819833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.750 [2024-07-25 15:18:51.819840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.750 [2024-07-25 15:18:51.819850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.750 [2024-07-25 15:18:51.819858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.750 [2024-07-25 15:18:51.819867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.750 [2024-07-25 15:18:51.819874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.750 [2024-07-25 15:18:51.819885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.750 
[2024-07-25 15:18:51.819894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.750 [2024-07-25 15:18:51.819904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.750 [2024-07-25 15:18:51.819912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.750 [2024-07-25 15:18:51.819922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.750 [2024-07-25 15:18:51.819929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.750 [2024-07-25 15:18:51.819938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.750 [2024-07-25 15:18:51.819945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.750 [2024-07-25 15:18:51.819954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.750 [2024-07-25 15:18:51.819962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.750 [2024-07-25 15:18:51.819971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.750 [2024-07-25 15:18:51.819979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.750 [2024-07-25 15:18:51.819988] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.750 [2024-07-25 15:18:51.819995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.750 [2024-07-25 15:18:51.820004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.750 [2024-07-25 15:18:51.820011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.750 [2024-07-25 15:18:51.820021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.750 [2024-07-25 15:18:51.820029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.750 [2024-07-25 15:18:51.820038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.750 [2024-07-25 15:18:51.820046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.750 [2024-07-25 15:18:51.820055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.750 [2024-07-25 15:18:51.820063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.750 [2024-07-25 15:18:51.820073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.750 [2024-07-25 15:18:51.820080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.750 [2024-07-25 15:18:51.820091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.750 [2024-07-25 15:18:51.820101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.750 [2024-07-25 15:18:51.820111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.750 [2024-07-25 15:18:51.820119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.750 [2024-07-25 15:18:51.820128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.750 [2024-07-25 15:18:51.820135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.750 [2024-07-25 15:18:51.820144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.750 [2024-07-25 15:18:51.820151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.750 [2024-07-25 15:18:51.820161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.750 [2024-07-25 15:18:51.820169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.750 [2024-07-25 15:18:51.820179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.750 [2024-07-25 15:18:51.820186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.750 [2024-07-25 15:18:51.820195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.750 [2024-07-25 15:18:51.820206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.750 [2024-07-25 15:18:51.820215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.750 [2024-07-25 15:18:51.820223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.750 [2024-07-25 15:18:51.820233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.750 [2024-07-25 15:18:51.820240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.750 [2024-07-25 15:18:51.820250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.750 [2024-07-25 15:18:51.820257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.750 [2024-07-25 15:18:51.820266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.750 [2024-07-25 15:18:51.820273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.750 
[2024-07-25 15:18:51.820282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.751 [2024-07-25 15:18:51.820290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.751 [2024-07-25 15:18:51.820299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.751 [2024-07-25 15:18:51.820306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.751 [2024-07-25 15:18:51.820317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.751 [2024-07-25 15:18:51.820324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.751 [2024-07-25 15:18:51.820334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.751 [2024-07-25 15:18:51.820341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.751 [2024-07-25 15:18:51.820351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.751 [2024-07-25 15:18:51.820358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.751 [2024-07-25 15:18:51.820367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.751 [2024-07-25 15:18:51.820374] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.751 [2024-07-25 15:18:51.820383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.751 [2024-07-25 15:18:51.820391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.751 [2024-07-25 15:18:51.820400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.751 [2024-07-25 15:18:51.820407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.751 [2024-07-25 15:18:51.820417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.751 [2024-07-25 15:18:51.820424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.751 [2024-07-25 15:18:51.820486] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1d9ef50 was disconnected and freed. reset controller. 
00:22:59.751 [2024-07-25 15:18:51.823192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.751 [2024-07-25 15:18:51.823221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.751 [2024-07-25 15:18:51.823237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.751 [2024-07-25 15:18:51.823247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.751 [2024-07-25 15:18:51.823258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.751 [2024-07-25 15:18:51.823267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.751 [2024-07-25 15:18:51.823279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.751 [2024-07-25 15:18:51.823287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.751 [2024-07-25 15:18:51.823297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.751 [2024-07-25 15:18:51.823305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.751 [2024-07-25 15:18:51.823320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.751 [2024-07-25 15:18:51.823327] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.751 [2024-07-25 15:18:51.823337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.751 [2024-07-25 15:18:51.823345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.751 [2024-07-25 15:18:51.823354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.751 [2024-07-25 15:18:51.823362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.751 [2024-07-25 15:18:51.823371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.751 [2024-07-25 15:18:51.823380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.751 [2024-07-25 15:18:51.823390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.751 [2024-07-25 15:18:51.823398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.751 [2024-07-25 15:18:51.823408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.751 [2024-07-25 15:18:51.823415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.751 [2024-07-25 15:18:51.823424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.751 [2024-07-25 15:18:51.823432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.751 [2024-07-25 15:18:51.823441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.751 [2024-07-25 15:18:51.823449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.751 [2024-07-25 15:18:51.823459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.751 [2024-07-25 15:18:51.823467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.751 [2024-07-25 15:18:51.823476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.751 [2024-07-25 15:18:51.823484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.751 [2024-07-25 15:18:51.823494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.751 [2024-07-25 15:18:51.823501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.751 [2024-07-25 15:18:51.823511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.751 [2024-07-25 15:18:51.823518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:22:59.751 [2024-07-25 15:18:51.823527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.751 [2024-07-25 15:18:51.823537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.751 [2024-07-25 15:18:51.823547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.751 [2024-07-25 15:18:51.823554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.751 [2024-07-25 15:18:51.823564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.751 [2024-07-25 15:18:51.823571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.751 [2024-07-25 15:18:51.823580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.751 [2024-07-25 15:18:51.823588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.751 [2024-07-25 15:18:51.823597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.751 [2024-07-25 15:18:51.823605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.751 [2024-07-25 15:18:51.823614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.751 
[2024-07-25 15:18:51.823621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.751 [2024-07-25 15:18:51.823633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.751 [2024-07-25 15:18:51.823641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.751 [2024-07-25 15:18:51.823651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.751 [2024-07-25 15:18:51.823658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.751 [2024-07-25 15:18:51.823667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.751 [2024-07-25 15:18:51.823674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.751 [2024-07-25 15:18:51.823685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.751 [2024-07-25 15:18:51.823693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.751 [2024-07-25 15:18:51.823703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.751 [2024-07-25 15:18:51.823710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.751 [2024-07-25 15:18:51.823720] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.752 [2024-07-25 15:18:51.823727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.752 [2024-07-25 15:18:51.823738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.752 [2024-07-25 15:18:51.823745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.752 [2024-07-25 15:18:51.823756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.752 [2024-07-25 15:18:51.823764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.752 [2024-07-25 15:18:51.823773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.752 [2024-07-25 15:18:51.823780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.752 [2024-07-25 15:18:51.823790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.752 [2024-07-25 15:18:51.823799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.752 [2024-07-25 15:18:51.823808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.752 [2024-07-25 15:18:51.823816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.752 [2024-07-25 15:18:51.823825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.752 [2024-07-25 15:18:51.823833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.752 [2024-07-25 15:18:51.823843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.752 [2024-07-25 15:18:51.823850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.752 [2024-07-25 15:18:51.823860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.752 [2024-07-25 15:18:51.823867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.752 [2024-07-25 15:18:51.823876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.752 [2024-07-25 15:18:51.823884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.752 [2024-07-25 15:18:51.823893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.752 [2024-07-25 15:18:51.823901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.752 [2024-07-25 15:18:51.823910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.752 [2024-07-25 15:18:51.823916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.752 [2024-07-25 15:18:51.823926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.752 [2024-07-25 15:18:51.823933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.752 [2024-07-25 15:18:51.823943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.752 [2024-07-25 15:18:51.823951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.752 [2024-07-25 15:18:51.823960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.752 [2024-07-25 15:18:51.823974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.752 [2024-07-25 15:18:51.823983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.752 [2024-07-25 15:18:51.823991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.752 [2024-07-25 15:18:51.824000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.752 [2024-07-25 15:18:51.824007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.752 
[2024-07-25 15:18:51.824017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.752 [2024-07-25 15:18:51.824023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.752 [2024-07-25 15:18:51.824033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.752 [2024-07-25 15:18:51.824040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.752 [2024-07-25 15:18:51.824050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.752 [2024-07-25 15:18:51.824057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.752 [2024-07-25 15:18:51.824066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.752 [2024-07-25 15:18:51.824073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.752 [2024-07-25 15:18:51.824083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.752 [2024-07-25 15:18:51.824091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.752 [2024-07-25 15:18:51.824101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.752 [2024-07-25 15:18:51.824108] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.752 [2024-07-25 15:18:51.824117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.752 [2024-07-25 15:18:51.824124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.752 [2024-07-25 15:18:51.824133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.752 [2024-07-25 15:18:51.824141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.752 [2024-07-25 15:18:51.824150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.752 [2024-07-25 15:18:51.824157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.752 [2024-07-25 15:18:51.824166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.752 [2024-07-25 15:18:51.824173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.752 [2024-07-25 15:18:51.824183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.752 [2024-07-25 15:18:51.824191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.752 [2024-07-25 15:18:51.824205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.752 [2024-07-25 15:18:51.824212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.752 [2024-07-25 15:18:51.824221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.752 [2024-07-25 15:18:51.824228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.752 [2024-07-25 15:18:51.824237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.752 [2024-07-25 15:18:51.824247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.752 [2024-07-25 15:18:51.824257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.752 [2024-07-25 15:18:51.824265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.752 [2024-07-25 15:18:51.824275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.752 [2024-07-25 15:18:51.824282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.752 [2024-07-25 15:18:51.824291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.752 [2024-07-25 15:18:51.824300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:59.752 [2024-07-25 15:18:51.824309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.752 [2024-07-25 15:18:51.824317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.752 [2024-07-25 15:18:51.824326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.752 [2024-07-25 15:18:51.824333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.752 [2024-07-25 15:18:51.824343] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d985d0 is same with the state(5) to be set 00:22:59.752 [2024-07-25 15:18:51.825180] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1d985d0 was disconnected and freed. reset controller. 
00:22:59.752 [2024-07-25 15:18:51.827834] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:22:59.752 [2024-07-25 15:18:51.827872] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:22:59.752 [2024-07-25 15:18:51.827946] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:22:59.752 [2024-07-25 15:18:51.827991] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:22:59.753 [2024-07-25 15:18:51.828037] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:22:59.753 [2024-07-25 15:18:51.828077] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:22:59.753 [2024-07-25 15:18:51.828403] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:22:59.753 [2024-07-25 15:18:51.828702] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:22:59.753 [2024-07-25 15:18:51.828716] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:22:59.753 [2024-07-25 15:18:51.829426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:59.753 [2024-07-25 15:18:51.829466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1836340 with addr=10.0.0.2, port=4420
00:22:59.753 [2024-07-25 15:18:51.829477] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1836340 is same with the state(5) to be set
00:22:59.753 [2024-07-25 15:18:51.829948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:59.753 [2024-07-25 15:18:51.829960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce45d0 with addr=10.0.0.2, port=4420
00:22:59.753 [2024-07-25 15:18:51.829968] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce45d0 is same with the state(5) to be set
00:22:59.753 [2024-07-25 15:18:51.830355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.753 [2024-07-25 15:18:51.830370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:59.753 [2024-07-25 15:18:51.830387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.753 [2024-07-25 15:18:51.830395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:59.753 [2024-07-25 15:18:51.830406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.753 [2024-07-25 15:18:51.830413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:59.753 [2024-07-25 15:18:51.830423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.753 [2024-07-25 15:18:51.830431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:59.753 [2024-07-25 15:18:51.830441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.753 [2024-07-25 15:18:51.830451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:59.753 [2024-07-25 15:18:51.830461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.753 [2024-07-25 15:18:51.830469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:59.753 [2024-07-25 15:18:51.830479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.753 [2024-07-25 15:18:51.830486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:59.753 [2024-07-25 15:18:51.830496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.753 [2024-07-25 15:18:51.830504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:59.753 [2024-07-25 15:18:51.830514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.753 [2024-07-25 15:18:51.830521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:59.753 [2024-07-25 15:18:51.830531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.753 [2024-07-25 15:18:51.830543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:59.753 [2024-07-25 15:18:51.830554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.753 [2024-07-25 15:18:51.830561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:59.753 [2024-07-25 15:18:51.830571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.753 [2024-07-25 15:18:51.830578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:59.753 [2024-07-25 15:18:51.830588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.753 [2024-07-25 15:18:51.830596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:59.753 [2024-07-25 15:18:51.830606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.753 [2024-07-25 15:18:51.830613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:59.753 [2024-07-25 15:18:51.830622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.753 [2024-07-25 15:18:51.830630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:59.753 [2024-07-25 15:18:51.830640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.753 [2024-07-25 15:18:51.830647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:59.753 [2024-07-25 15:18:51.830656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.753 [2024-07-25 15:18:51.830664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:59.753 [2024-07-25 15:18:51.830674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.753 [2024-07-25 15:18:51.830681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:59.753 [2024-07-25 15:18:51.830691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.753 [2024-07-25 15:18:51.830698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:59.753 [2024-07-25 15:18:51.830708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.753 [2024-07-25 15:18:51.830715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:59.753 [2024-07-25 15:18:51.830726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.753 [2024-07-25 15:18:51.830733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:59.753 [2024-07-25 15:18:51.830743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.753 [2024-07-25 15:18:51.830751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:59.753 [2024-07-25 15:18:51.830762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.753 [2024-07-25 15:18:51.830770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:59.753 [2024-07-25 15:18:51.830780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.753 [2024-07-25 15:18:51.830788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:59.753 [2024-07-25 15:18:51.830798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.753 [2024-07-25 15:18:51.830806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:59.753 [2024-07-25 15:18:51.830816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.753 [2024-07-25 15:18:51.830824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:59.753 [2024-07-25 15:18:51.830834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.753 [2024-07-25 15:18:51.830841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:59.753 [2024-07-25 15:18:51.830851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.753 [2024-07-25 15:18:51.830859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:59.754 [2024-07-25 15:18:51.830869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.754 [2024-07-25 15:18:51.830876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:59.754 [2024-07-25 15:18:51.830885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.754 [2024-07-25 15:18:51.830893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:59.754 [2024-07-25 15:18:51.830903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.754 [2024-07-25 15:18:51.830910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:59.754 [2024-07-25 15:18:51.830920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.754 [2024-07-25 15:18:51.830927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:59.754 [2024-07-25 15:18:51.830937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.754 [2024-07-25 15:18:51.830945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:59.754 [2024-07-25 15:18:51.830954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.754 [2024-07-25 15:18:51.830961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:59.754 [2024-07-25 15:18:51.830970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.754 [2024-07-25 15:18:51.830980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:59.754 [2024-07-25 15:18:51.830990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.754 [2024-07-25 15:18:51.830997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:59.754 [2024-07-25 15:18:51.831006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.754 [2024-07-25 15:18:51.831014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:59.754 [2024-07-25 15:18:51.831024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.754 [2024-07-25 15:18:51.831031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:59.754 [2024-07-25 15:18:51.831041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.754 [2024-07-25 15:18:51.831048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:59.754 [2024-07-25 15:18:51.831057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.754 [2024-07-25 15:18:51.831065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:59.754 [2024-07-25 15:18:51.831075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.754 [2024-07-25 15:18:51.831082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:59.754 [2024-07-25 15:18:51.831091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.754 [2024-07-25 15:18:51.831098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:59.754 [2024-07-25 15:18:51.831107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.754 [2024-07-25 15:18:51.831115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:59.754 [2024-07-25 15:18:51.831125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.754 [2024-07-25 15:18:51.831132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:59.754 [2024-07-25 15:18:51.831141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.754 [2024-07-25 15:18:51.831150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:59.754 [2024-07-25 15:18:51.831161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.754 [2024-07-25 15:18:51.831168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:59.754 [2024-07-25 15:18:51.831177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.754 [2024-07-25 15:18:51.831184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:59.754 [2024-07-25 15:18:51.831195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.754 [2024-07-25 15:18:51.831209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:59.754 [2024-07-25 15:18:51.831219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.754 [2024-07-25 15:18:51.831225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:59.754 [2024-07-25 15:18:51.831235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.754 [2024-07-25 15:18:51.831242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:59.754 [2024-07-25 15:18:51.831252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.754 [2024-07-25 15:18:51.831259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:59.754 [2024-07-25 15:18:51.831269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.754 [2024-07-25 15:18:51.831276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:59.754 [2024-07-25 15:18:51.831288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.754 [2024-07-25 15:18:51.831296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:59.754 [2024-07-25 15:18:51.831305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.754 [2024-07-25 15:18:51.831313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:59.754 [2024-07-25 15:18:51.831322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.754 [2024-07-25 15:18:51.831329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:59.754 [2024-07-25 15:18:51.831339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.754 [2024-07-25 15:18:51.831347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:59.754 [2024-07-25 15:18:51.831357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.754 [2024-07-25 15:18:51.831364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:59.754 [2024-07-25 15:18:51.831373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.754 [2024-07-25 15:18:51.831381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:59.754 [2024-07-25 15:18:51.831391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.754 [2024-07-25 15:18:51.831398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:59.754 [2024-07-25 15:18:51.831407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.754 [2024-07-25 15:18:51.831416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:59.754 [2024-07-25 15:18:51.831426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.754 [2024-07-25 15:18:51.831434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:59.754 [2024-07-25 15:18:51.831443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.754 [2024-07-25 15:18:51.831450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:59.754 [2024-07-25 15:18:51.831460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.754 [2024-07-25 15:18:51.831468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:59.754 [2024-07-25 15:18:51.831477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.754 [2024-07-25 15:18:51.831485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:59.754 [2024-07-25 15:18:51.831493] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d97120 is same with the state(5) to be set
00:22:59.754 [2024-07-25 15:18:51.831539] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1d97120 was disconnected and freed. reset controller.
00:22:59.754 [2024-07-25 15:18:51.832079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:59.754 [2024-07-25 15:18:51.832093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eace70 with addr=10.0.0.2, port=4420
00:22:59.755 [2024-07-25 15:18:51.832101] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eace70 is same with the state(5) to be set
00:22:59.755 [2024-07-25 15:18:51.832554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:59.755 [2024-07-25 15:18:51.832565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d10cb0 with addr=10.0.0.2, port=4420
00:22:59.755 [2024-07-25 15:18:51.832572] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d10cb0 is same with the state(5) to be set
00:22:59.755 [2024-07-25 15:18:51.832585] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1836340 (9): Bad file descriptor
00:22:59.755 [2024-07-25 15:18:51.832595] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce45d0 (9): Bad file descriptor
00:22:59.755 [2024-07-25 15:18:51.832613] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:22:59.755 [2024-07-25 15:18:51.833956] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:22:59.755 [2024-07-25 15:18:51.833983] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eace70 (9): Bad file descriptor
00:22:59.755 [2024-07-25 15:18:51.833996] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d10cb0 (9): Bad file descriptor
00:22:59.755 [2024-07-25 15:18:51.834007] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state
00:22:59.755 [2024-07-25 15:18:51.834015] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed
00:22:59.755 [2024-07-25 15:18:51.834026] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state.
00:22:59.755 [2024-07-25 15:18:51.834040] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:22:59.755 [2024-07-25 15:18:51.834048] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:22:59.755 [2024-07-25 15:18:51.834060] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:22:59.755 [2024-07-25 15:18:51.834107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.755 [2024-07-25 15:18:51.834117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:59.755 [2024-07-25 15:18:51.834129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.755 [2024-07-25 15:18:51.834137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:59.755 [2024-07-25 15:18:51.834147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.755 [2024-07-25 15:18:51.834155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:59.755 [2024-07-25 15:18:51.834164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.755 [2024-07-25 15:18:51.834173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:59.755 [2024-07-25 15:18:51.834182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.755 [2024-07-25 15:18:51.834190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:59.755 [2024-07-25 15:18:51.834204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.755 [2024-07-25 15:18:51.834212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:59.755 [2024-07-25 15:18:51.834223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.755 [2024-07-25 15:18:51.834231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:59.755 [2024-07-25 15:18:51.834240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.755 [2024-07-25 15:18:51.834248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:59.755 [2024-07-25 15:18:51.834257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.755 [2024-07-25 15:18:51.834265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:59.755 [2024-07-25 15:18:51.834275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.755 [2024-07-25 15:18:51.834282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:59.755 [2024-07-25 15:18:51.834292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.755 [2024-07-25 15:18:51.834299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:59.755 [2024-07-25 15:18:51.834309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.755 [2024-07-25 15:18:51.834317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:59.755 [2024-07-25 15:18:51.834329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.755 [2024-07-25 15:18:51.834336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:59.755 [2024-07-25 15:18:51.834345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.755 [2024-07-25 15:18:51.834355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:59.755 [2024-07-25 15:18:51.834365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.755 [2024-07-25 15:18:51.834373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:59.755 [2024-07-25 15:18:51.834382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.755 [2024-07-25 15:18:51.834389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:59.755 [2024-07-25 15:18:51.834400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.755 [2024-07-25 15:18:51.834408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:59.755 [2024-07-25 15:18:51.834418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.755 [2024-07-25 15:18:51.834425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:59.755 [2024-07-25 15:18:51.834435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.755 [2024-07-25 15:18:51.834444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:59.755 [2024-07-25 15:18:51.834453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.755 [2024-07-25 15:18:51.834461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:59.755 [2024-07-25 15:18:51.834471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.755 [2024-07-25 15:18:51.834479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:59.755 [2024-07-25 15:18:51.834489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.755 [2024-07-25 15:18:51.834497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:59.755 [2024-07-25 15:18:51.834506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.755 [2024-07-25 15:18:51.834514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:59.755 [2024-07-25 15:18:51.834523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.755 [2024-07-25 15:18:51.834531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:59.755 [2024-07-25 15:18:51.834541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.755 [2024-07-25 15:18:51.834550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:59.755 [2024-07-25 15:18:51.834560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.755 [2024-07-25 15:18:51.834567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:59.755 [2024-07-25 15:18:51.834577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.755 [2024-07-25 15:18:51.834585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:59.755 [2024-07-25 15:18:51.834594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.755 [2024-07-25 15:18:51.834601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:59.755 [2024-07-25 15:18:51.834611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.755 [2024-07-25 15:18:51.834619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:59.755 [2024-07-25 15:18:51.834628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.755 [2024-07-25 15:18:51.834635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:59.755 [2024-07-25 15:18:51.834645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.755 [2024-07-25 15:18:51.834652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:59.756 [2024-07-25 15:18:51.834662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.756 [2024-07-25 15:18:51.834669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:59.756 [2024-07-25 15:18:51.834678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.756 [2024-07-25 15:18:51.834685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:59.756 [2024-07-25 15:18:51.834695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.756 [2024-07-25 15:18:51.834702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:59.756 [2024-07-25 15:18:51.834712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.756 [2024-07-25 15:18:51.834720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.756 [2024-07-25 15:18:51.834729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.756 [2024-07-25 15:18:51.834736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.756 [2024-07-25 15:18:51.834746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.756 [2024-07-25 15:18:51.834753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.756 [2024-07-25 15:18:51.834764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.756 [2024-07-25 15:18:51.834771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.756 [2024-07-25 15:18:51.834780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.756 [2024-07-25 15:18:51.834788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.756 [2024-07-25 15:18:51.834798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.756 [2024-07-25 
15:18:51.834806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.756 [2024-07-25 15:18:51.834815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.756 [2024-07-25 15:18:51.834822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.756 [2024-07-25 15:18:51.834831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.756 [2024-07-25 15:18:51.834839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.756 [2024-07-25 15:18:51.834848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.756 [2024-07-25 15:18:51.834855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.756 [2024-07-25 15:18:51.834865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.756 [2024-07-25 15:18:51.834872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.756 [2024-07-25 15:18:51.834882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.756 [2024-07-25 15:18:51.834889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.756 [2024-07-25 15:18:51.834899] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.756 [2024-07-25 15:18:51.834907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.756 [2024-07-25 15:18:51.834917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.756 [2024-07-25 15:18:51.834924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.756 [2024-07-25 15:18:51.834934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.756 [2024-07-25 15:18:51.834943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.756 [2024-07-25 15:18:51.834953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.756 [2024-07-25 15:18:51.834960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.756 [2024-07-25 15:18:51.834970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.756 [2024-07-25 15:18:51.834980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.756 [2024-07-25 15:18:51.834989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.756 [2024-07-25 15:18:51.834997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.756 [2024-07-25 15:18:51.835007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.756 [2024-07-25 15:18:51.835014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.756 [2024-07-25 15:18:51.835024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.756 [2024-07-25 15:18:51.835032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.756 [2024-07-25 15:18:51.835041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.756 [2024-07-25 15:18:51.835049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.756 [2024-07-25 15:18:51.835058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.756 [2024-07-25 15:18:51.835066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.756 [2024-07-25 15:18:51.835076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.756 [2024-07-25 15:18:51.835084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.756 [2024-07-25 15:18:51.835094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.756 
[2024-07-25 15:18:51.835102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.756 [2024-07-25 15:18:51.835111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.756 [2024-07-25 15:18:51.835118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.756 [2024-07-25 15:18:51.835128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.756 [2024-07-25 15:18:51.835135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.756 [2024-07-25 15:18:51.835144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.756 [2024-07-25 15:18:51.835152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.756 [2024-07-25 15:18:51.835162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.756 [2024-07-25 15:18:51.835169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.756 [2024-07-25 15:18:51.835178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.756 [2024-07-25 15:18:51.835186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.756 [2024-07-25 15:18:51.835197] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.756 [2024-07-25 15:18:51.835208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.756 [2024-07-25 15:18:51.835217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.756 [2024-07-25 15:18:51.835225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.756 [2024-07-25 15:18:51.835233] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da0280 is same with the state(5) to be set 00:22:59.756 [2024-07-25 15:18:51.836561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.756 [2024-07-25 15:18:51.836575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.756 [2024-07-25 15:18:51.836588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.756 [2024-07-25 15:18:51.836598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.756 [2024-07-25 15:18:51.836608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.756 [2024-07-25 15:18:51.836615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.756 [2024-07-25 15:18:51.836625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 
lba:8576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.756 [2024-07-25 15:18:51.836632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.757 [2024-07-25 15:18:51.836642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.757 [2024-07-25 15:18:51.836649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.757 [2024-07-25 15:18:51.836659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.757 [2024-07-25 15:18:51.836666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.757 [2024-07-25 15:18:51.836676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.757 [2024-07-25 15:18:51.836683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.757 [2024-07-25 15:18:51.836693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.757 [2024-07-25 15:18:51.836701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.757 [2024-07-25 15:18:51.836711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.757 [2024-07-25 15:18:51.836719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.757 
[2024-07-25 15:18:51.836729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.757 [2024-07-25 15:18:51.836737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.757 [2024-07-25 15:18:51.836749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.757 [2024-07-25 15:18:51.836756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.757 [2024-07-25 15:18:51.836766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.757 [2024-07-25 15:18:51.836774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.757 [2024-07-25 15:18:51.836783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.757 [2024-07-25 15:18:51.836790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.757 [2024-07-25 15:18:51.836800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.757 [2024-07-25 15:18:51.836809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.757 [2024-07-25 15:18:51.836819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.757 [2024-07-25 15:18:51.836827] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.757 [2024-07-25 15:18:51.836836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.757 [2024-07-25 15:18:51.836844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.757 [2024-07-25 15:18:51.836854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.757 [2024-07-25 15:18:51.836861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.757 [2024-07-25 15:18:51.836871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.757 [2024-07-25 15:18:51.836878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.757 [2024-07-25 15:18:51.836887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.757 [2024-07-25 15:18:51.836895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.757 [2024-07-25 15:18:51.836904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.757 [2024-07-25 15:18:51.836912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.757 [2024-07-25 15:18:51.836921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 
nsid:1 lba:10752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.757 [2024-07-25 15:18:51.836929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.757 [2024-07-25 15:18:51.836939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.757 [2024-07-25 15:18:51.836947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.757 [2024-07-25 15:18:51.836956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.757 [2024-07-25 15:18:51.836963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.757 [2024-07-25 15:18:51.836975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.757 [2024-07-25 15:18:51.836983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.757 [2024-07-25 15:18:51.836993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.757 [2024-07-25 15:18:51.837000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.757 [2024-07-25 15:18:51.837010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.757 [2024-07-25 15:18:51.837017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:59.757 [2024-07-25 15:18:51.837027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.757 [2024-07-25 15:18:51.837034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.757 [2024-07-25 15:18:51.837044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.757 [2024-07-25 15:18:51.837051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.757 [2024-07-25 15:18:51.837061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.757 [2024-07-25 15:18:51.837068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.757 [2024-07-25 15:18:51.837077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.757 [2024-07-25 15:18:51.837085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.757 [2024-07-25 15:18:51.837094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.757 [2024-07-25 15:18:51.837100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.757 [2024-07-25 15:18:51.837110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.757 [2024-07-25 15:18:51.837118] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.757 [2024-07-25 15:18:51.837127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.757 [2024-07-25 15:18:51.837134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.757 [2024-07-25 15:18:51.837144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.757 [2024-07-25 15:18:51.837151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.757 [2024-07-25 15:18:51.837161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.757 [2024-07-25 15:18:51.837168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.757 [2024-07-25 15:18:51.837178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.757 [2024-07-25 15:18:51.837189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.757 [2024-07-25 15:18:51.837199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.757 [2024-07-25 15:18:51.837211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.757 [2024-07-25 15:18:51.837221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:37 nsid:1 lba:12928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.757 [2024-07-25 15:18:51.837229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.757 [2024-07-25 15:18:51.837238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.757 [2024-07-25 15:18:51.837245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.757 [2024-07-25 15:18:51.837255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.757 [2024-07-25 15:18:51.837262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.757 [2024-07-25 15:18:51.837271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.757 [2024-07-25 15:18:51.837278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.757 [2024-07-25 15:18:51.837287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.757 [2024-07-25 15:18:51.837295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.758 [2024-07-25 15:18:51.837305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.758 [2024-07-25 15:18:51.837312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:59.758 [2024-07-25 15:18:51.837321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.758 [2024-07-25 15:18:51.837328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.758 [2024-07-25 15:18:51.837337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.758 [2024-07-25 15:18:51.837345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.758 [2024-07-25 15:18:51.837355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.758 [2024-07-25 15:18:51.837363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.758 [2024-07-25 15:18:51.837373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.758 [2024-07-25 15:18:51.837381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.758 [2024-07-25 15:18:51.837391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.758 [2024-07-25 15:18:51.837399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.758 [2024-07-25 15:18:51.837410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.758 [2024-07-25 
15:18:51.837417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.758 [2024-07-25 15:18:51.837427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.758 [2024-07-25 15:18:51.837434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.758 [2024-07-25 15:18:51.837444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.758 [2024-07-25 15:18:51.837451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.758 [2024-07-25 15:18:51.837460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.758 [2024-07-25 15:18:51.837468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.758 [2024-07-25 15:18:51.837478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.758 [2024-07-25 15:18:51.837486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.758 [2024-07-25 15:18:51.844588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.758 [2024-07-25 15:18:51.844629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.758 [2024-07-25 15:18:51.844641] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.758 [2024-07-25 15:18:51.844650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.758 [2024-07-25 15:18:51.844660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.758 [2024-07-25 15:18:51.844667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.758 [2024-07-25 15:18:51.844677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.758 [2024-07-25 15:18:51.844687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.758 [2024-07-25 15:18:51.844697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.758 [2024-07-25 15:18:51.844705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.758 [2024-07-25 15:18:51.844714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.758 [2024-07-25 15:18:51.844722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.758 [2024-07-25 15:18:51.844731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.758 [2024-07-25 15:18:51.844739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.758 [2024-07-25 15:18:51.844749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.758 [2024-07-25 15:18:51.844762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.758 [2024-07-25 15:18:51.844772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.758 [2024-07-25 15:18:51.844780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.758 [2024-07-25 15:18:51.844790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.758 [2024-07-25 15:18:51.844797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.758 [2024-07-25 15:18:51.844807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.758 [2024-07-25 15:18:51.844814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.758 [2024-07-25 15:18:51.844823] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da1730 is same with the state(5) to be set 00:22:59.758 [2024-07-25 15:18:51.846165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.758 [2024-07-25 15:18:51.846181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.758 [2024-07-25 
15:18:51.846199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.758 [2024-07-25 15:18:51.846216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.758 [2024-07-25 15:18:51.846227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.758 [2024-07-25 15:18:51.846236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.758 [2024-07-25 15:18:51.846246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.758 [2024-07-25 15:18:51.846253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.758 [2024-07-25 15:18:51.846263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.758 [2024-07-25 15:18:51.846271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.758 [2024-07-25 15:18:51.846280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.758 [2024-07-25 15:18:51.846288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.758 [2024-07-25 15:18:51.846298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.758 [2024-07-25 15:18:51.846305] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.758 [2024-07-25 15:18:51.846315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.758 [2024-07-25 15:18:51.846323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.758 [2024-07-25 15:18:51.846333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.758 [2024-07-25 15:18:51.846344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.758 [2024-07-25 15:18:51.846353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.758 [2024-07-25 15:18:51.846361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.759 [2024-07-25 15:18:51.846371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.759 [2024-07-25 15:18:51.846377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.759 [2024-07-25 15:18:51.846387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.759 [2024-07-25 15:18:51.846394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.759 [2024-07-25 15:18:51.846404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9728 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:22:59.759 [2024-07-25 15:18:51.846412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.759 [2024-07-25 15:18:51.846421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.759 [2024-07-25 15:18:51.846429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.759 [2024-07-25 15:18:51.846439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.759 [2024-07-25 15:18:51.846447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.759 [2024-07-25 15:18:51.846458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.759 [2024-07-25 15:18:51.846466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.759 [2024-07-25 15:18:51.846476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.759 [2024-07-25 15:18:51.846483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.759 [2024-07-25 15:18:51.846492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.759 [2024-07-25 15:18:51.846500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.759 [2024-07-25 
15:18:51.846509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.759 [2024-07-25 15:18:51.846516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.759 [2024-07-25 15:18:51.846526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.759 [2024-07-25 15:18:51.846534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.759 [2024-07-25 15:18:51.846543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.759 [2024-07-25 15:18:51.846550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.759 [2024-07-25 15:18:51.846561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.759 [2024-07-25 15:18:51.846569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.759 [2024-07-25 15:18:51.846579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.759 [2024-07-25 15:18:51.846586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.759 [2024-07-25 15:18:51.846598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.759 [2024-07-25 15:18:51.846606] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.759 [2024-07-25 15:18:51.846615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.759 [2024-07-25 15:18:51.846623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.759 [2024-07-25 15:18:51.846633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.759 [2024-07-25 15:18:51.846641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.759 [2024-07-25 15:18:51.846651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.759 [2024-07-25 15:18:51.846658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.759 [2024-07-25 15:18:51.846668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.759 [2024-07-25 15:18:51.846676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.759 [2024-07-25 15:18:51.846686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.759 [2024-07-25 15:18:51.846693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.759 [2024-07-25 15:18:51.846702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 
nsid:1 lba:11904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.759 [2024-07-25 15:18:51.846710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.759 [2024-07-25 15:18:51.846720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.759 [2024-07-25 15:18:51.846727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.759 [2024-07-25 15:18:51.846737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.759 [2024-07-25 15:18:51.846745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.759 [2024-07-25 15:18:51.846754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.759 [2024-07-25 15:18:51.846761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.759 [2024-07-25 15:18:51.846770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.759 [2024-07-25 15:18:51.846779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.759 [2024-07-25 15:18:51.846789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.759 [2024-07-25 15:18:51.846796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:59.759 [2024-07-25 15:18:51.846805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.759 [2024-07-25 15:18:51.846813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.759 [2024-07-25 15:18:51.846823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.759 [2024-07-25 15:18:51.846830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.759 [2024-07-25 15:18:51.846839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.759 [2024-07-25 15:18:51.846847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.759 [2024-07-25 15:18:51.846856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.759 [2024-07-25 15:18:51.846864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.759 [2024-07-25 15:18:51.846873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.759 [2024-07-25 15:18:51.846882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.759 [2024-07-25 15:18:51.846891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.759 [2024-07-25 15:18:51.846899] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.759 [2024-07-25 15:18:51.846908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.759 [2024-07-25 15:18:51.846916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.759 [2024-07-25 15:18:51.846926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.759 [2024-07-25 15:18:51.846934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.759 [2024-07-25 15:18:51.846944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.759 [2024-07-25 15:18:51.846952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.759 [2024-07-25 15:18:51.846962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.759 [2024-07-25 15:18:51.846969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.759 [2024-07-25 15:18:51.846979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.759 [2024-07-25 15:18:51.846986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.759 [2024-07-25 15:18:51.846998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:46 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.759 [2024-07-25 15:18:51.847005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.759 [2024-07-25 15:18:51.847015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.759 [2024-07-25 15:18:51.847023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.760 [2024-07-25 15:18:51.847033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.760 [2024-07-25 15:18:51.847040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.760 [2024-07-25 15:18:51.847050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.760 [2024-07-25 15:18:51.847058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.760 [2024-07-25 15:18:51.847068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.760 [2024-07-25 15:18:51.847075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.760 [2024-07-25 15:18:51.847085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.760 [2024-07-25 15:18:51.847092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:59.760 [2024-07-25 15:18:51.847101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.760 [2024-07-25 15:18:51.847109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.760 [2024-07-25 15:18:51.847119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.760 [2024-07-25 15:18:51.847126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.760 [2024-07-25 15:18:51.847135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.760 [2024-07-25 15:18:51.847142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.760 [2024-07-25 15:18:51.847152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.760 [2024-07-25 15:18:51.847160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.760 [2024-07-25 15:18:51.847169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.760 [2024-07-25 15:18:51.847178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.760 [2024-07-25 15:18:51.847187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.760 [2024-07-25 
15:18:51.847195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.760 [2024-07-25 15:18:51.847208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.760 [2024-07-25 15:18:51.847218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.760 [2024-07-25 15:18:51.847227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.760 [2024-07-25 15:18:51.847235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.760 [2024-07-25 15:18:51.847245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.760 [2024-07-25 15:18:51.847268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.760 [2024-07-25 15:18:51.847278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.760 [2024-07-25 15:18:51.847286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.760 [2024-07-25 15:18:51.847296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.760 [2024-07-25 15:18:51.847304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.760 [2024-07-25 15:18:51.847313] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.760 [2024-07-25 15:18:51.847321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.760 [2024-07-25 15:18:51.847330] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cddaf0 is same with the state(5) to be set 00:22:59.760 [2024-07-25 15:18:51.848603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.760 [2024-07-25 15:18:51.848616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.760 [2024-07-25 15:18:51.848630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.760 [2024-07-25 15:18:51.848640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.760 [2024-07-25 15:18:51.848652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.760 [2024-07-25 15:18:51.848661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.760 [2024-07-25 15:18:51.848672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.760 [2024-07-25 15:18:51.848682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.760 [2024-07-25 15:18:51.848693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:22:59.760 [2024-07-25 15:18:51.848702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.760 [2024-07-25 15:18:51.848713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.760 [2024-07-25 15:18:51.848721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.760 [2024-07-25 15:18:51.848731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.760 [2024-07-25 15:18:51.848741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.760 [2024-07-25 15:18:51.848750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.760 [2024-07-25 15:18:51.848758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.760 [2024-07-25 15:18:51.848768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.760 [2024-07-25 15:18:51.848777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.760 [2024-07-25 15:18:51.848787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.760 [2024-07-25 15:18:51.848796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.760 [2024-07-25 15:18:51.848805] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.760 [2024-07-25 15:18:51.848813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.760 [2024-07-25 15:18:51.848823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.760 [2024-07-25 15:18:51.848830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.760 [2024-07-25 15:18:51.848840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.760 [2024-07-25 15:18:51.848848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.760 [2024-07-25 15:18:51.848858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.760 [2024-07-25 15:18:51.848866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.760 [2024-07-25 15:18:51.848875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.760 [2024-07-25 15:18:51.848883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.760 [2024-07-25 15:18:51.848894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.760 [2024-07-25 15:18:51.848901] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.760 [2024-07-25 15:18:51.848911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.760 [2024-07-25 15:18:51.848919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.760 [2024-07-25 15:18:51.848929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.760 [2024-07-25 15:18:51.848936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.760 [2024-07-25 15:18:51.848946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.760 [2024-07-25 15:18:51.848954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.760 [2024-07-25 15:18:51.848965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.760 [2024-07-25 15:18:51.848972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.760 [2024-07-25 15:18:51.848983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.760 [2024-07-25 15:18:51.848991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.760 [2024-07-25 15:18:51.849001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.761 [2024-07-25 15:18:51.849008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.761 [2024-07-25 15:18:51.849017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.761 [2024-07-25 15:18:51.849026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.761 [2024-07-25 15:18:51.849035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.761 [2024-07-25 15:18:51.849043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.761 [2024-07-25 15:18:51.849052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.761 [2024-07-25 15:18:51.849060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.761 [2024-07-25 15:18:51.849070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.761 [2024-07-25 15:18:51.849078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.761 [2024-07-25 15:18:51.849088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.761 [2024-07-25 15:18:51.849095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.761 [2024-07-25 
15:18:51.849105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.761 [2024-07-25 15:18:51.849112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.761 [2024-07-25 15:18:51.849122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.761 [2024-07-25 15:18:51.849129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.761 [2024-07-25 15:18:51.849138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.761 [2024-07-25 15:18:51.849146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.761 [2024-07-25 15:18:51.849156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.761 [2024-07-25 15:18:51.849164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.761 [2024-07-25 15:18:51.849174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.761 [2024-07-25 15:18:51.849183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.761 [2024-07-25 15:18:51.849193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.761 [2024-07-25 15:18:51.849204] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.761 [2024-07-25 15:18:51.849214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.761 [2024-07-25 15:18:51.849222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.761 [2024-07-25 15:18:51.849232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.761 [2024-07-25 15:18:51.849239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.761 [2024-07-25 15:18:51.849249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.761 [2024-07-25 15:18:51.849257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.761 [2024-07-25 15:18:51.849267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.761 [2024-07-25 15:18:51.849274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.761 [2024-07-25 15:18:51.849284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.761 [2024-07-25 15:18:51.849292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.761 [2024-07-25 15:18:51.849301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 
nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.761 [2024-07-25 15:18:51.849309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.761 [2024-07-25 15:18:51.849318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.761 [2024-07-25 15:18:51.849325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.761 [2024-07-25 15:18:51.849335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.761 [2024-07-25 15:18:51.849342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.761 [2024-07-25 15:18:51.849353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.761 [2024-07-25 15:18:51.849361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.761 [2024-07-25 15:18:51.849370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.761 [2024-07-25 15:18:51.849378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.761 [2024-07-25 15:18:51.849387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.761 [2024-07-25 15:18:51.849394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:59.761 [2024-07-25 15:18:51.849404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.761 [2024-07-25 15:18:51.849413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.761 [2024-07-25 15:18:51.849423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.761 [2024-07-25 15:18:51.849431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.761 [2024-07-25 15:18:51.849440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.761 [2024-07-25 15:18:51.849449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.761 [2024-07-25 15:18:51.849459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.761 [2024-07-25 15:18:51.849467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.761 [2024-07-25 15:18:51.849476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.761 [2024-07-25 15:18:51.849483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.761 [2024-07-25 15:18:51.849494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.761 [2024-07-25 15:18:51.849501] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.761 [2024-07-25 15:18:51.849511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.761 [2024-07-25 15:18:51.849518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.761 [2024-07-25 15:18:51.849528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.761 [2024-07-25 15:18:51.849535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.761 [2024-07-25 15:18:51.849545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.761 [2024-07-25 15:18:51.849554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.761 [2024-07-25 15:18:51.849564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.761 [2024-07-25 15:18:51.849571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.761 [2024-07-25 15:18:51.849581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.761 [2024-07-25 15:18:51.849589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.761 [2024-07-25 15:18:51.849598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.761 [2024-07-25 15:18:51.849606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.761 [2024-07-25 15:18:51.849616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.761 [2024-07-25 15:18:51.849623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.761 [2024-07-25 15:18:51.849634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.761 [2024-07-25 15:18:51.849642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.761 [2024-07-25 15:18:51.849652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.761 [2024-07-25 15:18:51.849659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.762 [2024-07-25 15:18:51.849668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.762 [2024-07-25 15:18:51.849678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.762 [2024-07-25 15:18:51.849688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.762 [2024-07-25 15:18:51.849695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:59.762 [2024-07-25 15:18:51.849705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.762 [2024-07-25 15:18:51.849712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.762 [2024-07-25 15:18:51.849723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.762 [2024-07-25 15:18:51.849730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.762 [2024-07-25 15:18:51.849740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.762 [2024-07-25 15:18:51.849748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.762 [2024-07-25 15:18:51.849756] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdef30 is same with the state(5) to be set 00:22:59.762 [2024-07-25 15:18:51.851020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.762 [2024-07-25 15:18:51.851034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.762 [2024-07-25 15:18:51.851046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.762 [2024-07-25 15:18:51.851056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.762 [2024-07-25 15:18:51.851067] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.762 [2024-07-25 15:18:51.851077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.762 [2024-07-25 15:18:51.851087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.762 [2024-07-25 15:18:51.851097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.762 [2024-07-25 15:18:51.851108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.762 [2024-07-25 15:18:51.851116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.762 [2024-07-25 15:18:51.851129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.762 [2024-07-25 15:18:51.851136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.762 [2024-07-25 15:18:51.851146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.762 [2024-07-25 15:18:51.851153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.762 [2024-07-25 15:18:51.851163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.762 [2024-07-25 15:18:51.851170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.762 [2024-07-25 15:18:51.851179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.762 [2024-07-25 15:18:51.851187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.762 [2024-07-25 15:18:51.851196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.762 [2024-07-25 15:18:51.851208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.762 [2024-07-25 15:18:51.851217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.762 [2024-07-25 15:18:51.851225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.762 [2024-07-25 15:18:51.851234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.762 [2024-07-25 15:18:51.851242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.762 [2024-07-25 15:18:51.851252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.762 [2024-07-25 15:18:51.851260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.762 [2024-07-25 15:18:51.851270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:22:59.762 [2024-07-25 15:18:51.851278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.762 [2024-07-25 15:18:51.851288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.762 [2024-07-25 15:18:51.851296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.762 [2024-07-25 15:18:51.851306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.762 [2024-07-25 15:18:51.851314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.762 [2024-07-25 15:18:51.851324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.762 [2024-07-25 15:18:51.851331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.762 [2024-07-25 15:18:51.851341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.762 [2024-07-25 15:18:51.851351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.762 [2024-07-25 15:18:51.851361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.762 [2024-07-25 15:18:51.851369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.762 [2024-07-25 15:18:51.851378] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.762 [2024-07-25 15:18:51.851386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.762 [2024-07-25 15:18:51.851395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.762 [2024-07-25 15:18:51.851402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.762 [2024-07-25 15:18:51.851412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.762 [2024-07-25 15:18:51.851419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.762 [2024-07-25 15:18:51.851428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.762 [2024-07-25 15:18:51.851436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.762 [2024-07-25 15:18:51.851446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.762 [2024-07-25 15:18:51.851454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.762 [2024-07-25 15:18:51.851464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.762 [2024-07-25 15:18:51.851471] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.762 [2024-07-25 15:18:51.851481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.762 [2024-07-25 15:18:51.851488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.762 [2024-07-25 15:18:51.851497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.762 [2024-07-25 15:18:51.851505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.763 [2024-07-25 15:18:51.851515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.763 [2024-07-25 15:18:51.851523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.763 [2024-07-25 15:18:51.851533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.763 [2024-07-25 15:18:51.851540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.763 [2024-07-25 15:18:51.851550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.763 [2024-07-25 15:18:51.851557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.763 [2024-07-25 15:18:51.851571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.763 [2024-07-25 15:18:51.851578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.763 [2024-07-25 15:18:51.851588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.763 [2024-07-25 15:18:51.851596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.763 [2024-07-25 15:18:51.851606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.763 [2024-07-25 15:18:51.851613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.763 [2024-07-25 15:18:51.851624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.763 [2024-07-25 15:18:51.851633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.763 [2024-07-25 15:18:51.851642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.763 [2024-07-25 15:18:51.851649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.763 [2024-07-25 15:18:51.851659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.763 [2024-07-25 15:18:51.851667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.763 [2024-07-25 
15:18:51.851676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.763 [2024-07-25 15:18:51.851685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.763 [2024-07-25 15:18:51.851694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.763 [2024-07-25 15:18:51.851702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.763 [2024-07-25 15:18:51.851712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.763 [2024-07-25 15:18:51.851719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.763 [2024-07-25 15:18:51.851728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.763 [2024-07-25 15:18:51.851736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.763 [2024-07-25 15:18:51.851745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.763 [2024-07-25 15:18:51.851753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.763 [2024-07-25 15:18:51.851762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.763 [2024-07-25 15:18:51.851770] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.763 [2024-07-25 15:18:51.851779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.763 [2024-07-25 15:18:51.851789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.763 [2024-07-25 15:18:51.851798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.763 [2024-07-25 15:18:51.851806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.763 [2024-07-25 15:18:51.851815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.763 [2024-07-25 15:18:51.851823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.763 [2024-07-25 15:18:51.851832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.763 [2024-07-25 15:18:51.851840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.763 [2024-07-25 15:18:51.851850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.763 [2024-07-25 15:18:51.851859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.763 [2024-07-25 15:18:51.851868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 
nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.763 [2024-07-25 15:18:51.851877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.763 [2024-07-25 15:18:51.851886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.763 [2024-07-25 15:18:51.851894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.763 [2024-07-25 15:18:51.851903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.763 [2024-07-25 15:18:51.851910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.763 [2024-07-25 15:18:51.851920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.763 [2024-07-25 15:18:51.851928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.763 [2024-07-25 15:18:51.851938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.763 [2024-07-25 15:18:51.851945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.763 [2024-07-25 15:18:51.851955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.763 [2024-07-25 15:18:51.851963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:59.763 [2024-07-25 15:18:51.851974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.763 [2024-07-25 15:18:51.851982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.763 [2024-07-25 15:18:51.851991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.763 [2024-07-25 15:18:51.851999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.763 [2024-07-25 15:18:51.852010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.763 [2024-07-25 15:18:51.852018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.763 [2024-07-25 15:18:51.852028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.763 [2024-07-25 15:18:51.852035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.763 [2024-07-25 15:18:51.852045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.763 [2024-07-25 15:18:51.852053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.763 [2024-07-25 15:18:51.852062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.763 [2024-07-25 15:18:51.852070] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.763 [2024-07-25 15:18:51.852079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.763 [2024-07-25 15:18:51.852087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.763 [2024-07-25 15:18:51.852096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.763 [2024-07-25 15:18:51.852104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.763 [2024-07-25 15:18:51.852113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.763 [2024-07-25 15:18:51.852121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.763 [2024-07-25 15:18:51.852131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.763 [2024-07-25 15:18:51.852138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.763 [2024-07-25 15:18:51.852148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.763 [2024-07-25 15:18:51.852155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.763 [2024-07-25 15:18:51.852164] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x2611d70 is same with the state(5) to be set 00:22:59.763 [2024-07-25 15:18:51.853674] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:59.764 [2024-07-25 15:18:51.853695] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:59.764 [2024-07-25 15:18:51.853705] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:22:59.764 [2024-07-25 15:18:51.853719] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:22:59.764 [2024-07-25 15:18:51.853729] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:22:59.764 [2024-07-25 15:18:51.854428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:59.764 [2024-07-25 15:18:51.854468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eadbd0 with addr=10.0.0.2, port=4420 00:22:59.764 [2024-07-25 15:18:51.854479] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eadbd0 is same with the state(5) to be set 00:22:59.764 [2024-07-25 15:18:51.854496] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:22:59.764 [2024-07-25 15:18:51.854503] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:22:59.764 [2024-07-25 15:18:51.854512] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 
00:22:59.764 [2024-07-25 15:18:51.854530] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:22:59.764 [2024-07-25 15:18:51.854538] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:22:59.764 [2024-07-25 15:18:51.854545] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:22:59.764 [2024-07-25 15:18:51.854592] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:59.764 [2024-07-25 15:18:51.854606] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:59.764 [2024-07-25 15:18:51.854622] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:59.764 [2024-07-25 15:18:51.854633] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:22:59.764 [2024-07-25 15:18:51.854648] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eadbd0 (9): Bad file descriptor 00:22:59.764 [2024-07-25 15:18:51.854994] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:22:59.764 task offset: 23552 on job bdev=Nvme6n1 fails 00:22:59.764 00:22:59.764 Latency(us) 00:22:59.764 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:59.764 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:59.764 Job: Nvme1n1 ended in about 0.97 seconds with error 00:22:59.764 Verification LBA range: start 0x0 length 0x400 00:22:59.764 Nvme1n1 : 0.97 197.39 12.34 65.80 0.00 240361.39 29709.65 302339.41 00:22:59.764 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:59.764 Job: Nvme2n1 ended in about 0.98 seconds with error 00:22:59.764 Verification LBA range: start 0x0 length 0x400 00:22:59.764 Nvme2n1 : 0.98 130.26 8.14 65.13 0.00 317537.28 32331.09 300591.79 00:22:59.764 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:59.764 Job: Nvme3n1 ended in about 0.99 seconds with error 00:22:59.764 Verification LBA range: start 0x0 length 0x400 00:22:59.764 Nvme3n1 : 0.99 64.50 4.03 64.50 0.00 471529.81 63788.37 365253.97 00:22:59.764 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:59.764 Job: Nvme4n1 ended in about 0.99 seconds with error 00:22:59.764 Verification LBA range: start 0x0 length 0x400 00:22:59.764 Nvme4n1 : 0.99 64.34 4.02 64.34 0.00 463051.09 58108.59 396711.25 00:22:59.764 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:59.764 Job: Nvme5n1 ended in about 1.00 seconds with error 00:22:59.764 Verification LBA range: start 0x0 length 0x400 00:22:59.764 Nvme5n1 : 1.00 128.36 8.02 64.18 0.00 303032.32 32549.55 368749.23 00:22:59.764 Job: Nvme6n1 (Core Mask 0x1, workload: 
verify, depth: 64, IO size: 65536) 00:22:59.764 Job: Nvme6n1 ended in about 0.97 seconds with error 00:22:59.764 Verification LBA range: start 0x0 length 0x400 00:22:59.764 Nvme6n1 : 0.97 132.21 8.26 66.10 0.00 286864.50 32768.00 344282.45 00:22:59.764 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:59.764 Job: Nvme7n1 ended in about 1.00 seconds with error 00:22:59.764 Verification LBA range: start 0x0 length 0x400 00:22:59.764 Nvme7n1 : 1.00 132.06 8.25 64.03 0.00 284928.91 29928.11 304087.04 00:22:59.764 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:59.764 Job: Nvme8n1 ended in about 0.97 seconds with error 00:22:59.764 Verification LBA range: start 0x0 length 0x400 00:22:59.764 Nvme8n1 : 0.97 132.04 8.25 66.02 0.00 274276.69 33204.91 340787.20 00:22:59.764 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:59.764 Job: Nvme9n1 ended in about 0.98 seconds with error 00:22:59.764 Verification LBA range: start 0x0 length 0x400 00:22:59.764 Nvme9n1 : 0.98 130.59 8.16 65.30 0.00 271507.91 34297.17 342534.83 00:22:59.764 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:59.764 Job: Nvme10n1 ended in about 0.97 seconds with error 00:22:59.764 Verification LBA range: start 0x0 length 0x400 00:22:59.764 Nvme10n1 : 0.97 131.40 8.21 65.70 0.00 263159.61 8028.16 377487.36 00:22:59.764 =================================================================================================================== 00:22:59.764 Total : 1243.15 77.70 651.09 0.00 304596.54 8028.16 396711.25 00:22:59.764 [2024-07-25 15:18:51.879005] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:22:59.764 [2024-07-25 15:18:51.879034] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:22:59.764 [2024-07-25 15:18:51.879047] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:59.764 [2024-07-25 15:18:51.879055] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:59.764 [2024-07-25 15:18:51.879413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:59.764 [2024-07-25 15:18:51.879459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0250 with addr=10.0.0.2, port=4420 00:22:59.764 [2024-07-25 15:18:51.879471] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0250 is same with the state(5) to be set 00:22:59.764 [2024-07-25 15:18:51.879994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:59.764 [2024-07-25 15:18:51.880007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf1e30 with addr=10.0.0.2, port=4420 00:22:59.764 [2024-07-25 15:18:51.880015] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf1e30 is same with the state(5) to be set 00:22:59.764 [2024-07-25 15:18:51.880608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:59.764 [2024-07-25 15:18:51.880646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d074d0 with addr=10.0.0.2, port=4420 00:22:59.764 [2024-07-25 15:18:51.880658] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d074d0 is same with the state(5) to be set 00:22:59.764 [2024-07-25 15:18:51.882292] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:59.764 [2024-07-25 15:18:51.882310] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:22:59.764 [2024-07-25 15:18:51.882739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:59.764 [2024-07-25 15:18:51.882756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of 
tqpair=0x1d22270 with addr=10.0.0.2, port=4420 00:22:59.764 [2024-07-25 15:18:51.882763] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d22270 is same with the state(5) to be set 00:22:59.764 [2024-07-25 15:18:51.883394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:59.764 [2024-07-25 15:18:51.883431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea2f80 with addr=10.0.0.2, port=4420 00:22:59.764 [2024-07-25 15:18:51.883444] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea2f80 is same with the state(5) to be set 00:22:59.764 [2024-07-25 15:18:51.883461] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0250 (9): Bad file descriptor 00:22:59.764 [2024-07-25 15:18:51.883473] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf1e30 (9): Bad file descriptor 00:22:59.764 [2024-07-25 15:18:51.883488] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d074d0 (9): Bad file descriptor 00:22:59.764 [2024-07-25 15:18:51.883498] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:22:59.764 [2024-07-25 15:18:51.883505] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:22:59.764 [2024-07-25 15:18:51.883513] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:22:59.764 [2024-07-25 15:18:51.883572] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:59.764 [2024-07-25 15:18:51.883588] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:22:59.764 [2024-07-25 15:18:51.883600] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:59.764 [2024-07-25 15:18:51.883611] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:59.764 [2024-07-25 15:18:51.883696] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:59.764 [2024-07-25 15:18:51.883982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:59.764 [2024-07-25 15:18:51.884002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce45d0 with addr=10.0.0.2, port=4420 00:22:59.764 [2024-07-25 15:18:51.884011] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce45d0 is same with the state(5) to be set 00:22:59.764 [2024-07-25 15:18:51.884582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:59.764 [2024-07-25 15:18:51.884621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1836340 with addr=10.0.0.2, port=4420 00:22:59.764 [2024-07-25 15:18:51.884632] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1836340 is same with the state(5) to be set 00:22:59.764 [2024-07-25 15:18:51.884646] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d22270 (9): Bad file descriptor 00:22:59.765 [2024-07-25 15:18:51.884657] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ea2f80 (9): Bad file descriptor 00:22:59.765 [2024-07-25 15:18:51.884666] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:22:59.765 [2024-07-25 15:18:51.884673] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:22:59.765 [2024-07-25 
15:18:51.884681] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:22:59.765 [2024-07-25 15:18:51.884695] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:22:59.765 [2024-07-25 15:18:51.884702] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:22:59.765 [2024-07-25 15:18:51.884709] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:22:59.765 [2024-07-25 15:18:51.884721] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:22:59.765 [2024-07-25 15:18:51.884727] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:22:59.765 [2024-07-25 15:18:51.884735] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:22:59.765 [2024-07-25 15:18:51.884810] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:22:59.765 [2024-07-25 15:18:51.884823] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:22:59.765 [2024-07-25 15:18:51.884832] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:59.765 [2024-07-25 15:18:51.884839] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:59.765 [2024-07-25 15:18:51.884849] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:59.765 [2024-07-25 15:18:51.884872] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce45d0 (9): Bad file descriptor 00:22:59.765 [2024-07-25 15:18:51.884883] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1836340 (9): Bad file descriptor 00:22:59.765 [2024-07-25 15:18:51.884892] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:22:59.765 [2024-07-25 15:18:51.884898] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:22:59.765 [2024-07-25 15:18:51.884905] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:22:59.765 [2024-07-25 15:18:51.884914] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:22:59.765 [2024-07-25 15:18:51.884921] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:22:59.765 [2024-07-25 15:18:51.884929] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:22:59.765 [2024-07-25 15:18:51.884960] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:59.765 [2024-07-25 15:18:51.884968] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:59.765 [2024-07-25 15:18:51.885557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:59.765 [2024-07-25 15:18:51.885594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d10cb0 with addr=10.0.0.2, port=4420 00:22:59.765 [2024-07-25 15:18:51.885605] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d10cb0 is same with the state(5) to be set 00:22:59.765 [2024-07-25 15:18:51.886118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:59.765 [2024-07-25 15:18:51.886132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eace70 with addr=10.0.0.2, port=4420 00:22:59.765 [2024-07-25 15:18:51.886140] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eace70 is same with the state(5) to be set 00:22:59.765 [2024-07-25 15:18:51.886148] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:59.765 [2024-07-25 15:18:51.886154] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:59.765 [2024-07-25 15:18:51.886162] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:59.765 [2024-07-25 15:18:51.886176] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:22:59.765 [2024-07-25 15:18:51.886182] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:22:59.765 [2024-07-25 15:18:51.886189] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:22:59.765 [2024-07-25 15:18:51.886249] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:59.765 [2024-07-25 15:18:51.886258] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:59.765 [2024-07-25 15:18:51.886268] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d10cb0 (9): Bad file descriptor 00:22:59.765 [2024-07-25 15:18:51.886278] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eace70 (9): Bad file descriptor 00:22:59.765 [2024-07-25 15:18:51.886305] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:22:59.765 [2024-07-25 15:18:51.886313] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:22:59.765 [2024-07-25 15:18:51.886320] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:22:59.765 [2024-07-25 15:18:51.886334] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:22:59.765 [2024-07-25 15:18:51.886340] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:22:59.765 [2024-07-25 15:18:51.886347] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:22:59.765 [2024-07-25 15:18:51.886388] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:59.765 [2024-07-25 15:18:51.886396] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:00.027 15:18:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:23:00.027 15:18:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:23:00.970 15:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 323738 00:23:00.970 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (323738) - No such process 00:23:00.971 15:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:23:00.971 15:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:23:00.971 15:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:23:00.971 15:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:00.971 15:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:00.971 15:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:23:00.971 15:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:00.971 15:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:23:00.971 15:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:00.971 15:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:23:00.971 15:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in 
{1..20} 00:23:00.971 15:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:00.971 rmmod nvme_tcp 00:23:00.971 rmmod nvme_fabrics 00:23:00.971 rmmod nvme_keyring 00:23:00.971 15:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:00.971 15:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:23:00.971 15:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:23:00.971 15:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:23:00.971 15:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:00.971 15:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:00.971 15:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:00.971 15:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:00.971 15:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:00.971 15:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:00.971 15:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:00.971 15:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:03.532 15:18:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:03.532 00:23:03.532 real 0m7.828s 00:23:03.532 
user 0m19.110s 00:23:03.532 sys 0m1.237s 00:23:03.532 15:18:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:03.532 15:18:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:03.532 ************************************ 00:23:03.532 END TEST nvmf_shutdown_tc3 00:23:03.532 ************************************ 00:23:03.532 15:18:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:23:03.532 00:23:03.532 real 0m32.736s 00:23:03.532 user 1m17.022s 00:23:03.532 sys 0m9.525s 00:23:03.532 15:18:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:03.532 15:18:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:03.532 ************************************ 00:23:03.532 END TEST nvmf_shutdown 00:23:03.532 ************************************ 00:23:03.532 15:18:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@66 -- # trap - SIGINT SIGTERM EXIT 00:23:03.532 00:23:03.532 real 11m32.636s 00:23:03.532 user 24m41.020s 00:23:03.532 sys 3m24.567s 00:23:03.532 15:18:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:03.532 15:18:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:03.532 ************************************ 00:23:03.532 END TEST nvmf_target_extra 00:23:03.532 ************************************ 00:23:03.532 15:18:55 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:23:03.532 15:18:55 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:03.532 15:18:55 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:03.532 15:18:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:03.532 
************************************ 00:23:03.532 START TEST nvmf_host 00:23:03.532 ************************************ 00:23:03.532 15:18:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:23:03.532 * Looking for test storage... 00:23:03.532 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:23:03.532 15:18:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:03.532 15:18:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:23:03.532 15:18:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:03.532 15:18:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:03.532 15:18:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:03.532 15:18:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:03.532 15:18:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:03.532 15:18:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:03.532 15:18:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:03.532 15:18:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:03.532 15:18:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:03.532 15:18:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:03.532 15:18:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:03.532 15:18:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:03.532 15:18:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:03.532 15:18:55 nvmf_tcp.nvmf_host -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:03.532 15:18:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:03.532 15:18:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:03.532 15:18:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:03.532 15:18:55 nvmf_tcp.nvmf_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:03.532 15:18:55 nvmf_tcp.nvmf_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:03.532 15:18:55 nvmf_tcp.nvmf_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:03.532 15:18:55 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:03.532 15:18:55 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:03.532 15:18:55 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:03.532 15:18:55 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:23:03.532 15:18:55 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:03.532 15:18:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@47 -- # : 0 00:23:03.532 15:18:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:03.532 15:18:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:03.532 15:18:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:03.532 15:18:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:03.532 15:18:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:03.532 15:18:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:03.532 15:18:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:03.532 15:18:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:03.532 15:18:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 
00:23:03.532 15:18:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:23:03.532 15:18:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:23:03.532 15:18:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:23:03.532 15:18:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:03.532 15:18:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:03.532 15:18:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:03.532 ************************************ 00:23:03.532 START TEST nvmf_multicontroller 00:23:03.532 ************************************ 00:23:03.532 15:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:23:03.532 * Looking for test storage... 
00:23:03.532 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:03.532 15:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:03.532 15:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:23:03.532 15:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:03.532 15:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:03.532 15:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:03.532 15:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:03.532 15:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:03.532 15:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:03.532 15:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:03.533 15:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:03.533 15:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:03.533 15:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:03.533 15:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:03.533 15:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:03.533 15:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:03.533 15:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller 
-- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:03.533 15:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:03.533 15:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:03.533 15:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:03.533 15:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:03.533 15:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:03.533 15:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:03.533 15:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:03.533 15:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:03.533 15:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:03.533 15:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:23:03.533 15:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:03.533 15:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:23:03.533 15:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:03.533 15:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:03.533 15:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:03.533 15:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:03.533 15:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:03.533 15:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:03.533 15:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:03.533 15:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:03.533 15:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:03.533 15:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:03.533 15:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:23:03.533 15:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:23:03.533 15:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:03.533 15:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:23:03.533 15:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:23:03.533 15:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:03.533 15:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:03.533 15:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:03.533 15:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:03.533 15:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:03.533 15:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:03.533 15:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:03.533 15:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:03.533 15:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:03.533 15:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:03.533 15:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:23:03.533 15:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:11.682 15:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:11.682 15:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@291 -- # pci_devs=() 00:23:11.682 15:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:11.682 15:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:11.682 15:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:11.682 15:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:11.682 15:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:11.682 15:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:23:11.682 15:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:11.682 15:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:23:11.682 15:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:23:11.682 15:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:23:11.682 15:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:23:11.682 15:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:23:11.682 15:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:23:11.682 15:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:11.682 15:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:11.682 15:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:11.682 15:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:11.682 15:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:11.682 15:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:11.682 15:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:11.682 15:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:11.682 15:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:11.682 15:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:11.682 15:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:11.682 15:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:11.682 15:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:11.682 15:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:11.682 15:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:11.682 15:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:11.682 15:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:11.682 15:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:11.682 15:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:11.682 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:11.682 15:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:11.682 15:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice 
== unbound ]] 00:23:11.682 15:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:11.682 15:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:11.682 15:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:11.682 15:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:11.682 15:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:11.682 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:11.682 15:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:11.682 15:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:11.682 15:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:11.682 15:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:11.682 15:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:11.682 15:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:11.682 15:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:11.682 15:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:11.682 15:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:11.682 15:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:11.682 15:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:11.682 15:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev 
in "${!pci_net_devs[@]}" 00:23:11.682 15:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:11.682 15:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:11.682 15:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:11.682 15:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:11.682 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:11.682 15:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:11.682 15:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:11.682 15:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:11.682 15:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:11.682 15:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:11.682 15:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:11.682 15:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:11.682 15:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:11.682 15:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:11.682 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:11.682 15:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:11.682 15:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:11.682 15:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@414 -- # is_hw=yes 00:23:11.682 15:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:11.682 15:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:11.682 15:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:11.682 15:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:11.682 15:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:11.682 15:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:11.682 15:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:11.682 15:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:11.682 15:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:11.682 15:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:11.682 15:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:11.682 15:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:11.682 15:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:11.682 15:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:11.682 15:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:11.682 15:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:11.682 15:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:11.682 15:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:11.682 15:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:11.682 15:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:11.682 15:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:11.682 15:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:11.682 15:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:11.682 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:11.682 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.643 ms 00:23:11.682 00:23:11.682 --- 10.0.0.2 ping statistics --- 00:23:11.682 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:11.682 rtt min/avg/max/mdev = 0.643/0.643/0.643/0.000 ms 00:23:11.682 15:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:11.682 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:11.682 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.347 ms 00:23:11.682 00:23:11.683 --- 10.0.0.1 ping statistics --- 00:23:11.683 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:11.683 rtt min/avg/max/mdev = 0.347/0.347/0.347/0.000 ms 00:23:11.683 15:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:11.683 15:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:23:11.683 15:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:11.683 15:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:11.683 15:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:11.683 15:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:11.683 15:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:11.683 15:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:11.683 15:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:11.683 15:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:23:11.683 15:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:11.683 15:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:11.683 15:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:11.683 15:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=328827 00:23:11.683 15:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 328827 00:23:11.683 15:19:02 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:11.683 15:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 328827 ']' 00:23:11.683 15:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:11.683 15:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:11.683 15:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:11.683 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:11.683 15:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:11.683 15:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:11.683 [2024-07-25 15:19:02.922968] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:23:11.683 [2024-07-25 15:19:02.923025] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:11.683 EAL: No free 2048 kB hugepages reported on node 1 00:23:11.683 [2024-07-25 15:19:03.008894] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:11.683 [2024-07-25 15:19:03.102365] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:11.683 [2024-07-25 15:19:03.102428] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:11.683 [2024-07-25 15:19:03.102437] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:11.683 [2024-07-25 15:19:03.102444] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:11.683 [2024-07-25 15:19:03.102450] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:11.683 [2024-07-25 15:19:03.102585] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:11.683 [2024-07-25 15:19:03.102751] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:11.683 [2024-07-25 15:19:03.102752] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:11.683 15:19:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:11.683 15:19:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:23:11.683 15:19:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:11.683 15:19:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:11.683 15:19:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:11.683 15:19:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:11.683 15:19:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:11.683 15:19:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.683 15:19:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:11.683 [2024-07-25 15:19:03.751588] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:11.683 15:19:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.683 15:19:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:11.683 15:19:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.683 15:19:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:11.683 Malloc0 00:23:11.683 15:19:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.683 15:19:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:11.683 15:19:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.683 15:19:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:11.683 15:19:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.683 15:19:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:11.683 15:19:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.683 15:19:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:11.683 15:19:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.683 15:19:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:11.683 15:19:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.683 15:19:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:11.683 [2024-07-25 
15:19:03.817301] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:11.683 15:19:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.683 15:19:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:11.683 15:19:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.683 15:19:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:11.683 [2024-07-25 15:19:03.829232] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:11.683 15:19:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.683 15:19:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:11.683 15:19:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.683 15:19:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:11.683 Malloc1 00:23:11.683 15:19:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.683 15:19:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:23:11.683 15:19:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.683 15:19:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:11.683 15:19:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.683 15:19:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:23:11.683 15:19:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.683 15:19:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:11.944 15:19:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.944 15:19:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:23:11.944 15:19:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.944 15:19:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:11.944 15:19:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.944 15:19:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:23:11.944 15:19:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.944 15:19:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:11.944 15:19:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.944 15:19:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=328990 00:23:11.944 15:19:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:11.944 15:19:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w 
write -t 1 -f 00:23:11.945 15:19:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 328990 /var/tmp/bdevperf.sock 00:23:11.945 15:19:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 328990 ']' 00:23:11.945 15:19:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:11.945 15:19:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:11.945 15:19:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:11.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:11.945 15:19:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:11.945 15:19:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:12.888 15:19:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:12.888 15:19:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:23:12.888 15:19:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:23:12.888 15:19:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:12.888 15:19:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:12.888 NVMe0n1 00:23:12.888 15:19:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:12.888 15:19:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:12.888 15:19:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:12.888 15:19:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:23:12.888 15:19:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:12.888 15:19:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:12.888 1 00:23:12.888 15:19:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:23:12.888 15:19:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:23:12.888 15:19:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:23:12.888 15:19:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:12.888 15:19:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:12.888 15:19:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:12.888 15:19:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:12.888 15:19:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 
00:23:12.888 15:19:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:12.888 15:19:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:12.888 request: 00:23:12.888 { 00:23:12.888 "name": "NVMe0", 00:23:12.888 "trtype": "tcp", 00:23:12.888 "traddr": "10.0.0.2", 00:23:12.888 "adrfam": "ipv4", 00:23:12.888 "trsvcid": "4420", 00:23:12.888 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:12.888 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:23:12.888 "hostaddr": "10.0.0.2", 00:23:12.888 "hostsvcid": "60000", 00:23:12.888 "prchk_reftag": false, 00:23:12.888 "prchk_guard": false, 00:23:12.888 "hdgst": false, 00:23:12.888 "ddgst": false, 00:23:12.888 "method": "bdev_nvme_attach_controller", 00:23:12.888 "req_id": 1 00:23:12.888 } 00:23:12.888 Got JSON-RPC error response 00:23:12.888 response: 00:23:12.888 { 00:23:12.888 "code": -114, 00:23:12.888 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:23:12.888 } 00:23:12.888 15:19:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:12.888 15:19:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:23:12.888 15:19:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:12.888 15:19:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:12.888 15:19:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:12.888 15:19:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:23:12.888 15:19:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:23:12.888 15:19:04 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:23:12.888 15:19:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:12.888 15:19:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:12.888 15:19:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:12.888 15:19:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:12.888 15:19:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:23:12.889 15:19:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:12.889 15:19:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:12.889 request: 00:23:12.889 { 00:23:12.889 "name": "NVMe0", 00:23:12.889 "trtype": "tcp", 00:23:12.889 "traddr": "10.0.0.2", 00:23:12.889 "adrfam": "ipv4", 00:23:12.889 "trsvcid": "4420", 00:23:12.889 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:12.889 "hostaddr": "10.0.0.2", 00:23:12.889 "hostsvcid": "60000", 00:23:12.889 "prchk_reftag": false, 00:23:12.889 "prchk_guard": false, 00:23:12.889 "hdgst": false, 00:23:12.889 "ddgst": false, 00:23:12.889 "method": "bdev_nvme_attach_controller", 00:23:12.889 "req_id": 1 00:23:12.889 } 00:23:12.889 Got JSON-RPC error response 00:23:12.889 response: 00:23:12.889 { 00:23:12.889 "code": -114, 00:23:12.889 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:23:12.889 } 00:23:12.889 
15:19:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:12.889 15:19:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:23:12.889 15:19:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:12.889 15:19:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:12.889 15:19:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:12.889 15:19:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:23:12.889 15:19:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:23:12.889 15:19:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:23:12.889 15:19:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:12.889 15:19:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:12.889 15:19:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:12.889 15:19:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:12.889 15:19:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:23:12.889 15:19:04 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:12.889 15:19:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:12.889 request: 00:23:12.889 { 00:23:12.889 "name": "NVMe0", 00:23:12.889 "trtype": "tcp", 00:23:12.889 "traddr": "10.0.0.2", 00:23:12.889 "adrfam": "ipv4", 00:23:12.889 "trsvcid": "4420", 00:23:12.889 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:12.889 "hostaddr": "10.0.0.2", 00:23:12.889 "hostsvcid": "60000", 00:23:12.889 "prchk_reftag": false, 00:23:12.889 "prchk_guard": false, 00:23:12.889 "hdgst": false, 00:23:12.889 "ddgst": false, 00:23:12.889 "multipath": "disable", 00:23:12.889 "method": "bdev_nvme_attach_controller", 00:23:12.889 "req_id": 1 00:23:12.889 } 00:23:12.889 Got JSON-RPC error response 00:23:12.889 response: 00:23:12.889 { 00:23:12.889 "code": -114, 00:23:12.889 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:23:12.889 } 00:23:12.889 15:19:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:12.889 15:19:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:23:12.889 15:19:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:12.889 15:19:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:12.889 15:19:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:12.889 15:19:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:23:12.889 15:19:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:23:12.889 15:19:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:23:12.889 15:19:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:12.889 15:19:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:12.889 15:19:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:12.889 15:19:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:12.889 15:19:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:23:12.889 15:19:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:12.889 15:19:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:12.889 request: 00:23:12.889 { 00:23:12.889 "name": "NVMe0", 00:23:12.889 "trtype": "tcp", 00:23:12.889 "traddr": "10.0.0.2", 00:23:12.889 "adrfam": "ipv4", 00:23:12.889 "trsvcid": "4420", 00:23:12.889 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:12.889 "hostaddr": "10.0.0.2", 00:23:12.889 "hostsvcid": "60000", 00:23:12.889 "prchk_reftag": false, 00:23:12.889 "prchk_guard": false, 00:23:12.889 "hdgst": false, 00:23:12.889 "ddgst": false, 00:23:12.889 "multipath": "failover", 00:23:12.889 "method": "bdev_nvme_attach_controller", 00:23:12.889 "req_id": 1 00:23:12.889 } 00:23:12.889 Got JSON-RPC error response 00:23:12.889 response: 00:23:12.889 { 00:23:12.889 "code": -114, 00:23:12.889 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:23:12.889 
} 00:23:12.889 15:19:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:12.889 15:19:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:23:12.889 15:19:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:12.889 15:19:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:12.889 15:19:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:12.889 15:19:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:12.889 15:19:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:12.889 15:19:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:13.150 00:23:13.150 15:19:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.150 15:19:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:13.150 15:19:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.150 15:19:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:13.150 15:19:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.150 15:19:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:23:13.150 15:19:05 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.150 15:19:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:13.150 00:23:13.150 15:19:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.151 15:19:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:13.151 15:19:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:23:13.151 15:19:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.151 15:19:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:13.151 15:19:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.151 15:19:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:23:13.151 15:19:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:14.535 0 00:23:14.535 15:19:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:23:14.535 15:19:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.535 15:19:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:14.535 15:19:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.535 15:19:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 328990 00:23:14.535 15:19:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' -z 
328990 ']' 00:23:14.535 15:19:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 328990 00:23:14.535 15:19:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:23:14.535 15:19:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:14.535 15:19:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 328990 00:23:14.535 15:19:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:14.535 15:19:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:14.535 15:19:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 328990' 00:23:14.535 killing process with pid 328990 00:23:14.535 15:19:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 328990 00:23:14.535 15:19:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 328990 00:23:14.535 15:19:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:14.535 15:19:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.535 15:19:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:14.535 15:19:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.535 15:19:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:23:14.535 15:19:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.535 15:19:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # 
set +x 00:23:14.535 15:19:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.535 15:19:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:23:14.535 15:19:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:14.535 15:19:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:23:14.535 15:19:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:23:14.535 15:19:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 00:23:14.535 15:19:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:23:14.535 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:23:14.535 [2024-07-25 15:19:03.948672] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:23:14.536 [2024-07-25 15:19:03.948729] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid328990 ] 00:23:14.536 EAL: No free 2048 kB hugepages reported on node 1 00:23:14.536 [2024-07-25 15:19:04.008101] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:14.536 [2024-07-25 15:19:04.072376] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:14.536 [2024-07-25 15:19:05.259299] bdev.c:4633:bdev_name_add: *ERROR*: Bdev name 1fafbcf2-4c5e-4c76-aa79-2b6deacd7d2d already exists 00:23:14.536 [2024-07-25 15:19:05.259329] bdev.c:7755:bdev_register: *ERROR*: Unable to add uuid:1fafbcf2-4c5e-4c76-aa79-2b6deacd7d2d alias for bdev NVMe1n1 00:23:14.536 [2024-07-25 15:19:05.259338] bdev_nvme.c:4318:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:23:14.536 Running I/O for 1 seconds... 
00:23:14.536 00:23:14.536 Latency(us) 00:23:14.536 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:14.536 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:23:14.536 NVMe0n1 : 1.00 27890.09 108.95 0.00 0.00 4574.71 4014.08 21736.11 00:23:14.536 =================================================================================================================== 00:23:14.536 Total : 27890.09 108.95 0.00 0.00 4574.71 4014.08 21736.11 00:23:14.536 Received shutdown signal, test time was about 1.000000 seconds 00:23:14.536 00:23:14.536 Latency(us) 00:23:14.536 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:14.536 =================================================================================================================== 00:23:14.536 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:14.536 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:23:14.536 15:19:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:14.536 15:19:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:23:14.536 15:19:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:23:14.536 15:19:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:14.536 15:19:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:23:14.536 15:19:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:14.536 15:19:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:23:14.536 15:19:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:14.536 15:19:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 
00:23:14.536 rmmod nvme_tcp 00:23:14.536 rmmod nvme_fabrics 00:23:14.536 rmmod nvme_keyring 00:23:14.536 15:19:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:14.536 15:19:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:23:14.536 15:19:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:23:14.536 15:19:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 328827 ']' 00:23:14.536 15:19:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 328827 00:23:14.536 15:19:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' -z 328827 ']' 00:23:14.536 15:19:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 328827 00:23:14.536 15:19:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:23:14.536 15:19:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:14.536 15:19:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 328827 00:23:14.797 15:19:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:14.797 15:19:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:14.797 15:19:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 328827' 00:23:14.797 killing process with pid 328827 00:23:14.797 15:19:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 328827 00:23:14.797 15:19:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 328827 00:23:14.797 15:19:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:14.797 15:19:06 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:14.797 15:19:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:14.797 15:19:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:14.797 15:19:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:14.797 15:19:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:14.797 15:19:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:14.797 15:19:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:17.343 15:19:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:17.343 00:23:17.343 real 0m13.446s 00:23:17.343 user 0m16.400s 00:23:17.343 sys 0m6.097s 00:23:17.343 15:19:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:17.343 15:19:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:17.343 ************************************ 00:23:17.343 END TEST nvmf_multicontroller 00:23:17.343 ************************************ 00:23:17.343 15:19:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:23:17.343 15:19:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:17.343 15:19:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:17.343 15:19:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.343 ************************************ 00:23:17.343 START TEST nvmf_aer 00:23:17.343 ************************************ 00:23:17.343 15:19:09 
nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:23:17.343 * Looking for test storage... 00:23:17.343 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:17.343 15:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:17.343 15:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:23:17.343 15:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:17.343 15:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:17.343 15:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:17.343 15:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:17.343 15:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:17.343 15:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:17.343 15:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:17.343 15:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:17.343 15:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:17.343 15:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:17.343 15:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:17.343 15:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:17.343 15:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:17.343 15:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:17.343 15:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:17.343 15:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:17.343 15:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:17.343 15:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:17.343 15:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:17.343 15:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:17.343 15:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:17.344 15:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:17.344 15:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:17.344 15:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:23:17.344 15:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:17.344 15:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:23:17.344 15:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:17.344 15:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:17.344 15:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:17.344 15:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:17.344 15:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:17.344 15:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:17.344 15:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:17.344 15:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:17.344 15:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:23:17.344 15:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:17.344 15:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:17.344 15:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:17.344 15:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:17.344 15:19:09 nvmf_tcp.nvmf_host.nvmf_aer 
-- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:17.344 15:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:17.344 15:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:17.344 15:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:17.344 15:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:17.344 15:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:17.344 15:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:23:17.344 15:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:23.939 15:19:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:23.939 15:19:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:23:23.939 15:19:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:23.939 15:19:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:23.939 15:19:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:23.939 15:19:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:23.939 15:19:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:23.939 15:19:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:23:23.939 15:19:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:23.939 15:19:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:23:23.939 15:19:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:23:23.939 15:19:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:23:23.939 15:19:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # local -ga 
x722 00:23:23.939 15:19:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:23:23.939 15:19:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:23:23.939 15:19:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:23.939 15:19:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:23.939 15:19:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:23.939 15:19:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:23.939 15:19:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:23.939 15:19:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:23.939 15:19:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:23.939 15:19:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:23.939 15:19:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:23.939 15:19:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:23.939 15:19:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:23.939 15:19:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:23.939 15:19:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:23.939 15:19:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:23.939 15:19:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:23.939 15:19:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 
00:23:23.939 15:19:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:23.939 15:19:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:23.939 15:19:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:23.939 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:23.939 15:19:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:23.939 15:19:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:23.939 15:19:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:23.939 15:19:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:23.939 15:19:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:23.939 15:19:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:23.939 15:19:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:23.939 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:23.939 15:19:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:23.939 15:19:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:23.939 15:19:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:23.939 15:19:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:23.939 15:19:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:23.939 15:19:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:23.939 15:19:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:23.939 15:19:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:23.939 15:19:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@382 
-- # for pci in "${pci_devs[@]}" 00:23:23.939 15:19:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:23.939 15:19:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:23.939 15:19:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:23.939 15:19:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:23.939 15:19:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:23.939 15:19:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:23.939 15:19:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:23.939 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:23.939 15:19:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:23.939 15:19:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:23.939 15:19:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:23.939 15:19:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:23.939 15:19:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:23.939 15:19:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:23.939 15:19:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:23.939 15:19:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:23.939 15:19:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:23.939 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:23.939 15:19:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:23:23.939 15:19:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:23.939 15:19:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:23:23.939 15:19:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:23.939 15:19:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:23.939 15:19:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:23.939 15:19:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:23.939 15:19:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:23.939 15:19:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:23.939 15:19:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:23.939 15:19:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:23.939 15:19:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:23.939 15:19:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:23.939 15:19:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:23.939 15:19:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:23.939 15:19:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:23.939 15:19:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:23.939 15:19:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:23.939 15:19:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:24.201 15:19:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:23:24.201 15:19:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:24.201 15:19:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:24.201 15:19:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:24.201 15:19:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:24.201 15:19:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:24.201 15:19:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:24.201 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:24.201 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.651 ms 00:23:24.201 00:23:24.201 --- 10.0.0.2 ping statistics --- 00:23:24.201 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:24.201 rtt min/avg/max/mdev = 0.651/0.651/0.651/0.000 ms 00:23:24.201 15:19:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:24.201 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:24.201 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.377 ms 00:23:24.201 00:23:24.201 --- 10.0.0.1 ping statistics --- 00:23:24.201 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:24.201 rtt min/avg/max/mdev = 0.377/0.377/0.377/0.000 ms 00:23:24.201 15:19:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:24.201 15:19:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:23:24.201 15:19:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:24.201 15:19:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:24.201 15:19:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:24.201 15:19:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:24.201 15:19:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:24.201 15:19:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:24.201 15:19:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:24.201 15:19:16 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:23:24.201 15:19:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:24.201 15:19:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:24.201 15:19:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:24.201 15:19:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=333660 00:23:24.201 15:19:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 333660 00:23:24.201 15:19:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:24.201 15:19:16 
nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@831 -- # '[' -z 333660 ']' 00:23:24.201 15:19:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:24.201 15:19:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:24.201 15:19:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:24.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:24.201 15:19:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:24.201 15:19:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:24.463 [2024-07-25 15:19:16.422010] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:23:24.463 [2024-07-25 15:19:16.422079] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:24.463 EAL: No free 2048 kB hugepages reported on node 1 00:23:24.463 [2024-07-25 15:19:16.494672] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:24.463 [2024-07-25 15:19:16.570721] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:24.463 [2024-07-25 15:19:16.570760] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:24.463 [2024-07-25 15:19:16.570768] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:24.463 [2024-07-25 15:19:16.570774] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:23:24.463 [2024-07-25 15:19:16.570780] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:24.463 [2024-07-25 15:19:16.570916] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:24.463 [2024-07-25 15:19:16.571039] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:24.463 [2024-07-25 15:19:16.571199] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:24.463 [2024-07-25 15:19:16.571214] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:25.035 15:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:25.035 15:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # return 0 00:23:25.035 15:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:25.035 15:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:25.035 15:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:25.297 15:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:25.297 15:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:25.297 15:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.297 15:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:25.297 [2024-07-25 15:19:17.256154] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:25.297 15:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.297 15:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:23:25.297 15:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.297 15:19:17 
nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:25.297 Malloc0 00:23:25.297 15:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.297 15:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:23:25.297 15:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.297 15:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:25.297 15:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.297 15:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:25.297 15:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.297 15:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:25.297 15:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.297 15:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:25.297 15:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.297 15:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:25.297 [2024-07-25 15:19:17.315522] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:25.297 15:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.297 15:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:23:25.297 15:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.297 15:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:25.297 [ 
00:23:25.297 { 00:23:25.297 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:25.297 "subtype": "Discovery", 00:23:25.297 "listen_addresses": [], 00:23:25.297 "allow_any_host": true, 00:23:25.297 "hosts": [] 00:23:25.297 }, 00:23:25.297 { 00:23:25.297 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:25.297 "subtype": "NVMe", 00:23:25.297 "listen_addresses": [ 00:23:25.297 { 00:23:25.297 "trtype": "TCP", 00:23:25.297 "adrfam": "IPv4", 00:23:25.297 "traddr": "10.0.0.2", 00:23:25.297 "trsvcid": "4420" 00:23:25.297 } 00:23:25.297 ], 00:23:25.297 "allow_any_host": true, 00:23:25.297 "hosts": [], 00:23:25.297 "serial_number": "SPDK00000000000001", 00:23:25.297 "model_number": "SPDK bdev Controller", 00:23:25.297 "max_namespaces": 2, 00:23:25.297 "min_cntlid": 1, 00:23:25.297 "max_cntlid": 65519, 00:23:25.297 "namespaces": [ 00:23:25.297 { 00:23:25.297 "nsid": 1, 00:23:25.297 "bdev_name": "Malloc0", 00:23:25.297 "name": "Malloc0", 00:23:25.297 "nguid": "8C299ABF4E174654A431D6271E88A319", 00:23:25.297 "uuid": "8c299abf-4e17-4654-a431-d6271e88a319" 00:23:25.297 } 00:23:25.297 ] 00:23:25.297 } 00:23:25.297 ] 00:23:25.297 15:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.297 15:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:23:25.297 15:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:23:25.297 15:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=333886 00:23:25.297 15:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:23:25.297 15:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:23:25.297 15:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:23:25.297 15:19:17 
nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:25.297 15:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:23:25.297 15:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:23:25.297 15:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:23:25.297 EAL: No free 2048 kB hugepages reported on node 1 00:23:25.297 15:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:25.297 15:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:23:25.297 15:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:23:25.297 15:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:23:25.558 15:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:25.558 15:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:23:25.558 15:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:23:25.558 15:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:23:25.558 15:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.558 15:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:25.558 Malloc1 00:23:25.558 15:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.558 15:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:23:25.558 15:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.558 15:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:25.558 15:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.558 15:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:23:25.558 15:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.558 15:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:25.558 [ 00:23:25.558 { 00:23:25.558 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:25.558 "subtype": "Discovery", 00:23:25.558 "listen_addresses": [], 00:23:25.558 "allow_any_host": true, 00:23:25.558 "hosts": [] 00:23:25.558 }, 00:23:25.558 { 00:23:25.558 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:25.558 "subtype": "NVMe", 00:23:25.558 "listen_addresses": [ 00:23:25.558 { 00:23:25.558 "trtype": "TCP", 00:23:25.558 "adrfam": "IPv4", 00:23:25.558 "traddr": "10.0.0.2", 00:23:25.558 "trsvcid": "4420" 00:23:25.558 } 00:23:25.558 ], 00:23:25.558 "allow_any_host": true, 00:23:25.558 "hosts": [], 00:23:25.558 "serial_number": "SPDK00000000000001", 00:23:25.558 "model_number": 
"SPDK bdev Controller", 00:23:25.558 "max_namespaces": 2, 00:23:25.558 "min_cntlid": 1, 00:23:25.558 "max_cntlid": 65519, 00:23:25.558 "namespaces": [ 00:23:25.558 { 00:23:25.558 "nsid": 1, 00:23:25.558 "bdev_name": "Malloc0", 00:23:25.558 "name": "Malloc0", 00:23:25.558 "nguid": "8C299ABF4E174654A431D6271E88A319", 00:23:25.558 "uuid": "8c299abf-4e17-4654-a431-d6271e88a319" 00:23:25.558 }, 00:23:25.558 { 00:23:25.558 "nsid": 2, 00:23:25.558 "bdev_name": "Malloc1", 00:23:25.558 "name": "Malloc1", 00:23:25.558 "nguid": "A9065ABDE6F04CD681F39D82FD8F112A", 00:23:25.558 "uuid": "a9065abd-e6f0-4cd6-81f3-9d82fd8f112a" 00:23:25.558 } 00:23:25.558 ] 00:23:25.558 } 00:23:25.558 ] 00:23:25.558 Asynchronous Event Request test 00:23:25.558 Attaching to 10.0.0.2 00:23:25.558 Attached to 10.0.0.2 00:23:25.558 Registering asynchronous event callbacks... 00:23:25.558 Starting namespace attribute notice tests for all controllers... 00:23:25.558 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:23:25.558 aer_cb - Changed Namespace 00:23:25.558 Cleaning up... 
00:23:25.558 15:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.558 15:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 333886 00:23:25.558 15:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:23:25.558 15:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.558 15:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:25.558 15:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.558 15:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:23:25.558 15:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.558 15:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:25.558 15:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.558 15:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:25.558 15:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.558 15:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:25.558 15:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.558 15:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:23:25.558 15:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:23:25.558 15:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:25.558 15:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:23:25.558 15:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:25.558 15:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:23:25.558 15:19:17 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:25.558 15:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:25.558 rmmod nvme_tcp 00:23:25.558 rmmod nvme_fabrics 00:23:25.558 rmmod nvme_keyring 00:23:25.558 15:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:25.558 15:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:23:25.558 15:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:23:25.558 15:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 333660 ']' 00:23:25.558 15:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 333660 00:23:25.558 15:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@950 -- # '[' -z 333660 ']' 00:23:25.558 15:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # kill -0 333660 00:23:25.558 15:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # uname 00:23:25.558 15:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:25.558 15:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 333660 00:23:25.819 15:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:25.819 15:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:25.819 15:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@968 -- # echo 'killing process with pid 333660' 00:23:25.819 killing process with pid 333660 00:23:25.819 15:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@969 -- # kill 333660 00:23:25.819 15:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@974 -- # wait 333660 00:23:25.819 15:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:25.819 15:19:17 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:25.819 15:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:25.819 15:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:25.819 15:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:25.819 15:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:25.819 15:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:25.819 15:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:28.369 15:19:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:28.369 00:23:28.369 real 0m10.952s 00:23:28.369 user 0m7.450s 00:23:28.369 sys 0m5.795s 00:23:28.369 15:19:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:28.369 15:19:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:28.369 ************************************ 00:23:28.369 END TEST nvmf_aer 00:23:28.369 ************************************ 00:23:28.369 15:19:20 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:28.369 15:19:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:28.369 15:19:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:28.369 15:19:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.369 ************************************ 00:23:28.369 START TEST nvmf_async_init 00:23:28.369 ************************************ 00:23:28.369 15:19:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:28.369 * Looking for test storage... 00:23:28.369 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:28.369 15:19:20 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:28.369 15:19:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:23:28.369 15:19:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:28.369 15:19:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:28.369 15:19:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:28.369 15:19:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:28.369 15:19:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:28.369 15:19:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:28.369 15:19:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:28.369 15:19:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:28.369 15:19:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:28.369 15:19:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:28.369 15:19:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:28.369 15:19:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:28.369 15:19:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:23:28.369 15:19:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:28.369 15:19:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:28.369 15:19:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:28.369 15:19:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:28.369 15:19:20 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:28.369 15:19:20 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:28.369 15:19:20 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:28.369 15:19:20 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:28.369 15:19:20 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:28.369 15:19:20 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:28.369 15:19:20 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:23:28.369 15:19:20 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:28.369 15:19:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:23:28.369 15:19:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:28.369 15:19:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:28.369 15:19:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:28.369 15:19:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:28.369 15:19:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:28.369 15:19:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:28.369 15:19:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:28.369 15:19:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:28.369 15:19:20 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:23:28.369 15:19:20 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:23:28.369 15:19:20 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:23:28.369 15:19:20 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:23:28.369 15:19:20 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:23:28.369 15:19:20 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:23:28.369 15:19:20 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=41c6ecd8fbd74505b6e5a4146c0c039b 00:23:28.369 15:19:20 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:23:28.369 15:19:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:28.369 15:19:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:28.369 15:19:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:28.369 15:19:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:28.369 15:19:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:28.369 15:19:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:28.369 15:19:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:28.369 15:19:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:28.369 15:19:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:28.369 15:19:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:28.369 15:19:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:23:28.369 15:19:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:34.970 15:19:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:34.970 15:19:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:23:34.970 15:19:27 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:23:34.970 15:19:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:34.970 15:19:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:34.970 15:19:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:34.970 15:19:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:34.970 15:19:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:23:34.970 15:19:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:34.970 15:19:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:23:34.970 15:19:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:23:34.970 15:19:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:23:34.970 15:19:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:23:34.970 15:19:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:23:34.970 15:19:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:23:34.970 15:19:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:34.970 15:19:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:34.970 15:19:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:34.970 15:19:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:34.970 15:19:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:34.970 15:19:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:34.970 
15:19:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:34.970 15:19:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:34.970 15:19:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:34.970 15:19:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:34.970 15:19:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:34.970 15:19:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:34.970 15:19:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:34.970 15:19:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:34.970 15:19:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:34.970 15:19:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:34.970 15:19:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:34.970 15:19:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:34.970 15:19:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:34.970 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:34.970 15:19:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:34.970 15:19:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:34.970 15:19:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:34.970 15:19:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:34.970 15:19:27 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:34.970 15:19:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:34.970 15:19:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:34.970 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:34.970 15:19:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:34.970 15:19:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:34.970 15:19:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:34.970 15:19:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:34.970 15:19:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:34.970 15:19:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:34.970 15:19:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:34.970 15:19:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:34.970 15:19:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:34.970 15:19:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:34.970 15:19:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:34.971 15:19:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:34.971 15:19:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:34.971 15:19:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:34.971 15:19:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:23:34.971 15:19:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:34.971 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:34.971 15:19:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:34.971 15:19:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:34.971 15:19:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:34.971 15:19:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:34.971 15:19:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:34.971 15:19:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:34.971 15:19:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:34.971 15:19:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:34.971 15:19:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:34.971 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:34.971 15:19:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:34.971 15:19:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:34.971 15:19:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:23:34.971 15:19:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:34.971 15:19:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:34.971 15:19:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:34.971 15:19:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@229 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:23:34.971 15:19:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:34.971 15:19:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:34.971 15:19:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:34.971 15:19:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:34.971 15:19:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:34.971 15:19:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:34.971 15:19:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:34.971 15:19:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:34.971 15:19:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:34.971 15:19:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:34.971 15:19:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:34.971 15:19:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:35.233 15:19:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:35.233 15:19:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:35.233 15:19:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:35.233 15:19:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:35.233 15:19:27 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:35.233 15:19:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:35.233 15:19:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:35.233 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:35.233 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.642 ms 00:23:35.233 00:23:35.233 --- 10.0.0.2 ping statistics --- 00:23:35.233 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:35.233 rtt min/avg/max/mdev = 0.642/0.642/0.642/0.000 ms 00:23:35.233 15:19:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:35.233 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:35.233 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.350 ms 00:23:35.233 00:23:35.233 --- 10.0.0.1 ping statistics --- 00:23:35.233 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:35.233 rtt min/avg/max/mdev = 0.350/0.350/0.350/0.000 ms 00:23:35.233 15:19:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:35.233 15:19:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:23:35.233 15:19:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:35.233 15:19:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:35.233 15:19:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:35.233 15:19:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:35.233 15:19:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:35.233 15:19:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:35.233 15:19:27 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:35.494 15:19:27 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:23:35.495 15:19:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:35.495 15:19:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:35.495 15:19:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:35.495 15:19:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=338114 00:23:35.495 15:19:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 338114 00:23:35.495 15:19:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:23:35.495 15:19:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@831 -- # '[' -z 338114 ']' 00:23:35.495 15:19:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:35.495 15:19:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:35.495 15:19:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:35.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:35.495 15:19:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:35.495 15:19:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:35.495 [2024-07-25 15:19:27.502495] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:23:35.495 [2024-07-25 15:19:27.502585] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:35.495 EAL: No free 2048 kB hugepages reported on node 1 00:23:35.495 [2024-07-25 15:19:27.576459] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:35.495 [2024-07-25 15:19:27.649895] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:35.495 [2024-07-25 15:19:27.649930] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:35.495 [2024-07-25 15:19:27.649938] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:35.495 [2024-07-25 15:19:27.649945] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:35.495 [2024-07-25 15:19:27.649950] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:35.495 [2024-07-25 15:19:27.649969] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:36.438 15:19:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:36.438 15:19:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # return 0 00:23:36.438 15:19:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:36.438 15:19:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:36.438 15:19:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:36.438 15:19:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:36.438 15:19:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:23:36.438 15:19:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:36.438 15:19:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:36.438 [2024-07-25 15:19:28.312914] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:36.438 15:19:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:36.438 15:19:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:23:36.438 15:19:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:36.438 15:19:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:36.438 null0 00:23:36.438 15:19:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:36.438 15:19:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:23:36.438 15:19:28 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:23:36.438 15:19:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:36.438 15:19:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:36.438 15:19:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:23:36.438 15:19:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:36.438 15:19:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:36.438 15:19:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:36.439 15:19:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 41c6ecd8fbd74505b6e5a4146c0c039b 00:23:36.439 15:19:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:36.439 15:19:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:36.439 15:19:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:36.439 15:19:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:36.439 15:19:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:36.439 15:19:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:36.439 [2024-07-25 15:19:28.373174] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:36.439 15:19:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:36.439 15:19:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:23:36.439 15:19:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:36.439 15:19:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:36.439 nvme0n1 00:23:36.439 15:19:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:36.439 15:19:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:36.439 15:19:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:36.439 15:19:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:36.700 [ 00:23:36.700 { 00:23:36.700 "name": "nvme0n1", 00:23:36.700 "aliases": [ 00:23:36.700 "41c6ecd8-fbd7-4505-b6e5-a4146c0c039b" 00:23:36.700 ], 00:23:36.700 "product_name": "NVMe disk", 00:23:36.700 "block_size": 512, 00:23:36.700 "num_blocks": 2097152, 00:23:36.700 "uuid": "41c6ecd8-fbd7-4505-b6e5-a4146c0c039b", 00:23:36.700 "assigned_rate_limits": { 00:23:36.700 "rw_ios_per_sec": 0, 00:23:36.700 "rw_mbytes_per_sec": 0, 00:23:36.700 "r_mbytes_per_sec": 0, 00:23:36.700 "w_mbytes_per_sec": 0 00:23:36.700 }, 00:23:36.700 "claimed": false, 00:23:36.700 "zoned": false, 00:23:36.700 "supported_io_types": { 00:23:36.700 "read": true, 00:23:36.700 "write": true, 00:23:36.700 "unmap": false, 00:23:36.700 "flush": true, 00:23:36.700 "reset": true, 00:23:36.700 "nvme_admin": true, 00:23:36.700 "nvme_io": true, 00:23:36.700 "nvme_io_md": false, 00:23:36.700 "write_zeroes": true, 00:23:36.700 "zcopy": false, 00:23:36.700 "get_zone_info": false, 00:23:36.700 "zone_management": false, 00:23:36.700 "zone_append": false, 00:23:36.700 "compare": true, 00:23:36.700 "compare_and_write": true, 00:23:36.700 "abort": true, 00:23:36.700 "seek_hole": false, 00:23:36.700 "seek_data": false, 00:23:36.700 "copy": true, 00:23:36.700 "nvme_iov_md": false 
00:23:36.700 }, 00:23:36.701 "memory_domains": [ 00:23:36.701 { 00:23:36.701 "dma_device_id": "system", 00:23:36.701 "dma_device_type": 1 00:23:36.701 } 00:23:36.701 ], 00:23:36.701 "driver_specific": { 00:23:36.701 "nvme": [ 00:23:36.701 { 00:23:36.701 "trid": { 00:23:36.701 "trtype": "TCP", 00:23:36.701 "adrfam": "IPv4", 00:23:36.701 "traddr": "10.0.0.2", 00:23:36.701 "trsvcid": "4420", 00:23:36.701 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:36.701 }, 00:23:36.701 "ctrlr_data": { 00:23:36.701 "cntlid": 1, 00:23:36.701 "vendor_id": "0x8086", 00:23:36.701 "model_number": "SPDK bdev Controller", 00:23:36.701 "serial_number": "00000000000000000000", 00:23:36.701 "firmware_revision": "24.09", 00:23:36.701 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:36.701 "oacs": { 00:23:36.701 "security": 0, 00:23:36.701 "format": 0, 00:23:36.701 "firmware": 0, 00:23:36.701 "ns_manage": 0 00:23:36.701 }, 00:23:36.701 "multi_ctrlr": true, 00:23:36.701 "ana_reporting": false 00:23:36.701 }, 00:23:36.701 "vs": { 00:23:36.701 "nvme_version": "1.3" 00:23:36.701 }, 00:23:36.701 "ns_data": { 00:23:36.701 "id": 1, 00:23:36.701 "can_share": true 00:23:36.701 } 00:23:36.701 } 00:23:36.701 ], 00:23:36.701 "mp_policy": "active_passive" 00:23:36.701 } 00:23:36.701 } 00:23:36.701 ] 00:23:36.701 15:19:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:36.701 15:19:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:23:36.701 15:19:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:36.701 15:19:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:36.701 [2024-07-25 15:19:28.645723] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:36.701 [2024-07-25 15:19:28.645783] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x831f40 
(9): Bad file descriptor 00:23:36.701 [2024-07-25 15:19:28.777298] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:23:36.701 15:19:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:36.701 15:19:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:36.701 15:19:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:36.701 15:19:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:36.701 [ 00:23:36.701 { 00:23:36.701 "name": "nvme0n1", 00:23:36.701 "aliases": [ 00:23:36.701 "41c6ecd8-fbd7-4505-b6e5-a4146c0c039b" 00:23:36.701 ], 00:23:36.701 "product_name": "NVMe disk", 00:23:36.701 "block_size": 512, 00:23:36.701 "num_blocks": 2097152, 00:23:36.701 "uuid": "41c6ecd8-fbd7-4505-b6e5-a4146c0c039b", 00:23:36.701 "assigned_rate_limits": { 00:23:36.701 "rw_ios_per_sec": 0, 00:23:36.701 "rw_mbytes_per_sec": 0, 00:23:36.701 "r_mbytes_per_sec": 0, 00:23:36.701 "w_mbytes_per_sec": 0 00:23:36.701 }, 00:23:36.701 "claimed": false, 00:23:36.701 "zoned": false, 00:23:36.701 "supported_io_types": { 00:23:36.701 "read": true, 00:23:36.701 "write": true, 00:23:36.701 "unmap": false, 00:23:36.701 "flush": true, 00:23:36.701 "reset": true, 00:23:36.701 "nvme_admin": true, 00:23:36.701 "nvme_io": true, 00:23:36.701 "nvme_io_md": false, 00:23:36.701 "write_zeroes": true, 00:23:36.701 "zcopy": false, 00:23:36.701 "get_zone_info": false, 00:23:36.701 "zone_management": false, 00:23:36.701 "zone_append": false, 00:23:36.701 "compare": true, 00:23:36.701 "compare_and_write": true, 00:23:36.701 "abort": true, 00:23:36.701 "seek_hole": false, 00:23:36.701 "seek_data": false, 00:23:36.701 "copy": true, 00:23:36.701 "nvme_iov_md": false 00:23:36.701 }, 00:23:36.701 "memory_domains": [ 00:23:36.701 { 00:23:36.701 "dma_device_id": "system", 00:23:36.701 "dma_device_type": 1 
00:23:36.701 } 00:23:36.701 ], 00:23:36.701 "driver_specific": { 00:23:36.701 "nvme": [ 00:23:36.701 { 00:23:36.701 "trid": { 00:23:36.701 "trtype": "TCP", 00:23:36.701 "adrfam": "IPv4", 00:23:36.701 "traddr": "10.0.0.2", 00:23:36.701 "trsvcid": "4420", 00:23:36.701 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:36.701 }, 00:23:36.701 "ctrlr_data": { 00:23:36.701 "cntlid": 2, 00:23:36.701 "vendor_id": "0x8086", 00:23:36.701 "model_number": "SPDK bdev Controller", 00:23:36.701 "serial_number": "00000000000000000000", 00:23:36.701 "firmware_revision": "24.09", 00:23:36.701 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:36.701 "oacs": { 00:23:36.701 "security": 0, 00:23:36.701 "format": 0, 00:23:36.701 "firmware": 0, 00:23:36.701 "ns_manage": 0 00:23:36.701 }, 00:23:36.701 "multi_ctrlr": true, 00:23:36.701 "ana_reporting": false 00:23:36.701 }, 00:23:36.701 "vs": { 00:23:36.701 "nvme_version": "1.3" 00:23:36.701 }, 00:23:36.701 "ns_data": { 00:23:36.701 "id": 1, 00:23:36.701 "can_share": true 00:23:36.701 } 00:23:36.701 } 00:23:36.701 ], 00:23:36.701 "mp_policy": "active_passive" 00:23:36.701 } 00:23:36.701 } 00:23:36.701 ] 00:23:36.701 15:19:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:36.701 15:19:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:36.701 15:19:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:36.701 15:19:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:36.701 15:19:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:36.701 15:19:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:23:36.701 15:19:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.3ExXRRrl05 00:23:36.701 15:19:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n 
NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:36.701 15:19:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.3ExXRRrl05 00:23:36.701 15:19:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:23:36.701 15:19:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:36.701 15:19:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:36.701 15:19:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:36.701 15:19:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:23:36.701 15:19:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:36.701 15:19:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:36.701 [2024-07-25 15:19:28.850378] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:36.701 [2024-07-25 15:19:28.850499] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:36.701 15:19:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:36.701 15:19:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.3ExXRRrl05 00:23:36.701 15:19:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:36.701 15:19:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:36.701 [2024-07-25 15:19:28.862399] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in 
v24.09 00:23:36.701 15:19:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:36.701 15:19:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.3ExXRRrl05 00:23:36.701 15:19:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:36.701 15:19:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:36.701 [2024-07-25 15:19:28.874448] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:36.701 [2024-07-25 15:19:28.874487] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:36.963 nvme0n1 00:23:36.963 15:19:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:36.963 15:19:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:36.963 15:19:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:36.963 15:19:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:36.963 [ 00:23:36.963 { 00:23:36.963 "name": "nvme0n1", 00:23:36.963 "aliases": [ 00:23:36.963 "41c6ecd8-fbd7-4505-b6e5-a4146c0c039b" 00:23:36.963 ], 00:23:36.963 "product_name": "NVMe disk", 00:23:36.963 "block_size": 512, 00:23:36.963 "num_blocks": 2097152, 00:23:36.963 "uuid": "41c6ecd8-fbd7-4505-b6e5-a4146c0c039b", 00:23:36.963 "assigned_rate_limits": { 00:23:36.963 "rw_ios_per_sec": 0, 00:23:36.963 "rw_mbytes_per_sec": 0, 00:23:36.963 "r_mbytes_per_sec": 0, 00:23:36.963 "w_mbytes_per_sec": 0 00:23:36.963 }, 00:23:36.963 "claimed": false, 00:23:36.963 "zoned": false, 00:23:36.963 "supported_io_types": { 
00:23:36.963 "read": true, 00:23:36.963 "write": true, 00:23:36.963 "unmap": false, 00:23:36.963 "flush": true, 00:23:36.963 "reset": true, 00:23:36.963 "nvme_admin": true, 00:23:36.963 "nvme_io": true, 00:23:36.963 "nvme_io_md": false, 00:23:36.963 "write_zeroes": true, 00:23:36.963 "zcopy": false, 00:23:36.963 "get_zone_info": false, 00:23:36.963 "zone_management": false, 00:23:36.963 "zone_append": false, 00:23:36.963 "compare": true, 00:23:36.963 "compare_and_write": true, 00:23:36.963 "abort": true, 00:23:36.963 "seek_hole": false, 00:23:36.963 "seek_data": false, 00:23:36.963 "copy": true, 00:23:36.963 "nvme_iov_md": false 00:23:36.963 }, 00:23:36.963 "memory_domains": [ 00:23:36.963 { 00:23:36.963 "dma_device_id": "system", 00:23:36.963 "dma_device_type": 1 00:23:36.963 } 00:23:36.963 ], 00:23:36.963 "driver_specific": { 00:23:36.963 "nvme": [ 00:23:36.963 { 00:23:36.963 "trid": { 00:23:36.963 "trtype": "TCP", 00:23:36.963 "adrfam": "IPv4", 00:23:36.963 "traddr": "10.0.0.2", 00:23:36.963 "trsvcid": "4421", 00:23:36.963 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:36.963 }, 00:23:36.963 "ctrlr_data": { 00:23:36.963 "cntlid": 3, 00:23:36.963 "vendor_id": "0x8086", 00:23:36.963 "model_number": "SPDK bdev Controller", 00:23:36.963 "serial_number": "00000000000000000000", 00:23:36.963 "firmware_revision": "24.09", 00:23:36.963 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:36.963 "oacs": { 00:23:36.963 "security": 0, 00:23:36.963 "format": 0, 00:23:36.963 "firmware": 0, 00:23:36.963 "ns_manage": 0 00:23:36.963 }, 00:23:36.963 "multi_ctrlr": true, 00:23:36.963 "ana_reporting": false 00:23:36.963 }, 00:23:36.963 "vs": { 00:23:36.963 "nvme_version": "1.3" 00:23:36.963 }, 00:23:36.963 "ns_data": { 00:23:36.963 "id": 1, 00:23:36.963 "can_share": true 00:23:36.963 } 00:23:36.963 } 00:23:36.963 ], 00:23:36.963 "mp_policy": "active_passive" 00:23:36.963 } 00:23:36.963 } 00:23:36.963 ] 00:23:36.963 15:19:28 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:36.963 15:19:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:36.963 15:19:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:36.963 15:19:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:36.963 15:19:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:36.963 15:19:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.3ExXRRrl05 00:23:36.963 15:19:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:23:36.963 15:19:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:23:36.963 15:19:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:36.963 15:19:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:23:36.963 15:19:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:36.963 15:19:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:23:36.963 15:19:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:36.963 15:19:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:36.963 rmmod nvme_tcp 00:23:36.963 rmmod nvme_fabrics 00:23:36.963 rmmod nvme_keyring 00:23:36.963 15:19:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:36.963 15:19:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:23:36.963 15:19:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:23:36.963 15:19:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 338114 ']' 00:23:36.963 15:19:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 
338114 00:23:36.963 15:19:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@950 -- # '[' -z 338114 ']' 00:23:36.963 15:19:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # kill -0 338114 00:23:36.963 15:19:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # uname 00:23:36.964 15:19:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:36.964 15:19:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 338114 00:23:36.964 15:19:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:36.964 15:19:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:36.964 15:19:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 338114' 00:23:36.964 killing process with pid 338114 00:23:36.964 15:19:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@969 -- # kill 338114 00:23:36.964 [2024-07-25 15:19:29.116166] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:36.964 [2024-07-25 15:19:29.116193] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:36.964 15:19:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@974 -- # wait 338114 00:23:37.226 15:19:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:37.226 15:19:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:37.226 15:19:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:37.226 15:19:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 
00:23:37.226 15:19:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:37.226 15:19:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:37.226 15:19:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:37.226 15:19:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:39.141 15:19:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:39.141 00:23:39.141 real 0m11.234s 00:23:39.141 user 0m3.873s 00:23:39.141 sys 0m5.831s 00:23:39.141 15:19:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:39.141 15:19:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:39.141 ************************************ 00:23:39.141 END TEST nvmf_async_init 00:23:39.141 ************************************ 00:23:39.402 15:19:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:23:39.402 15:19:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:39.402 15:19:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:39.402 15:19:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.402 ************************************ 00:23:39.402 START TEST dma 00:23:39.402 ************************************ 00:23:39.402 15:19:31 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:23:39.402 * Looking for test storage... 
00:23:39.402 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:39.402 15:19:31 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:39.402 15:19:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:23:39.402 15:19:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:39.402 15:19:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:39.402 15:19:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:39.402 15:19:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:39.402 15:19:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:39.402 15:19:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:39.402 15:19:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:39.402 15:19:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:39.402 15:19:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:39.402 15:19:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:39.402 15:19:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:39.402 15:19:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:39.402 15:19:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:39.402 15:19:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:39.402 15:19:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:39.402 15:19:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:39.402 15:19:31 
nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:39.402 15:19:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:39.402 15:19:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:39.402 15:19:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:39.402 15:19:31 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:39.403 15:19:31 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:39.403 15:19:31 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:39.403 15:19:31 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:23:39.403 15:19:31 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:39.403 15:19:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@47 -- # : 0 00:23:39.403 15:19:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:39.403 15:19:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:39.403 15:19:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:39.403 15:19:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:39.403 15:19:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:39.403 15:19:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # 
'[' -n '' ']' 00:23:39.403 15:19:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:39.403 15:19:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:39.403 15:19:31 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:23:39.403 15:19:31 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:23:39.403 00:23:39.403 real 0m0.131s 00:23:39.403 user 0m0.071s 00:23:39.403 sys 0m0.068s 00:23:39.403 15:19:31 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:39.403 15:19:31 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:23:39.403 ************************************ 00:23:39.403 END TEST dma 00:23:39.403 ************************************ 00:23:39.403 15:19:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:23:39.403 15:19:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:39.403 15:19:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:39.403 15:19:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.665 ************************************ 00:23:39.665 START TEST nvmf_identify 00:23:39.665 ************************************ 00:23:39.665 15:19:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:23:39.665 * Looking for test storage... 
00:23:39.665 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:39.665 15:19:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:39.665 15:19:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:23:39.665 15:19:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:39.665 15:19:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:39.665 15:19:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:39.665 15:19:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:39.665 15:19:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:39.665 15:19:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:39.665 15:19:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:39.665 15:19:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:39.665 15:19:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:39.665 15:19:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:39.665 15:19:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:39.665 15:19:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:39.665 15:19:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:39.665 15:19:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:39.665 15:19:31 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:39.665 15:19:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:39.665 15:19:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:39.665 15:19:31 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:39.665 15:19:31 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:39.665 15:19:31 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:39.665 15:19:31 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:39.665 15:19:31 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:39.665 15:19:31 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:39.665 15:19:31 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:23:39.665 15:19:31 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:39.665 15:19:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:23:39.665 15:19:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:39.665 15:19:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:39.665 15:19:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:39.665 15:19:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:39.665 15:19:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:39.665 15:19:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:39.665 15:19:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:39.665 15:19:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:39.665 15:19:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:39.665 15:19:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:39.665 15:19:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:23:39.665 15:19:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:39.665 15:19:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # trap 
nvmftestfini SIGINT SIGTERM EXIT 00:23:39.665 15:19:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:39.665 15:19:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:39.665 15:19:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:39.665 15:19:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:39.665 15:19:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:39.665 15:19:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:39.665 15:19:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:39.665 15:19:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:39.665 15:19:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:23:39.665 15:19:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:47.816 15:19:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:47.816 15:19:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:23:47.816 15:19:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:47.816 15:19:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:47.816 15:19:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:47.816 15:19:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:47.816 15:19:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:47.816 15:19:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:23:47.816 15:19:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@295 -- # 
local -ga net_devs 00:23:47.816 15:19:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:23:47.816 15:19:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:23:47.816 15:19:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:23:47.816 15:19:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:23:47.816 15:19:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:23:47.816 15:19:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:23:47.816 15:19:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:47.816 15:19:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:47.816 15:19:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:47.816 15:19:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:47.816 15:19:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:47.816 15:19:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:47.816 15:19:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:47.816 15:19:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:47.816 15:19:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:47.816 15:19:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:47.816 15:19:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:47.816 15:19:38 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:47.816 15:19:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:47.816 15:19:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:47.816 15:19:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:47.816 15:19:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:47.816 15:19:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:47.816 15:19:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:47.816 15:19:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:47.816 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:47.816 15:19:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:47.816 15:19:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:47.816 15:19:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:47.816 15:19:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:47.816 15:19:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:47.816 15:19:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:47.816 15:19:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:47.816 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:47.816 15:19:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:47.816 15:19:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:47.816 15:19:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:23:47.816 15:19:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:47.816 15:19:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:47.816 15:19:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:47.816 15:19:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:47.816 15:19:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:47.816 15:19:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:47.816 15:19:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:47.816 15:19:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:47.816 15:19:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:47.816 15:19:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:47.816 15:19:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:47.816 15:19:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:47.816 15:19:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:47.816 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:47.816 15:19:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:47.816 15:19:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:47.816 15:19:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:47.816 15:19:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:47.816 15:19:38 nvmf_tcp.nvmf_host.nvmf_identify 
-- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:47.816 15:19:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:47.816 15:19:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:47.816 15:19:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:47.816 15:19:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:47.816 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:47.816 15:19:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:47.816 15:19:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:47.816 15:19:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:23:47.816 15:19:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:47.816 15:19:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:47.816 15:19:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:47.816 15:19:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:47.816 15:19:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:47.816 15:19:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:47.816 15:19:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:47.816 15:19:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:47.816 15:19:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:47.816 15:19:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:47.816 15:19:38 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:47.816 15:19:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:47.816 15:19:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:47.816 15:19:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:47.816 15:19:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:47.816 15:19:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:47.816 15:19:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:47.816 15:19:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:47.816 15:19:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:47.816 15:19:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:47.816 15:19:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:47.816 15:19:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:47.816 15:19:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:47.816 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:47.816 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.416 ms 00:23:47.816 00:23:47.816 --- 10.0.0.2 ping statistics --- 00:23:47.816 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:47.816 rtt min/avg/max/mdev = 0.416/0.416/0.416/0.000 ms 00:23:47.816 15:19:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:47.816 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:47.816 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.276 ms 00:23:47.816 00:23:47.816 --- 10.0.0.1 ping statistics --- 00:23:47.816 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:47.816 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:23:47.816 15:19:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:47.816 15:19:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:23:47.816 15:19:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:47.816 15:19:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:47.817 15:19:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:47.817 15:19:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:47.817 15:19:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:47.817 15:19:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:47.817 15:19:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:47.817 15:19:38 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:23:47.817 15:19:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:47.817 15:19:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 
00:23:47.817 15:19:38 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=342595 00:23:47.817 15:19:38 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:47.817 15:19:38 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:47.817 15:19:38 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 342595 00:23:47.817 15:19:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@831 -- # '[' -z 342595 ']' 00:23:47.817 15:19:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:47.817 15:19:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:47.817 15:19:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:47.817 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:47.817 15:19:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:47.817 15:19:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:47.817 [2024-07-25 15:19:38.931324] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:23:47.817 [2024-07-25 15:19:38.931372] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:47.817 EAL: No free 2048 kB hugepages reported on node 1 00:23:47.817 [2024-07-25 15:19:38.997023] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:47.817 [2024-07-25 15:19:39.063339] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:47.817 [2024-07-25 15:19:39.063391] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:47.817 [2024-07-25 15:19:39.063399] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:47.817 [2024-07-25 15:19:39.063406] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:47.817 [2024-07-25 15:19:39.063412] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:47.817 [2024-07-25 15:19:39.063552] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:47.817 [2024-07-25 15:19:39.063649] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:47.817 [2024-07-25 15:19:39.063802] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:47.817 [2024-07-25 15:19:39.063804] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:47.817 15:19:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:47.817 15:19:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # return 0 00:23:47.817 15:19:39 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:47.817 15:19:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:47.817 15:19:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:47.817 [2024-07-25 15:19:39.706028] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:47.817 15:19:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:47.817 15:19:39 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:23:47.817 15:19:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:47.817 15:19:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:47.817 15:19:39 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:47.817 15:19:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:47.817 15:19:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:47.817 Malloc0 00:23:47.817 15:19:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:47.817 15:19:39 
nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:47.817 15:19:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:47.817 15:19:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:47.817 15:19:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:47.817 15:19:39 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:23:47.817 15:19:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:47.817 15:19:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:47.817 15:19:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:47.817 15:19:39 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:47.817 15:19:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:47.817 15:19:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:47.817 [2024-07-25 15:19:39.805454] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:47.817 15:19:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:47.817 15:19:39 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:47.817 15:19:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:47.817 15:19:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:47.817 15:19:39 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:47.817 15:19:39 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:23:47.817 15:19:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:47.817 15:19:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:47.817 [ 00:23:47.817 { 00:23:47.817 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:47.817 "subtype": "Discovery", 00:23:47.817 "listen_addresses": [ 00:23:47.817 { 00:23:47.817 "trtype": "TCP", 00:23:47.817 "adrfam": "IPv4", 00:23:47.817 "traddr": "10.0.0.2", 00:23:47.817 "trsvcid": "4420" 00:23:47.817 } 00:23:47.817 ], 00:23:47.817 "allow_any_host": true, 00:23:47.817 "hosts": [] 00:23:47.817 }, 00:23:47.817 { 00:23:47.817 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:47.817 "subtype": "NVMe", 00:23:47.817 "listen_addresses": [ 00:23:47.817 { 00:23:47.817 "trtype": "TCP", 00:23:47.817 "adrfam": "IPv4", 00:23:47.817 "traddr": "10.0.0.2", 00:23:47.817 "trsvcid": "4420" 00:23:47.817 } 00:23:47.817 ], 00:23:47.817 "allow_any_host": true, 00:23:47.817 "hosts": [], 00:23:47.817 "serial_number": "SPDK00000000000001", 00:23:47.817 "model_number": "SPDK bdev Controller", 00:23:47.817 "max_namespaces": 32, 00:23:47.817 "min_cntlid": 1, 00:23:47.817 "max_cntlid": 65519, 00:23:47.817 "namespaces": [ 00:23:47.817 { 00:23:47.817 "nsid": 1, 00:23:47.817 "bdev_name": "Malloc0", 00:23:47.817 "name": "Malloc0", 00:23:47.817 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:23:47.817 "eui64": "ABCDEF0123456789", 00:23:47.817 "uuid": "3bf21805-52a6-456e-9b70-c8c18a4deaf9" 00:23:47.817 } 00:23:47.817 ] 00:23:47.817 } 00:23:47.817 ] 00:23:47.817 15:19:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:47.817 15:19:39 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:23:47.817 [2024-07-25 15:19:39.866291] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:23:47.817 [2024-07-25 15:19:39.866332] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid342830 ] 00:23:47.817 EAL: No free 2048 kB hugepages reported on node 1 00:23:47.817 [2024-07-25 15:19:39.897888] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:23:47.817 [2024-07-25 15:19:39.897935] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:23:47.817 [2024-07-25 15:19:39.897941] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:23:47.817 [2024-07-25 15:19:39.897953] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:23:47.817 [2024-07-25 15:19:39.897962] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:23:47.817 [2024-07-25 15:19:39.901234] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:23:47.817 [2024-07-25 15:19:39.901263] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x13a2ec0 0 00:23:47.817 [2024-07-25 15:19:39.909212] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:23:47.817 [2024-07-25 15:19:39.909230] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:23:47.817 [2024-07-25 15:19:39.909235] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 
00:23:47.818 [2024-07-25 15:19:39.909238] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:23:47.818 [2024-07-25 15:19:39.909277] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:47.818 [2024-07-25 15:19:39.909283] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:47.818 [2024-07-25 15:19:39.909288] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13a2ec0) 00:23:47.818 [2024-07-25 15:19:39.909303] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:23:47.818 [2024-07-25 15:19:39.909321] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1425e40, cid 0, qid 0 00:23:47.818 [2024-07-25 15:19:39.917214] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:47.818 [2024-07-25 15:19:39.917223] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:47.818 [2024-07-25 15:19:39.917227] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:47.818 [2024-07-25 15:19:39.917232] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1425e40) on tqpair=0x13a2ec0 00:23:47.818 [2024-07-25 15:19:39.917241] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:23:47.818 [2024-07-25 15:19:39.917248] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:23:47.818 [2024-07-25 15:19:39.917253] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:23:47.818 [2024-07-25 15:19:39.917267] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:47.818 [2024-07-25 15:19:39.917271] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:47.818 [2024-07-25 15:19:39.917274] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=0 on tqpair(0x13a2ec0) 00:23:47.818 [2024-07-25 15:19:39.917282] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.818 [2024-07-25 15:19:39.917295] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1425e40, cid 0, qid 0 00:23:47.818 [2024-07-25 15:19:39.917547] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:47.818 [2024-07-25 15:19:39.917556] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:47.818 [2024-07-25 15:19:39.917560] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:47.818 [2024-07-25 15:19:39.917564] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1425e40) on tqpair=0x13a2ec0 00:23:47.818 [2024-07-25 15:19:39.917573] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:23:47.818 [2024-07-25 15:19:39.917584] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:23:47.818 [2024-07-25 15:19:39.917592] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:47.818 [2024-07-25 15:19:39.917596] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:47.818 [2024-07-25 15:19:39.917600] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13a2ec0) 00:23:47.818 [2024-07-25 15:19:39.917608] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.818 [2024-07-25 15:19:39.917620] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1425e40, cid 0, qid 0 00:23:47.818 [2024-07-25 15:19:39.917787] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:47.818 [2024-07-25 15:19:39.917794] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:23:47.818 [2024-07-25 15:19:39.917797] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:47.818 [2024-07-25 15:19:39.917801] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1425e40) on tqpair=0x13a2ec0 00:23:47.818 [2024-07-25 15:19:39.917806] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:23:47.818 [2024-07-25 15:19:39.917814] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:23:47.818 [2024-07-25 15:19:39.917821] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:47.818 [2024-07-25 15:19:39.917824] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:47.818 [2024-07-25 15:19:39.917828] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13a2ec0) 00:23:47.818 [2024-07-25 15:19:39.917835] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.818 [2024-07-25 15:19:39.917846] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1425e40, cid 0, qid 0 00:23:47.818 [2024-07-25 15:19:39.918018] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:47.818 [2024-07-25 15:19:39.918025] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:47.818 [2024-07-25 15:19:39.918029] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:47.818 [2024-07-25 15:19:39.918033] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1425e40) on tqpair=0x13a2ec0 00:23:47.818 [2024-07-25 15:19:39.918038] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:23:47.818 [2024-07-25 15:19:39.918047] 
nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:47.818 [2024-07-25 15:19:39.918051] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:47.818 [2024-07-25 15:19:39.918054] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13a2ec0) 00:23:47.818 [2024-07-25 15:19:39.918061] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.818 [2024-07-25 15:19:39.918072] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1425e40, cid 0, qid 0 00:23:47.818 [2024-07-25 15:19:39.918267] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:47.818 [2024-07-25 15:19:39.918274] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:47.818 [2024-07-25 15:19:39.918278] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:47.818 [2024-07-25 15:19:39.918282] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1425e40) on tqpair=0x13a2ec0 00:23:47.818 [2024-07-25 15:19:39.918287] nvme_ctrlr.c:3873:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:23:47.818 [2024-07-25 15:19:39.918292] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:23:47.818 [2024-07-25 15:19:39.918302] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:23:47.818 [2024-07-25 15:19:39.918408] nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:23:47.818 [2024-07-25 15:19:39.918412] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 
15000 ms) 00:23:47.818 [2024-07-25 15:19:39.918421] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:47.818 [2024-07-25 15:19:39.918425] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:47.818 [2024-07-25 15:19:39.918429] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13a2ec0) 00:23:47.818 [2024-07-25 15:19:39.918436] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.818 [2024-07-25 15:19:39.918447] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1425e40, cid 0, qid 0 00:23:47.818 [2024-07-25 15:19:39.918694] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:47.818 [2024-07-25 15:19:39.918700] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:47.818 [2024-07-25 15:19:39.918704] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:47.818 [2024-07-25 15:19:39.918708] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1425e40) on tqpair=0x13a2ec0 00:23:47.818 [2024-07-25 15:19:39.918713] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:23:47.818 [2024-07-25 15:19:39.918721] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:47.818 [2024-07-25 15:19:39.918725] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:47.818 [2024-07-25 15:19:39.918729] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13a2ec0) 00:23:47.818 [2024-07-25 15:19:39.918735] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.818 [2024-07-25 15:19:39.918746] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1425e40, cid 0, qid 0 00:23:47.818 [2024-07-25 
15:19:39.918920] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:47.818 [2024-07-25 15:19:39.918927] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:47.818 [2024-07-25 15:19:39.918930] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:47.818 [2024-07-25 15:19:39.918934] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1425e40) on tqpair=0x13a2ec0 00:23:47.818 [2024-07-25 15:19:39.918938] nvme_ctrlr.c:3908:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:23:47.818 [2024-07-25 15:19:39.918943] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:23:47.818 [2024-07-25 15:19:39.918950] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:23:47.818 [2024-07-25 15:19:39.918963] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:23:47.818 [2024-07-25 15:19:39.918973] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:47.818 [2024-07-25 15:19:39.918977] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13a2ec0) 00:23:47.818 [2024-07-25 15:19:39.918984] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.818 [2024-07-25 15:19:39.918995] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1425e40, cid 0, qid 0 00:23:47.819 [2024-07-25 15:19:39.919185] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:47.819 [2024-07-25 15:19:39.919193] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 
00:23:47.819 [2024-07-25 15:19:39.919199] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:47.819 [2024-07-25 15:19:39.923213] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13a2ec0): datao=0, datal=4096, cccid=0 00:23:47.819 [2024-07-25 15:19:39.923219] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1425e40) on tqpair(0x13a2ec0): expected_datao=0, payload_size=4096 00:23:47.819 [2024-07-25 15:19:39.923224] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:47.819 [2024-07-25 15:19:39.923232] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:47.819 [2024-07-25 15:19:39.923236] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:47.819 [2024-07-25 15:19:39.923245] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:47.819 [2024-07-25 15:19:39.923251] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:47.819 [2024-07-25 15:19:39.923254] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:47.819 [2024-07-25 15:19:39.923258] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1425e40) on tqpair=0x13a2ec0 00:23:47.819 [2024-07-25 15:19:39.923266] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:23:47.819 [2024-07-25 15:19:39.923271] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:23:47.819 [2024-07-25 15:19:39.923275] nvme_ctrlr.c:2064:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:23:47.819 [2024-07-25 15:19:39.923280] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:23:47.819 [2024-07-25 15:19:39.923285] nvme_ctrlr.c:2103:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and 
write: 1 00:23:47.819 [2024-07-25 15:19:39.923290] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:23:47.819 [2024-07-25 15:19:39.923299] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:23:47.819 [2024-07-25 15:19:39.923310] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:47.819 [2024-07-25 15:19:39.923314] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:47.819 [2024-07-25 15:19:39.923317] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13a2ec0) 00:23:47.819 [2024-07-25 15:19:39.923325] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:47.819 [2024-07-25 15:19:39.923339] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1425e40, cid 0, qid 0 00:23:47.819 [2024-07-25 15:19:39.923597] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:47.819 [2024-07-25 15:19:39.923604] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:47.819 [2024-07-25 15:19:39.923608] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:47.819 [2024-07-25 15:19:39.923612] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1425e40) on tqpair=0x13a2ec0 00:23:47.819 [2024-07-25 15:19:39.923620] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:47.819 [2024-07-25 15:19:39.923623] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:47.819 [2024-07-25 15:19:39.923627] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13a2ec0) 00:23:47.819 [2024-07-25 15:19:39.923633] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 
cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:47.819 [2024-07-25 15:19:39.923640] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:47.819 [2024-07-25 15:19:39.923643] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:47.819 [2024-07-25 15:19:39.923647] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x13a2ec0) 00:23:47.819 [2024-07-25 15:19:39.923653] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:47.819 [2024-07-25 15:19:39.923662] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:47.819 [2024-07-25 15:19:39.923665] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:47.819 [2024-07-25 15:19:39.923669] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x13a2ec0) 00:23:47.819 [2024-07-25 15:19:39.923675] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:47.819 [2024-07-25 15:19:39.923680] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:47.819 [2024-07-25 15:19:39.923684] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:47.819 [2024-07-25 15:19:39.923687] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13a2ec0) 00:23:47.819 [2024-07-25 15:19:39.923693] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:47.819 [2024-07-25 15:19:39.923698] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:23:47.819 [2024-07-25 15:19:39.923709] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 
30000 ms) 00:23:47.819 [2024-07-25 15:19:39.923716] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:47.819 [2024-07-25 15:19:39.923719] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x13a2ec0) 00:23:47.819 [2024-07-25 15:19:39.923726] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.819 [2024-07-25 15:19:39.923740] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1425e40, cid 0, qid 0 00:23:47.819 [2024-07-25 15:19:39.923745] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1425fc0, cid 1, qid 0 00:23:47.819 [2024-07-25 15:19:39.923750] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1426140, cid 2, qid 0 00:23:47.819 [2024-07-25 15:19:39.923754] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14262c0, cid 3, qid 0 00:23:47.819 [2024-07-25 15:19:39.923759] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1426440, cid 4, qid 0 00:23:47.819 [2024-07-25 15:19:39.924050] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:47.819 [2024-07-25 15:19:39.924057] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:47.819 [2024-07-25 15:19:39.924061] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:47.819 [2024-07-25 15:19:39.924065] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1426440) on tqpair=0x13a2ec0 00:23:47.819 [2024-07-25 15:19:39.924070] nvme_ctrlr.c:3026:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:23:47.819 [2024-07-25 15:19:39.924075] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:23:47.819 [2024-07-25 15:19:39.924086] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:47.819 [2024-07-25 15:19:39.924090] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x13a2ec0) 00:23:47.819 [2024-07-25 15:19:39.924097] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.819 [2024-07-25 15:19:39.924107] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1426440, cid 4, qid 0 00:23:47.819 [2024-07-25 15:19:39.924404] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:47.819 [2024-07-25 15:19:39.924411] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:47.819 [2024-07-25 15:19:39.924415] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:47.819 [2024-07-25 15:19:39.924419] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13a2ec0): datao=0, datal=4096, cccid=4 00:23:47.819 [2024-07-25 15:19:39.924426] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1426440) on tqpair(0x13a2ec0): expected_datao=0, payload_size=4096 00:23:47.819 [2024-07-25 15:19:39.924430] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:47.819 [2024-07-25 15:19:39.924574] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:47.819 [2024-07-25 15:19:39.924578] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:47.819 [2024-07-25 15:19:39.967210] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:47.819 [2024-07-25 15:19:39.967220] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:47.819 [2024-07-25 15:19:39.967224] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:47.819 [2024-07-25 15:19:39.967227] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1426440) on tqpair=0x13a2ec0 00:23:47.819 [2024-07-25 15:19:39.967241] 
nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:23:47.819 [2024-07-25 15:19:39.967264] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:47.819 [2024-07-25 15:19:39.967268] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x13a2ec0) 00:23:47.819 [2024-07-25 15:19:39.967276] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.819 [2024-07-25 15:19:39.967283] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:47.819 [2024-07-25 15:19:39.967287] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:47.819 [2024-07-25 15:19:39.967290] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x13a2ec0) 00:23:47.819 [2024-07-25 15:19:39.967296] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:23:47.819 [2024-07-25 15:19:39.967311] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1426440, cid 4, qid 0 00:23:47.819 [2024-07-25 15:19:39.967317] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14265c0, cid 5, qid 0 00:23:47.819 [2024-07-25 15:19:39.967601] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:47.819 [2024-07-25 15:19:39.967609] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:47.819 [2024-07-25 15:19:39.967612] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:47.819 [2024-07-25 15:19:39.967616] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13a2ec0): datao=0, datal=1024, cccid=4 00:23:47.819 [2024-07-25 15:19:39.967620] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1426440) on tqpair(0x13a2ec0): expected_datao=0, 
payload_size=1024 00:23:47.819 [2024-07-25 15:19:39.967624] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:47.819 [2024-07-25 15:19:39.967631] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:47.819 [2024-07-25 15:19:39.967635] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:47.819 [2024-07-25 15:19:39.967640] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:47.820 [2024-07-25 15:19:39.967646] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:47.820 [2024-07-25 15:19:39.967650] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:47.820 [2024-07-25 15:19:39.967653] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14265c0) on tqpair=0x13a2ec0 00:23:48.085 [2024-07-25 15:19:40.008426] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:48.085 [2024-07-25 15:19:40.008442] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:48.085 [2024-07-25 15:19:40.008446] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:48.085 [2024-07-25 15:19:40.008451] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1426440) on tqpair=0x13a2ec0 00:23:48.085 [2024-07-25 15:19:40.008469] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:48.085 [2024-07-25 15:19:40.008473] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x13a2ec0) 00:23:48.085 [2024-07-25 15:19:40.008481] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.085 [2024-07-25 15:19:40.008504] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1426440, cid 4, qid 0 00:23:48.085 [2024-07-25 15:19:40.008744] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:48.085 [2024-07-25 15:19:40.008751] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:48.085 [2024-07-25 15:19:40.008755] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:48.085 [2024-07-25 15:19:40.008759] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13a2ec0): datao=0, datal=3072, cccid=4 00:23:48.085 [2024-07-25 15:19:40.008763] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1426440) on tqpair(0x13a2ec0): expected_datao=0, payload_size=3072 00:23:48.085 [2024-07-25 15:19:40.008767] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:48.085 [2024-07-25 15:19:40.008774] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:48.085 [2024-07-25 15:19:40.008778] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:48.085 [2024-07-25 15:19:40.008910] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:48.085 [2024-07-25 15:19:40.008916] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:48.085 [2024-07-25 15:19:40.008920] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:48.085 [2024-07-25 15:19:40.008923] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1426440) on tqpair=0x13a2ec0 00:23:48.085 [2024-07-25 15:19:40.008932] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:48.085 [2024-07-25 15:19:40.008936] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x13a2ec0) 00:23:48.085 [2024-07-25 15:19:40.008943] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.085 [2024-07-25 15:19:40.008957] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1426440, cid 4, qid 0 00:23:48.085 [2024-07-25 15:19:40.009211] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:48.085 [2024-07-25 
15:19:40.009218] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:48.085 [2024-07-25 15:19:40.009222] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:48.085 [2024-07-25 15:19:40.009225] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13a2ec0): datao=0, datal=8, cccid=4 00:23:48.085 [2024-07-25 15:19:40.009230] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1426440) on tqpair(0x13a2ec0): expected_datao=0, payload_size=8 00:23:48.085 [2024-07-25 15:19:40.009234] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:48.085 [2024-07-25 15:19:40.009241] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:48.085 [2024-07-25 15:19:40.009245] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:48.085 [2024-07-25 15:19:40.049558] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:48.085 [2024-07-25 15:19:40.049569] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:48.085 [2024-07-25 15:19:40.049572] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:48.085 [2024-07-25 15:19:40.049576] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1426440) on tqpair=0x13a2ec0 00:23:48.085 ===================================================== 00:23:48.085 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:23:48.085 ===================================================== 00:23:48.085 Controller Capabilities/Features 00:23:48.085 ================================ 00:23:48.085 Vendor ID: 0000 00:23:48.086 Subsystem Vendor ID: 0000 00:23:48.086 Serial Number: .................... 00:23:48.086 Model Number: ........................................ 
00:23:48.086 Firmware Version: 24.09 00:23:48.086 Recommended Arb Burst: 0 00:23:48.086 IEEE OUI Identifier: 00 00 00 00:23:48.086 Multi-path I/O 00:23:48.086 May have multiple subsystem ports: No 00:23:48.086 May have multiple controllers: No 00:23:48.086 Associated with SR-IOV VF: No 00:23:48.086 Max Data Transfer Size: 131072 00:23:48.086 Max Number of Namespaces: 0 00:23:48.086 Max Number of I/O Queues: 1024 00:23:48.086 NVMe Specification Version (VS): 1.3 00:23:48.086 NVMe Specification Version (Identify): 1.3 00:23:48.086 Maximum Queue Entries: 128 00:23:48.086 Contiguous Queues Required: Yes 00:23:48.086 Arbitration Mechanisms Supported 00:23:48.086 Weighted Round Robin: Not Supported 00:23:48.086 Vendor Specific: Not Supported 00:23:48.086 Reset Timeout: 15000 ms 00:23:48.086 Doorbell Stride: 4 bytes 00:23:48.086 NVM Subsystem Reset: Not Supported 00:23:48.086 Command Sets Supported 00:23:48.086 NVM Command Set: Supported 00:23:48.086 Boot Partition: Not Supported 00:23:48.086 Memory Page Size Minimum: 4096 bytes 00:23:48.086 Memory Page Size Maximum: 4096 bytes 00:23:48.086 Persistent Memory Region: Not Supported 00:23:48.086 Optional Asynchronous Events Supported 00:23:48.086 Namespace Attribute Notices: Not Supported 00:23:48.086 Firmware Activation Notices: Not Supported 00:23:48.086 ANA Change Notices: Not Supported 00:23:48.086 PLE Aggregate Log Change Notices: Not Supported 00:23:48.086 LBA Status Info Alert Notices: Not Supported 00:23:48.086 EGE Aggregate Log Change Notices: Not Supported 00:23:48.086 Normal NVM Subsystem Shutdown event: Not Supported 00:23:48.086 Zone Descriptor Change Notices: Not Supported 00:23:48.086 Discovery Log Change Notices: Supported 00:23:48.086 Controller Attributes 00:23:48.086 128-bit Host Identifier: Not Supported 00:23:48.086 Non-Operational Permissive Mode: Not Supported 00:23:48.086 NVM Sets: Not Supported 00:23:48.086 Read Recovery Levels: Not Supported 00:23:48.086 Endurance Groups: Not Supported 00:23:48.086 
Predictable Latency Mode: Not Supported 00:23:48.086 Traffic Based Keep ALive: Not Supported 00:23:48.086 Namespace Granularity: Not Supported 00:23:48.086 SQ Associations: Not Supported 00:23:48.086 UUID List: Not Supported 00:23:48.086 Multi-Domain Subsystem: Not Supported 00:23:48.086 Fixed Capacity Management: Not Supported 00:23:48.086 Variable Capacity Management: Not Supported 00:23:48.086 Delete Endurance Group: Not Supported 00:23:48.086 Delete NVM Set: Not Supported 00:23:48.086 Extended LBA Formats Supported: Not Supported 00:23:48.086 Flexible Data Placement Supported: Not Supported 00:23:48.086 00:23:48.086 Controller Memory Buffer Support 00:23:48.086 ================================ 00:23:48.086 Supported: No 00:23:48.086 00:23:48.086 Persistent Memory Region Support 00:23:48.086 ================================ 00:23:48.086 Supported: No 00:23:48.086 00:23:48.086 Admin Command Set Attributes 00:23:48.086 ============================ 00:23:48.086 Security Send/Receive: Not Supported 00:23:48.086 Format NVM: Not Supported 00:23:48.086 Firmware Activate/Download: Not Supported 00:23:48.086 Namespace Management: Not Supported 00:23:48.086 Device Self-Test: Not Supported 00:23:48.086 Directives: Not Supported 00:23:48.086 NVMe-MI: Not Supported 00:23:48.086 Virtualization Management: Not Supported 00:23:48.086 Doorbell Buffer Config: Not Supported 00:23:48.086 Get LBA Status Capability: Not Supported 00:23:48.086 Command & Feature Lockdown Capability: Not Supported 00:23:48.086 Abort Command Limit: 1 00:23:48.086 Async Event Request Limit: 4 00:23:48.086 Number of Firmware Slots: N/A 00:23:48.086 Firmware Slot 1 Read-Only: N/A 00:23:48.086 Firmware Activation Without Reset: N/A 00:23:48.086 Multiple Update Detection Support: N/A 00:23:48.086 Firmware Update Granularity: No Information Provided 00:23:48.086 Per-Namespace SMART Log: No 00:23:48.086 Asymmetric Namespace Access Log Page: Not Supported 00:23:48.086 Subsystem NQN: 
nqn.2014-08.org.nvmexpress.discovery 00:23:48.086 Command Effects Log Page: Not Supported 00:23:48.086 Get Log Page Extended Data: Supported 00:23:48.086 Telemetry Log Pages: Not Supported 00:23:48.086 Persistent Event Log Pages: Not Supported 00:23:48.086 Supported Log Pages Log Page: May Support 00:23:48.086 Commands Supported & Effects Log Page: Not Supported 00:23:48.086 Feature Identifiers & Effects Log Page:May Support 00:23:48.086 NVMe-MI Commands & Effects Log Page: May Support 00:23:48.086 Data Area 4 for Telemetry Log: Not Supported 00:23:48.086 Error Log Page Entries Supported: 128 00:23:48.086 Keep Alive: Not Supported 00:23:48.086 00:23:48.086 NVM Command Set Attributes 00:23:48.086 ========================== 00:23:48.086 Submission Queue Entry Size 00:23:48.086 Max: 1 00:23:48.086 Min: 1 00:23:48.086 Completion Queue Entry Size 00:23:48.086 Max: 1 00:23:48.086 Min: 1 00:23:48.086 Number of Namespaces: 0 00:23:48.086 Compare Command: Not Supported 00:23:48.086 Write Uncorrectable Command: Not Supported 00:23:48.086 Dataset Management Command: Not Supported 00:23:48.086 Write Zeroes Command: Not Supported 00:23:48.086 Set Features Save Field: Not Supported 00:23:48.086 Reservations: Not Supported 00:23:48.086 Timestamp: Not Supported 00:23:48.086 Copy: Not Supported 00:23:48.086 Volatile Write Cache: Not Present 00:23:48.086 Atomic Write Unit (Normal): 1 00:23:48.086 Atomic Write Unit (PFail): 1 00:23:48.086 Atomic Compare & Write Unit: 1 00:23:48.086 Fused Compare & Write: Supported 00:23:48.086 Scatter-Gather List 00:23:48.086 SGL Command Set: Supported 00:23:48.086 SGL Keyed: Supported 00:23:48.086 SGL Bit Bucket Descriptor: Not Supported 00:23:48.086 SGL Metadata Pointer: Not Supported 00:23:48.086 Oversized SGL: Not Supported 00:23:48.086 SGL Metadata Address: Not Supported 00:23:48.086 SGL Offset: Supported 00:23:48.086 Transport SGL Data Block: Not Supported 00:23:48.086 Replay Protected Memory Block: Not Supported 00:23:48.086 00:23:48.086 
Firmware Slot Information 00:23:48.086 ========================= 00:23:48.086 Active slot: 0 00:23:48.086 00:23:48.086 00:23:48.086 Error Log 00:23:48.086 ========= 00:23:48.086 00:23:48.086 Active Namespaces 00:23:48.086 ================= 00:23:48.086 Discovery Log Page 00:23:48.086 ================== 00:23:48.086 Generation Counter: 2 00:23:48.086 Number of Records: 2 00:23:48.086 Record Format: 0 00:23:48.086 00:23:48.086 Discovery Log Entry 0 00:23:48.086 ---------------------- 00:23:48.086 Transport Type: 3 (TCP) 00:23:48.086 Address Family: 1 (IPv4) 00:23:48.086 Subsystem Type: 3 (Current Discovery Subsystem) 00:23:48.086 Entry Flags: 00:23:48.086 Duplicate Returned Information: 1 00:23:48.086 Explicit Persistent Connection Support for Discovery: 1 00:23:48.086 Transport Requirements: 00:23:48.086 Secure Channel: Not Required 00:23:48.086 Port ID: 0 (0x0000) 00:23:48.086 Controller ID: 65535 (0xffff) 00:23:48.086 Admin Max SQ Size: 128 00:23:48.086 Transport Service Identifier: 4420 00:23:48.086 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:23:48.086 Transport Address: 10.0.0.2 00:23:48.086 Discovery Log Entry 1 00:23:48.086 ---------------------- 00:23:48.086 Transport Type: 3 (TCP) 00:23:48.086 Address Family: 1 (IPv4) 00:23:48.086 Subsystem Type: 2 (NVM Subsystem) 00:23:48.086 Entry Flags: 00:23:48.086 Duplicate Returned Information: 0 00:23:48.086 Explicit Persistent Connection Support for Discovery: 0 00:23:48.086 Transport Requirements: 00:23:48.086 Secure Channel: Not Required 00:23:48.086 Port ID: 0 (0x0000) 00:23:48.086 Controller ID: 65535 (0xffff) 00:23:48.086 Admin Max SQ Size: 128 00:23:48.086 Transport Service Identifier: 4420 00:23:48.086 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:23:48.086 Transport Address: 10.0.0.2 [2024-07-25 15:19:40.049663] nvme_ctrlr.c:4361:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:23:48.086 [2024-07-25 15:19:40.049674] 
nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1425e40) on tqpair=0x13a2ec0 00:23:48.086 [2024-07-25 15:19:40.049681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.086 [2024-07-25 15:19:40.049687] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1425fc0) on tqpair=0x13a2ec0 00:23:48.086 [2024-07-25 15:19:40.049691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.086 [2024-07-25 15:19:40.049696] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1426140) on tqpair=0x13a2ec0 00:23:48.086 [2024-07-25 15:19:40.049702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.086 [2024-07-25 15:19:40.049706] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14262c0) on tqpair=0x13a2ec0 00:23:48.086 [2024-07-25 15:19:40.049711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.086 [2024-07-25 15:19:40.049722] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:48.086 [2024-07-25 15:19:40.049726] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:48.086 [2024-07-25 15:19:40.049730] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13a2ec0) 00:23:48.086 [2024-07-25 15:19:40.049737] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.086 [2024-07-25 15:19:40.049751] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14262c0, cid 3, qid 0 00:23:48.086 [2024-07-25 15:19:40.049989] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:48.086 [2024-07-25 15:19:40.049997] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:48.086 [2024-07-25 15:19:40.050001] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:48.086 [2024-07-25 15:19:40.050005] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14262c0) on tqpair=0x13a2ec0 00:23:48.087 [2024-07-25 15:19:40.050012] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:48.087 [2024-07-25 15:19:40.050016] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:48.087 [2024-07-25 15:19:40.050020] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13a2ec0) 00:23:48.087 [2024-07-25 15:19:40.050026] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.087 [2024-07-25 15:19:40.050041] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14262c0, cid 3, qid 0 00:23:48.087 [2024-07-25 15:19:40.050223] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:48.087 [2024-07-25 15:19:40.050230] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:48.087 [2024-07-25 15:19:40.050234] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:48.087 [2024-07-25 15:19:40.050237] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14262c0) on tqpair=0x13a2ec0 00:23:48.087 [2024-07-25 15:19:40.050242] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:23:48.087 [2024-07-25 15:19:40.050247] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:23:48.087 [2024-07-25 15:19:40.050256] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:48.087 [2024-07-25 15:19:40.050260] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:48.087 [2024-07-25 
15:19:40.050264] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13a2ec0) 00:23:48.087 [2024-07-25 15:19:40.050270] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.087 [2024-07-25 15:19:40.050281] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14262c0, cid 3, qid 0 00:23:48.087 [2024-07-25 15:19:40.050438] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:48.087 [2024-07-25 15:19:40.050445] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:48.087 [2024-07-25 15:19:40.050448] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:48.087 [2024-07-25 15:19:40.050452] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14262c0) on tqpair=0x13a2ec0 00:23:48.087 [2024-07-25 15:19:40.050462] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:48.087 [2024-07-25 15:19:40.050466] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:48.087 [2024-07-25 15:19:40.050469] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13a2ec0) 00:23:48.087 [2024-07-25 15:19:40.050476] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.087 [2024-07-25 15:19:40.050489] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14262c0, cid 3, qid 0 00:23:48.087 [2024-07-25 15:19:40.050748] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:48.087 [2024-07-25 15:19:40.050755] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:48.087 [2024-07-25 15:19:40.050758] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:48.087 [2024-07-25 15:19:40.050762] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14262c0) on tqpair=0x13a2ec0 
00:23:48.087 [2024-07-25 15:19:40.050772] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:48.087 [2024-07-25 15:19:40.050776] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:48.087 [2024-07-25 15:19:40.050779] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13a2ec0) 00:23:48.087 [2024-07-25 15:19:40.050785] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.087 [2024-07-25 15:19:40.050795] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14262c0, cid 3, qid 0 00:23:48.087 [2024-07-25 15:19:40.054211] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:48.087 [2024-07-25 15:19:40.054222] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:48.087 [2024-07-25 15:19:40.054225] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:48.087 [2024-07-25 15:19:40.054229] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14262c0) on tqpair=0x13a2ec0 00:23:48.087 [2024-07-25 15:19:40.054240] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:48.087 [2024-07-25 15:19:40.054244] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:48.087 [2024-07-25 15:19:40.054247] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13a2ec0) 00:23:48.087 [2024-07-25 15:19:40.054254] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.087 [2024-07-25 15:19:40.054267] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14262c0, cid 3, qid 0 00:23:48.087 [2024-07-25 15:19:40.054507] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:48.087 [2024-07-25 15:19:40.054514] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:48.087 
[2024-07-25 15:19:40.054518] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:48.087 [2024-07-25 15:19:40.054521] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14262c0) on tqpair=0x13a2ec0 00:23:48.087 [2024-07-25 15:19:40.054529] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 4 milliseconds 00:23:48.087 00:23:48.087 15:19:40 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:23:48.087 [2024-07-25 15:19:40.095651] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:23:48.087 [2024-07-25 15:19:40.095693] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid342946 ] 00:23:48.087 EAL: No free 2048 kB hugepages reported on node 1 00:23:48.087 [2024-07-25 15:19:40.131809] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:23:48.087 [2024-07-25 15:19:40.131852] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:23:48.087 [2024-07-25 15:19:40.131857] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:23:48.087 [2024-07-25 15:19:40.131871] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:23:48.087 [2024-07-25 15:19:40.131879] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:23:48.087 [2024-07-25 15:19:40.132438] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 
00:23:48.087 [2024-07-25 15:19:40.132461] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xb87ec0 0 00:23:48.087 [2024-07-25 15:19:40.147209] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:23:48.087 [2024-07-25 15:19:40.147226] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:23:48.087 [2024-07-25 15:19:40.147230] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:23:48.087 [2024-07-25 15:19:40.147234] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:23:48.087 [2024-07-25 15:19:40.147266] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:48.087 [2024-07-25 15:19:40.147271] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:48.087 [2024-07-25 15:19:40.147275] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb87ec0) 00:23:48.087 [2024-07-25 15:19:40.147286] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:23:48.087 [2024-07-25 15:19:40.147302] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc0ae40, cid 0, qid 0 00:23:48.087 [2024-07-25 15:19:40.155211] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:48.087 [2024-07-25 15:19:40.155220] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:48.087 [2024-07-25 15:19:40.155223] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:48.087 [2024-07-25 15:19:40.155228] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc0ae40) on tqpair=0xb87ec0 00:23:48.087 [2024-07-25 15:19:40.155236] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:23:48.087 [2024-07-25 15:19:40.155243] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:23:48.087 [2024-07-25 
15:19:40.155248] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:23:48.087 [2024-07-25 15:19:40.155259] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:48.087 [2024-07-25 15:19:40.155263] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:48.087 [2024-07-25 15:19:40.155267] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb87ec0) 00:23:48.087 [2024-07-25 15:19:40.155275] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.087 [2024-07-25 15:19:40.155287] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc0ae40, cid 0, qid 0 00:23:48.087 [2024-07-25 15:19:40.155525] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:48.087 [2024-07-25 15:19:40.155532] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:48.087 [2024-07-25 15:19:40.155535] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:48.087 [2024-07-25 15:19:40.155539] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc0ae40) on tqpair=0xb87ec0 00:23:48.087 [2024-07-25 15:19:40.155547] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:23:48.087 [2024-07-25 15:19:40.155555] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:23:48.087 [2024-07-25 15:19:40.155562] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:48.087 [2024-07-25 15:19:40.155565] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:48.087 [2024-07-25 15:19:40.155569] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb87ec0) 00:23:48.087 [2024-07-25 15:19:40.155576] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.087 [2024-07-25 15:19:40.155590] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc0ae40, cid 0, qid 0 00:23:48.087 [2024-07-25 15:19:40.155842] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:48.087 [2024-07-25 15:19:40.155848] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:48.087 [2024-07-25 15:19:40.155851] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:48.087 [2024-07-25 15:19:40.155855] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc0ae40) on tqpair=0xb87ec0 00:23:48.087 [2024-07-25 15:19:40.155861] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:23:48.087 [2024-07-25 15:19:40.155868] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:23:48.087 [2024-07-25 15:19:40.155875] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:48.087 [2024-07-25 15:19:40.155878] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:48.087 [2024-07-25 15:19:40.155882] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb87ec0) 00:23:48.087 [2024-07-25 15:19:40.155889] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.087 [2024-07-25 15:19:40.155899] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc0ae40, cid 0, qid 0 00:23:48.087 [2024-07-25 15:19:40.156143] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:48.087 [2024-07-25 15:19:40.156149] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:48.087 [2024-07-25 15:19:40.156152] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:48.087 [2024-07-25 15:19:40.156156] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc0ae40) on tqpair=0xb87ec0 00:23:48.087 [2024-07-25 15:19:40.156161] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:23:48.087 [2024-07-25 15:19:40.156170] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:48.087 [2024-07-25 15:19:40.156174] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:48.087 [2024-07-25 15:19:40.156178] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb87ec0) 00:23:48.087 [2024-07-25 15:19:40.156184] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.088 [2024-07-25 15:19:40.156195] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc0ae40, cid 0, qid 0 00:23:48.088 [2024-07-25 15:19:40.156409] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:48.088 [2024-07-25 15:19:40.156416] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:48.088 [2024-07-25 15:19:40.156420] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:48.088 [2024-07-25 15:19:40.156424] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc0ae40) on tqpair=0xb87ec0 00:23:48.088 [2024-07-25 15:19:40.156429] nvme_ctrlr.c:3873:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:23:48.088 [2024-07-25 15:19:40.156434] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:23:48.088 [2024-07-25 15:19:40.156441] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable 
controller by writing CC.EN = 1 (timeout 15000 ms) 00:23:48.088 [2024-07-25 15:19:40.156547] nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:23:48.088 [2024-07-25 15:19:40.156550] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:23:48.088 [2024-07-25 15:19:40.156559] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:48.088 [2024-07-25 15:19:40.156562] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:48.088 [2024-07-25 15:19:40.156566] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb87ec0) 00:23:48.088 [2024-07-25 15:19:40.156575] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.088 [2024-07-25 15:19:40.156587] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc0ae40, cid 0, qid 0 00:23:48.088 [2024-07-25 15:19:40.156816] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:48.088 [2024-07-25 15:19:40.156822] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:48.088 [2024-07-25 15:19:40.156825] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:48.088 [2024-07-25 15:19:40.156829] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc0ae40) on tqpair=0xb87ec0 00:23:48.088 [2024-07-25 15:19:40.156834] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:23:48.088 [2024-07-25 15:19:40.156843] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:48.088 [2024-07-25 15:19:40.156847] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:48.088 [2024-07-25 15:19:40.156850] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=0 on tqpair(0xb87ec0) 00:23:48.088 [2024-07-25 15:19:40.156857] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.088 [2024-07-25 15:19:40.156867] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc0ae40, cid 0, qid 0 00:23:48.088 [2024-07-25 15:19:40.157108] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:48.088 [2024-07-25 15:19:40.157114] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:48.088 [2024-07-25 15:19:40.157117] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:48.088 [2024-07-25 15:19:40.157121] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc0ae40) on tqpair=0xb87ec0 00:23:48.088 [2024-07-25 15:19:40.157125] nvme_ctrlr.c:3908:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:23:48.088 [2024-07-25 15:19:40.157130] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:23:48.088 [2024-07-25 15:19:40.157138] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:23:48.088 [2024-07-25 15:19:40.157146] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:23:48.088 [2024-07-25 15:19:40.157155] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:48.088 [2024-07-25 15:19:40.157159] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb87ec0) 00:23:48.088 [2024-07-25 15:19:40.157166] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.088 
[2024-07-25 15:19:40.157176] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc0ae40, cid 0, qid 0 00:23:48.088 [2024-07-25 15:19:40.157456] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:48.088 [2024-07-25 15:19:40.157464] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:48.088 [2024-07-25 15:19:40.157467] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:48.088 [2024-07-25 15:19:40.157471] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb87ec0): datao=0, datal=4096, cccid=0 00:23:48.088 [2024-07-25 15:19:40.157475] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc0ae40) on tqpair(0xb87ec0): expected_datao=0, payload_size=4096 00:23:48.088 [2024-07-25 15:19:40.157480] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:48.088 [2024-07-25 15:19:40.157487] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:48.088 [2024-07-25 15:19:40.157491] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:48.088 [2024-07-25 15:19:40.157670] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:48.088 [2024-07-25 15:19:40.157676] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:48.088 [2024-07-25 15:19:40.157683] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:48.088 [2024-07-25 15:19:40.157687] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc0ae40) on tqpair=0xb87ec0 00:23:48.088 [2024-07-25 15:19:40.157694] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:23:48.088 [2024-07-25 15:19:40.157699] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:23:48.088 [2024-07-25 15:19:40.157703] nvme_ctrlr.c:2064:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 
00:23:48.088 [2024-07-25 15:19:40.157708] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:23:48.088 [2024-07-25 15:19:40.157712] nvme_ctrlr.c:2103:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:23:48.088 [2024-07-25 15:19:40.157717] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:23:48.088 [2024-07-25 15:19:40.157725] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:23:48.088 [2024-07-25 15:19:40.157734] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:48.088 [2024-07-25 15:19:40.157739] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:48.088 [2024-07-25 15:19:40.157742] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb87ec0) 00:23:48.088 [2024-07-25 15:19:40.157749] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:48.088 [2024-07-25 15:19:40.157761] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc0ae40, cid 0, qid 0 00:23:48.088 [2024-07-25 15:19:40.157990] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:48.088 [2024-07-25 15:19:40.157997] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:48.088 [2024-07-25 15:19:40.158000] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:48.088 [2024-07-25 15:19:40.158004] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc0ae40) on tqpair=0xb87ec0 00:23:48.088 [2024-07-25 15:19:40.158011] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:48.088 [2024-07-25 15:19:40.158015] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:23:48.088 [2024-07-25 15:19:40.158018] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb87ec0) 00:23:48.088 [2024-07-25 15:19:40.158025] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:48.088 [2024-07-25 15:19:40.158031] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:48.088 [2024-07-25 15:19:40.158034] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:48.088 [2024-07-25 15:19:40.158038] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xb87ec0) 00:23:48.088 [2024-07-25 15:19:40.158043] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:48.088 [2024-07-25 15:19:40.158049] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:48.088 [2024-07-25 15:19:40.158053] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:48.088 [2024-07-25 15:19:40.158056] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xb87ec0) 00:23:48.088 [2024-07-25 15:19:40.158062] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:48.088 [2024-07-25 15:19:40.158068] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:48.088 [2024-07-25 15:19:40.158071] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:48.088 [2024-07-25 15:19:40.158075] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb87ec0) 00:23:48.088 [2024-07-25 15:19:40.158080] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:48.088 [2024-07-25 15:19:40.158088] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:23:48.088 [2024-07-25 15:19:40.158098] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:23:48.088 [2024-07-25 15:19:40.158105] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:48.088 [2024-07-25 15:19:40.158108] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb87ec0) 00:23:48.088 [2024-07-25 15:19:40.158115] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.088 [2024-07-25 15:19:40.158127] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc0ae40, cid 0, qid 0 00:23:48.088 [2024-07-25 15:19:40.158133] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc0afc0, cid 1, qid 0 00:23:48.088 [2024-07-25 15:19:40.158137] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc0b140, cid 2, qid 0 00:23:48.088 [2024-07-25 15:19:40.158142] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc0b2c0, cid 3, qid 0 00:23:48.088 [2024-07-25 15:19:40.158147] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc0b440, cid 4, qid 0 00:23:48.088 [2024-07-25 15:19:40.158379] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:48.088 [2024-07-25 15:19:40.158386] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:48.088 [2024-07-25 15:19:40.158389] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:48.088 [2024-07-25 15:19:40.158393] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc0b440) on tqpair=0xb87ec0 00:23:48.088 [2024-07-25 15:19:40.158399] nvme_ctrlr.c:3026:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 
5000000 us 00:23:48.088 [2024-07-25 15:19:40.158404] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:23:48.088 [2024-07-25 15:19:40.158414] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:23:48.088 [2024-07-25 15:19:40.158421] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:23:48.088 [2024-07-25 15:19:40.158427] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:48.088 [2024-07-25 15:19:40.158431] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:48.088 [2024-07-25 15:19:40.158435] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb87ec0) 00:23:48.088 [2024-07-25 15:19:40.158441] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:48.088 [2024-07-25 15:19:40.158452] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc0b440, cid 4, qid 0 00:23:48.088 [2024-07-25 15:19:40.162209] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:48.088 [2024-07-25 15:19:40.162219] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:48.088 [2024-07-25 15:19:40.162223] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:48.088 [2024-07-25 15:19:40.162227] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc0b440) on tqpair=0xb87ec0 00:23:48.088 [2024-07-25 15:19:40.162293] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:23:48.089 [2024-07-25 15:19:40.162303] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting 
state to wait for identify active ns (timeout 30000 ms) 00:23:48.089 [2024-07-25 15:19:40.162311] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:48.089 [2024-07-25 15:19:40.162314] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb87ec0) 00:23:48.089 [2024-07-25 15:19:40.162321] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.089 [2024-07-25 15:19:40.162336] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc0b440, cid 4, qid 0 00:23:48.089 [2024-07-25 15:19:40.162547] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:48.089 [2024-07-25 15:19:40.162554] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:48.089 [2024-07-25 15:19:40.162557] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:48.089 [2024-07-25 15:19:40.162561] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb87ec0): datao=0, datal=4096, cccid=4 00:23:48.089 [2024-07-25 15:19:40.162565] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc0b440) on tqpair(0xb87ec0): expected_datao=0, payload_size=4096 00:23:48.089 [2024-07-25 15:19:40.162570] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:48.089 [2024-07-25 15:19:40.162636] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:48.089 [2024-07-25 15:19:40.162640] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:48.089 [2024-07-25 15:19:40.162972] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:48.089 [2024-07-25 15:19:40.162978] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:48.089 [2024-07-25 15:19:40.162981] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:48.089 [2024-07-25 15:19:40.162984] 
nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc0b440) on tqpair=0xb87ec0 00:23:48.089 [2024-07-25 15:19:40.162995] nvme_ctrlr.c:4697:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:23:48.089 [2024-07-25 15:19:40.163011] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:23:48.089 [2024-07-25 15:19:40.163020] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:23:48.089 [2024-07-25 15:19:40.163027] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:48.089 [2024-07-25 15:19:40.163031] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb87ec0) 00:23:48.089 [2024-07-25 15:19:40.163038] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.089 [2024-07-25 15:19:40.163049] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc0b440, cid 4, qid 0 00:23:48.089 [2024-07-25 15:19:40.163299] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:48.089 [2024-07-25 15:19:40.163306] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:48.089 [2024-07-25 15:19:40.163309] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:48.089 [2024-07-25 15:19:40.163313] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb87ec0): datao=0, datal=4096, cccid=4 00:23:48.089 [2024-07-25 15:19:40.163317] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc0b440) on tqpair(0xb87ec0): expected_datao=0, payload_size=4096 00:23:48.089 [2024-07-25 15:19:40.163322] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:48.089 [2024-07-25 15:19:40.163387] 
nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:48.089 [2024-07-25 15:19:40.163391] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:48.089 [2024-07-25 15:19:40.163599] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:48.089 [2024-07-25 15:19:40.163605] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:48.089 [2024-07-25 15:19:40.163608] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:48.089 [2024-07-25 15:19:40.163612] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc0b440) on tqpair=0xb87ec0 00:23:48.089 [2024-07-25 15:19:40.163625] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:23:48.089 [2024-07-25 15:19:40.163635] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:23:48.089 [2024-07-25 15:19:40.163646] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:48.089 [2024-07-25 15:19:40.163650] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb87ec0) 00:23:48.089 [2024-07-25 15:19:40.163656] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.089 [2024-07-25 15:19:40.163668] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc0b440, cid 4, qid 0 00:23:48.089 [2024-07-25 15:19:40.163892] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:48.089 [2024-07-25 15:19:40.163898] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:48.089 [2024-07-25 15:19:40.163901] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:48.089 [2024-07-25 15:19:40.163905] 
nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb87ec0): datao=0, datal=4096, cccid=4 00:23:48.089 [2024-07-25 15:19:40.163909] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc0b440) on tqpair(0xb87ec0): expected_datao=0, payload_size=4096 00:23:48.089 [2024-07-25 15:19:40.163913] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:48.089 [2024-07-25 15:19:40.163978] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:48.089 [2024-07-25 15:19:40.163982] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:48.089 [2024-07-25 15:19:40.164187] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:48.089 [2024-07-25 15:19:40.164193] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:48.089 [2024-07-25 15:19:40.164196] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:48.089 [2024-07-25 15:19:40.164206] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc0b440) on tqpair=0xb87ec0 00:23:48.089 [2024-07-25 15:19:40.164214] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:23:48.089 [2024-07-25 15:19:40.164222] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:23:48.089 [2024-07-25 15:19:40.164231] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:23:48.089 [2024-07-25 15:19:40.164239] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:23:48.089 [2024-07-25 15:19:40.164244] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:23:48.089 
[2024-07-25 15:19:40.164249] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:23:48.089 [2024-07-25 15:19:40.164254] nvme_ctrlr.c:3114:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:23:48.089 [2024-07-25 15:19:40.164258] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:23:48.089 [2024-07-25 15:19:40.164263] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:23:48.089 [2024-07-25 15:19:40.164277] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:48.089 [2024-07-25 15:19:40.164281] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb87ec0) 00:23:48.089 [2024-07-25 15:19:40.164287] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.089 [2024-07-25 15:19:40.164294] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:48.089 [2024-07-25 15:19:40.164298] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:48.089 [2024-07-25 15:19:40.164301] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xb87ec0) 00:23:48.089 [2024-07-25 15:19:40.164307] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:23:48.089 [2024-07-25 15:19:40.164324] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc0b440, cid 4, qid 0 00:23:48.089 [2024-07-25 15:19:40.164330] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc0b5c0, cid 5, qid 0 00:23:48.089 [2024-07-25 15:19:40.164574] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:48.089 
[2024-07-25 15:19:40.164581] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:48.089 [2024-07-25 15:19:40.164584] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:48.089 [2024-07-25 15:19:40.164588] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc0b440) on tqpair=0xb87ec0 00:23:48.089 [2024-07-25 15:19:40.164595] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:48.089 [2024-07-25 15:19:40.164600] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:48.089 [2024-07-25 15:19:40.164604] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:48.089 [2024-07-25 15:19:40.164607] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc0b5c0) on tqpair=0xb87ec0 00:23:48.089 [2024-07-25 15:19:40.164617] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:48.089 [2024-07-25 15:19:40.164620] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xb87ec0) 00:23:48.089 [2024-07-25 15:19:40.164627] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.089 [2024-07-25 15:19:40.164637] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc0b5c0, cid 5, qid 0 00:23:48.089 [2024-07-25 15:19:40.164842] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:48.089 [2024-07-25 15:19:40.164848] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:48.090 [2024-07-25 15:19:40.164852] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:48.090 [2024-07-25 15:19:40.164855] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc0b5c0) on tqpair=0xb87ec0 00:23:48.090 [2024-07-25 15:19:40.164864] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:48.090 [2024-07-25 15:19:40.164868] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xb87ec0) 00:23:48.090 [2024-07-25 15:19:40.164874] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.090 [2024-07-25 15:19:40.164884] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc0b5c0, cid 5, qid 0 00:23:48.090 [2024-07-25 15:19:40.165130] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:48.090 [2024-07-25 15:19:40.165136] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:48.090 [2024-07-25 15:19:40.165139] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:48.090 [2024-07-25 15:19:40.165143] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc0b5c0) on tqpair=0xb87ec0 00:23:48.090 [2024-07-25 15:19:40.165152] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:48.090 [2024-07-25 15:19:40.165155] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xb87ec0) 00:23:48.090 [2024-07-25 15:19:40.165162] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.090 [2024-07-25 15:19:40.165171] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc0b5c0, cid 5, qid 0 00:23:48.090 [2024-07-25 15:19:40.165404] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:48.090 [2024-07-25 15:19:40.165411] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:48.090 [2024-07-25 15:19:40.165414] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:48.090 [2024-07-25 15:19:40.165418] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc0b5c0) on tqpair=0xb87ec0 00:23:48.090 [2024-07-25 15:19:40.165433] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
enter 00:23:48.090 [2024-07-25 15:19:40.165437] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xb87ec0) 00:23:48.090 [2024-07-25 15:19:40.165446] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.090 [2024-07-25 15:19:40.165453] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:48.090 [2024-07-25 15:19:40.165457] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb87ec0) 00:23:48.090 [2024-07-25 15:19:40.165463] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.090 [2024-07-25 15:19:40.165470] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:48.090 [2024-07-25 15:19:40.165473] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0xb87ec0) 00:23:48.090 [2024-07-25 15:19:40.165479] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.090 [2024-07-25 15:19:40.165487] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:48.090 [2024-07-25 15:19:40.165490] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xb87ec0) 00:23:48.090 [2024-07-25 15:19:40.165496] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.090 [2024-07-25 15:19:40.165508] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc0b5c0, cid 5, qid 0 00:23:48.090 [2024-07-25 15:19:40.165513] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc0b440, cid 4, qid 0 
00:23:48.090 [2024-07-25 15:19:40.165518] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc0b740, cid 6, qid 0 00:23:48.090 [2024-07-25 15:19:40.165522] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc0b8c0, cid 7, qid 0 00:23:48.090 [2024-07-25 15:19:40.165810] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:48.090 [2024-07-25 15:19:40.165817] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:48.090 [2024-07-25 15:19:40.165820] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:48.090 [2024-07-25 15:19:40.165823] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb87ec0): datao=0, datal=8192, cccid=5 00:23:48.090 [2024-07-25 15:19:40.165828] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc0b5c0) on tqpair(0xb87ec0): expected_datao=0, payload_size=8192 00:23:48.090 [2024-07-25 15:19:40.165832] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:48.090 [2024-07-25 15:19:40.166162] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:48.090 [2024-07-25 15:19:40.166166] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:48.090 [2024-07-25 15:19:40.166172] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:48.090 [2024-07-25 15:19:40.166177] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:48.090 [2024-07-25 15:19:40.166181] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:48.090 [2024-07-25 15:19:40.166184] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb87ec0): datao=0, datal=512, cccid=4 00:23:48.090 [2024-07-25 15:19:40.166188] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc0b440) on tqpair(0xb87ec0): expected_datao=0, payload_size=512 00:23:48.090 [2024-07-25 15:19:40.166193] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:48.090 
[2024-07-25 15:19:40.166199] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:48.090 [2024-07-25 15:19:40.170212] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:48.090 [2024-07-25 15:19:40.170219] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:48.090 [2024-07-25 15:19:40.170224] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:48.090 [2024-07-25 15:19:40.170228] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:48.090 [2024-07-25 15:19:40.170231] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb87ec0): datao=0, datal=512, cccid=6 00:23:48.090 [2024-07-25 15:19:40.170239] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc0b740) on tqpair(0xb87ec0): expected_datao=0, payload_size=512 00:23:48.090 [2024-07-25 15:19:40.170243] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:48.090 [2024-07-25 15:19:40.170249] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:48.090 [2024-07-25 15:19:40.170253] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:48.090 [2024-07-25 15:19:40.170258] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:48.090 [2024-07-25 15:19:40.170264] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:48.090 [2024-07-25 15:19:40.170267] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:48.090 [2024-07-25 15:19:40.170271] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb87ec0): datao=0, datal=4096, cccid=7 00:23:48.090 [2024-07-25 15:19:40.170275] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc0b8c0) on tqpair(0xb87ec0): expected_datao=0, payload_size=4096 00:23:48.090 [2024-07-25 15:19:40.170279] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:48.090 [2024-07-25 15:19:40.170285] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: 
*DEBUG*: enter 00:23:48.090 [2024-07-25 15:19:40.170289] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:48.090 [2024-07-25 15:19:40.170296] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:48.090 [2024-07-25 15:19:40.170302] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:48.090 [2024-07-25 15:19:40.170305] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:48.090 [2024-07-25 15:19:40.170309] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc0b5c0) on tqpair=0xb87ec0 00:23:48.090 [2024-07-25 15:19:40.170322] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:48.090 [2024-07-25 15:19:40.170328] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:48.090 [2024-07-25 15:19:40.170331] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:48.090 [2024-07-25 15:19:40.170335] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc0b440) on tqpair=0xb87ec0 00:23:48.090 [2024-07-25 15:19:40.170344] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:48.090 [2024-07-25 15:19:40.170350] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:48.090 [2024-07-25 15:19:40.170353] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:48.090 [2024-07-25 15:19:40.170357] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc0b740) on tqpair=0xb87ec0 00:23:48.090 [2024-07-25 15:19:40.170364] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:48.090 [2024-07-25 15:19:40.170369] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:48.090 [2024-07-25 15:19:40.170373] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:48.090 [2024-07-25 15:19:40.170376] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc0b8c0) on tqpair=0xb87ec0 00:23:48.090 
===================================================== 00:23:48.090 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:48.090 ===================================================== 00:23:48.090 Controller Capabilities/Features 00:23:48.090 ================================ 00:23:48.090 Vendor ID: 8086 00:23:48.090 Subsystem Vendor ID: 8086 00:23:48.090 Serial Number: SPDK00000000000001 00:23:48.090 Model Number: SPDK bdev Controller 00:23:48.090 Firmware Version: 24.09 00:23:48.090 Recommended Arb Burst: 6 00:23:48.090 IEEE OUI Identifier: e4 d2 5c 00:23:48.090 Multi-path I/O 00:23:48.090 May have multiple subsystem ports: Yes 00:23:48.090 May have multiple controllers: Yes 00:23:48.090 Associated with SR-IOV VF: No 00:23:48.090 Max Data Transfer Size: 131072 00:23:48.090 Max Number of Namespaces: 32 00:23:48.090 Max Number of I/O Queues: 127 00:23:48.090 NVMe Specification Version (VS): 1.3 00:23:48.090 NVMe Specification Version (Identify): 1.3 00:23:48.090 Maximum Queue Entries: 128 00:23:48.090 Contiguous Queues Required: Yes 00:23:48.090 Arbitration Mechanisms Supported 00:23:48.090 Weighted Round Robin: Not Supported 00:23:48.090 Vendor Specific: Not Supported 00:23:48.090 Reset Timeout: 15000 ms 00:23:48.090 Doorbell Stride: 4 bytes 00:23:48.090 NVM Subsystem Reset: Not Supported 00:23:48.090 Command Sets Supported 00:23:48.090 NVM Command Set: Supported 00:23:48.090 Boot Partition: Not Supported 00:23:48.090 Memory Page Size Minimum: 4096 bytes 00:23:48.090 Memory Page Size Maximum: 4096 bytes 00:23:48.090 Persistent Memory Region: Not Supported 00:23:48.090 Optional Asynchronous Events Supported 00:23:48.090 Namespace Attribute Notices: Supported 00:23:48.090 Firmware Activation Notices: Not Supported 00:23:48.090 ANA Change Notices: Not Supported 00:23:48.090 PLE Aggregate Log Change Notices: Not Supported 00:23:48.090 LBA Status Info Alert Notices: Not Supported 00:23:48.090 EGE Aggregate Log Change Notices: Not Supported 
00:23:48.090 Normal NVM Subsystem Shutdown event: Not Supported 00:23:48.090 Zone Descriptor Change Notices: Not Supported 00:23:48.090 Discovery Log Change Notices: Not Supported 00:23:48.090 Controller Attributes 00:23:48.090 128-bit Host Identifier: Supported 00:23:48.090 Non-Operational Permissive Mode: Not Supported 00:23:48.090 NVM Sets: Not Supported 00:23:48.091 Read Recovery Levels: Not Supported 00:23:48.091 Endurance Groups: Not Supported 00:23:48.091 Predictable Latency Mode: Not Supported 00:23:48.091 Traffic Based Keep ALive: Not Supported 00:23:48.091 Namespace Granularity: Not Supported 00:23:48.091 SQ Associations: Not Supported 00:23:48.091 UUID List: Not Supported 00:23:48.091 Multi-Domain Subsystem: Not Supported 00:23:48.091 Fixed Capacity Management: Not Supported 00:23:48.091 Variable Capacity Management: Not Supported 00:23:48.091 Delete Endurance Group: Not Supported 00:23:48.091 Delete NVM Set: Not Supported 00:23:48.091 Extended LBA Formats Supported: Not Supported 00:23:48.091 Flexible Data Placement Supported: Not Supported 00:23:48.091 00:23:48.091 Controller Memory Buffer Support 00:23:48.091 ================================ 00:23:48.091 Supported: No 00:23:48.091 00:23:48.091 Persistent Memory Region Support 00:23:48.091 ================================ 00:23:48.091 Supported: No 00:23:48.091 00:23:48.091 Admin Command Set Attributes 00:23:48.091 ============================ 00:23:48.091 Security Send/Receive: Not Supported 00:23:48.091 Format NVM: Not Supported 00:23:48.091 Firmware Activate/Download: Not Supported 00:23:48.091 Namespace Management: Not Supported 00:23:48.091 Device Self-Test: Not Supported 00:23:48.091 Directives: Not Supported 00:23:48.091 NVMe-MI: Not Supported 00:23:48.091 Virtualization Management: Not Supported 00:23:48.091 Doorbell Buffer Config: Not Supported 00:23:48.091 Get LBA Status Capability: Not Supported 00:23:48.091 Command & Feature Lockdown Capability: Not Supported 00:23:48.091 Abort Command 
Limit: 4 00:23:48.091 Async Event Request Limit: 4 00:23:48.091 Number of Firmware Slots: N/A 00:23:48.091 Firmware Slot 1 Read-Only: N/A 00:23:48.091 Firmware Activation Without Reset: N/A 00:23:48.091 Multiple Update Detection Support: N/A 00:23:48.091 Firmware Update Granularity: No Information Provided 00:23:48.091 Per-Namespace SMART Log: No 00:23:48.091 Asymmetric Namespace Access Log Page: Not Supported 00:23:48.091 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:23:48.091 Command Effects Log Page: Supported 00:23:48.091 Get Log Page Extended Data: Supported 00:23:48.091 Telemetry Log Pages: Not Supported 00:23:48.091 Persistent Event Log Pages: Not Supported 00:23:48.091 Supported Log Pages Log Page: May Support 00:23:48.091 Commands Supported & Effects Log Page: Not Supported 00:23:48.091 Feature Identifiers & Effects Log Page:May Support 00:23:48.091 NVMe-MI Commands & Effects Log Page: May Support 00:23:48.091 Data Area 4 for Telemetry Log: Not Supported 00:23:48.091 Error Log Page Entries Supported: 128 00:23:48.091 Keep Alive: Supported 00:23:48.091 Keep Alive Granularity: 10000 ms 00:23:48.091 00:23:48.091 NVM Command Set Attributes 00:23:48.091 ========================== 00:23:48.091 Submission Queue Entry Size 00:23:48.091 Max: 64 00:23:48.091 Min: 64 00:23:48.091 Completion Queue Entry Size 00:23:48.091 Max: 16 00:23:48.091 Min: 16 00:23:48.091 Number of Namespaces: 32 00:23:48.091 Compare Command: Supported 00:23:48.091 Write Uncorrectable Command: Not Supported 00:23:48.091 Dataset Management Command: Supported 00:23:48.091 Write Zeroes Command: Supported 00:23:48.091 Set Features Save Field: Not Supported 00:23:48.091 Reservations: Supported 00:23:48.091 Timestamp: Not Supported 00:23:48.091 Copy: Supported 00:23:48.091 Volatile Write Cache: Present 00:23:48.091 Atomic Write Unit (Normal): 1 00:23:48.091 Atomic Write Unit (PFail): 1 00:23:48.091 Atomic Compare & Write Unit: 1 00:23:48.091 Fused Compare & Write: Supported 00:23:48.091 Scatter-Gather 
List 00:23:48.091 SGL Command Set: Supported 00:23:48.091 SGL Keyed: Supported 00:23:48.091 SGL Bit Bucket Descriptor: Not Supported 00:23:48.091 SGL Metadata Pointer: Not Supported 00:23:48.091 Oversized SGL: Not Supported 00:23:48.091 SGL Metadata Address: Not Supported 00:23:48.091 SGL Offset: Supported 00:23:48.091 Transport SGL Data Block: Not Supported 00:23:48.091 Replay Protected Memory Block: Not Supported 00:23:48.091 00:23:48.091 Firmware Slot Information 00:23:48.091 ========================= 00:23:48.091 Active slot: 1 00:23:48.091 Slot 1 Firmware Revision: 24.09 00:23:48.091 00:23:48.091 00:23:48.091 Commands Supported and Effects 00:23:48.091 ============================== 00:23:48.091 Admin Commands 00:23:48.091 -------------- 00:23:48.091 Get Log Page (02h): Supported 00:23:48.091 Identify (06h): Supported 00:23:48.091 Abort (08h): Supported 00:23:48.091 Set Features (09h): Supported 00:23:48.091 Get Features (0Ah): Supported 00:23:48.091 Asynchronous Event Request (0Ch): Supported 00:23:48.091 Keep Alive (18h): Supported 00:23:48.091 I/O Commands 00:23:48.091 ------------ 00:23:48.091 Flush (00h): Supported LBA-Change 00:23:48.091 Write (01h): Supported LBA-Change 00:23:48.091 Read (02h): Supported 00:23:48.091 Compare (05h): Supported 00:23:48.091 Write Zeroes (08h): Supported LBA-Change 00:23:48.091 Dataset Management (09h): Supported LBA-Change 00:23:48.091 Copy (19h): Supported LBA-Change 00:23:48.091 00:23:48.091 Error Log 00:23:48.091 ========= 00:23:48.091 00:23:48.091 Arbitration 00:23:48.091 =========== 00:23:48.091 Arbitration Burst: 1 00:23:48.091 00:23:48.091 Power Management 00:23:48.091 ================ 00:23:48.091 Number of Power States: 1 00:23:48.091 Current Power State: Power State #0 00:23:48.091 Power State #0: 00:23:48.091 Max Power: 0.00 W 00:23:48.091 Non-Operational State: Operational 00:23:48.091 Entry Latency: Not Reported 00:23:48.091 Exit Latency: Not Reported 00:23:48.091 Relative Read Throughput: 0 00:23:48.091 
Relative Read Latency: 0 00:23:48.091 Relative Write Throughput: 0 00:23:48.091 Relative Write Latency: 0 00:23:48.091 Idle Power: Not Reported 00:23:48.091 Active Power: Not Reported 00:23:48.091 Non-Operational Permissive Mode: Not Supported 00:23:48.091 00:23:48.091 Health Information 00:23:48.091 ================== 00:23:48.091 Critical Warnings: 00:23:48.091 Available Spare Space: OK 00:23:48.091 Temperature: OK 00:23:48.091 Device Reliability: OK 00:23:48.091 Read Only: No 00:23:48.091 Volatile Memory Backup: OK 00:23:48.091 Current Temperature: 0 Kelvin (-273 Celsius) 00:23:48.091 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:23:48.091 Available Spare: 0% 00:23:48.091 Available Spare Threshold: 0% 00:23:48.091 Life Percentage Used:[2024-07-25 15:19:40.170475] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:48.091 [2024-07-25 15:19:40.170480] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xb87ec0) 00:23:48.091 [2024-07-25 15:19:40.170487] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.091 [2024-07-25 15:19:40.170501] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc0b8c0, cid 7, qid 0 00:23:48.091 [2024-07-25 15:19:40.170738] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:48.091 [2024-07-25 15:19:40.170744] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:48.091 [2024-07-25 15:19:40.170748] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:48.091 [2024-07-25 15:19:40.170752] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc0b8c0) on tqpair=0xb87ec0 00:23:48.091 [2024-07-25 15:19:40.170782] nvme_ctrlr.c:4361:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:23:48.091 [2024-07-25 15:19:40.170791] nvme_tcp.c:1069:nvme_tcp_req_complete: 
*DEBUG*: complete tcp_req(0xc0ae40) on tqpair=0xb87ec0 00:23:48.091 [2024-07-25 15:19:40.170799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.091 [2024-07-25 15:19:40.170805] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc0afc0) on tqpair=0xb87ec0 00:23:48.091 [2024-07-25 15:19:40.170809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.091 [2024-07-25 15:19:40.170814] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc0b140) on tqpair=0xb87ec0 00:23:48.091 [2024-07-25 15:19:40.170819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.091 [2024-07-25 15:19:40.170823] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc0b2c0) on tqpair=0xb87ec0 00:23:48.091 [2024-07-25 15:19:40.170828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.091 [2024-07-25 15:19:40.170836] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:48.091 [2024-07-25 15:19:40.170840] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:48.091 [2024-07-25 15:19:40.170843] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb87ec0) 00:23:48.091 [2024-07-25 15:19:40.170850] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.091 [2024-07-25 15:19:40.170863] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc0b2c0, cid 3, qid 0 00:23:48.091 [2024-07-25 15:19:40.171079] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:48.091 [2024-07-25 15:19:40.171086] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: 
enter: pdu type =5 00:23:48.091 [2024-07-25 15:19:40.171089] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:48.091 [2024-07-25 15:19:40.171093] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc0b2c0) on tqpair=0xb87ec0 00:23:48.091 [2024-07-25 15:19:40.171100] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:48.091 [2024-07-25 15:19:40.171103] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:48.091 [2024-07-25 15:19:40.171107] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb87ec0) 00:23:48.091 [2024-07-25 15:19:40.171113] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.091 [2024-07-25 15:19:40.171127] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc0b2c0, cid 3, qid 0 00:23:48.091 [2024-07-25 15:19:40.171374] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:48.091 [2024-07-25 15:19:40.171381] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:48.091 [2024-07-25 15:19:40.171384] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:48.092 [2024-07-25 15:19:40.171388] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc0b2c0) on tqpair=0xb87ec0 00:23:48.092 [2024-07-25 15:19:40.171393] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:23:48.092 [2024-07-25 15:19:40.171397] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:23:48.092 [2024-07-25 15:19:40.171407] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:48.092 [2024-07-25 15:19:40.171410] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:48.092 [2024-07-25 15:19:40.171414] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd 
cid=3 on tqpair(0xb87ec0) 00:23:48.092 [2024-07-25 15:19:40.171420] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.092 [2024-07-25 15:19:40.171431] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc0b2c0, cid 3, qid 0 00:23:48.092 [2024-07-25 15:19:40.171673] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:48.092 [2024-07-25 15:19:40.171679] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:48.092 [2024-07-25 15:19:40.171682] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:48.092 [2024-07-25 15:19:40.171689] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc0b2c0) on tqpair=0xb87ec0 00:23:48.092 [2024-07-25 15:19:40.171699] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:48.092 [2024-07-25 15:19:40.171703] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:48.092 [2024-07-25 15:19:40.171706] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb87ec0) 00:23:48.092 [2024-07-25 15:19:40.171713] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.092 [2024-07-25 15:19:40.171723] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc0b2c0, cid 3, qid 0 00:23:48.092 [2024-07-25 15:19:40.172058] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:48.092 [2024-07-25 15:19:40.172064] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:48.092 [2024-07-25 15:19:40.172068] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:48.092 [2024-07-25 15:19:40.172071] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc0b2c0) on tqpair=0xb87ec0 00:23:48.092 [2024-07-25 15:19:40.172081] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: 
enter 00:23:48.092 [2024-07-25 15:19:40.172085] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:48.092 [2024-07-25 15:19:40.172088] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb87ec0) 00:23:48.092 [2024-07-25 15:19:40.172095] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.092 [2024-07-25 15:19:40.172104] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc0b2c0, cid 3, qid 0 00:23:48.092 [2024-07-25 15:19:40.172331] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:48.092 [2024-07-25 15:19:40.172337] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:48.092 [2024-07-25 15:19:40.172341] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:48.092 [2024-07-25 15:19:40.172344] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc0b2c0) on tqpair=0xb87ec0 00:23:48.092 [2024-07-25 15:19:40.172354] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:48.092 [2024-07-25 15:19:40.172358] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:48.092 [2024-07-25 15:19:40.172361] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb87ec0) 00:23:48.092 [2024-07-25 15:19:40.172368] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.092 [2024-07-25 15:19:40.172378] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc0b2c0, cid 3, qid 0 00:23:48.092 [2024-07-25 15:19:40.172576] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:48.092 [2024-07-25 15:19:40.172583] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:48.092 [2024-07-25 15:19:40.172586] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:23:48.092 [2024-07-25 15:19:40.172590] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc0b2c0) on tqpair=0xb87ec0 00:23:48.092 [2024-07-25 15:19:40.172599] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:48.092 [2024-07-25 15:19:40.172603] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:48.092 [2024-07-25 15:19:40.172606] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb87ec0) 00:23:48.092 [2024-07-25 15:19:40.172613] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.092 [2024-07-25 15:19:40.172623] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc0b2c0, cid 3, qid 0 00:23:48.092 [2024-07-25 15:19:40.172824] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:48.092 [2024-07-25 15:19:40.172830] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:48.092 [2024-07-25 15:19:40.172833] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:48.092 [2024-07-25 15:19:40.172837] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc0b2c0) on tqpair=0xb87ec0 00:23:48.092 [2024-07-25 15:19:40.172849] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:48.092 [2024-07-25 15:19:40.172853] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:48.092 [2024-07-25 15:19:40.172857] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb87ec0) 00:23:48.092 [2024-07-25 15:19:40.172863] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.092 [2024-07-25 15:19:40.172873] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc0b2c0, cid 3, qid 0 00:23:48.092 [2024-07-25 15:19:40.173068] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type 
= 5 00:23:48.092 [2024-07-25 15:19:40.173075] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:48.092 [2024-07-25 15:19:40.173078] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:48.092 [2024-07-25 15:19:40.173082] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc0b2c0) on tqpair=0xb87ec0 00:23:48.092 [2024-07-25 15:19:40.173091] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:48.092 [2024-07-25 15:19:40.173095] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:48.092 [2024-07-25 15:19:40.173098] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb87ec0) 00:23:48.092 [2024-07-25 15:19:40.173105] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.092 [2024-07-25 15:19:40.173115] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc0b2c0, cid 3, qid 0 00:23:48.092 [2024-07-25 15:19:40.173342] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:48.092 [2024-07-25 15:19:40.173349] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:48.092 [2024-07-25 15:19:40.173352] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:48.092 [2024-07-25 15:19:40.173356] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc0b2c0) on tqpair=0xb87ec0 00:23:48.092 [2024-07-25 15:19:40.173366] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:48.092 [2024-07-25 15:19:40.173369] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:48.092 [2024-07-25 15:19:40.173373] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb87ec0) 00:23:48.092 [2024-07-25 15:19:40.173380] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:48.092 [2024-07-25 15:19:40.173390] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc0b2c0, cid 3, qid 0 00:23:48.092 [2024-07-25 15:19:40.173591] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:48.092 [2024-07-25 15:19:40.173597] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:48.092 [2024-07-25 15:19:40.173601] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:48.092 [2024-07-25 15:19:40.173604] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc0b2c0) on tqpair=0xb87ec0 00:23:48.092 [2024-07-25 15:19:40.173614] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:48.092 [2024-07-25 15:19:40.173617] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:48.092 [2024-07-25 15:19:40.173621] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb87ec0) 00:23:48.092 [2024-07-25 15:19:40.173627] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.092 [2024-07-25 15:19:40.173637] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc0b2c0, cid 3, qid 0 00:23:48.092 [2024-07-25 15:19:40.173859] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:48.092 [2024-07-25 15:19:40.173865] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:48.092 [2024-07-25 15:19:40.173869] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:48.092 [2024-07-25 15:19:40.173872] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc0b2c0) on tqpair=0xb87ec0 00:23:48.092 [2024-07-25 15:19:40.173882] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:48.092 [2024-07-25 15:19:40.173889] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:48.092 [2024-07-25 15:19:40.173892] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb87ec0) 00:23:48.092 [2024-07-25 15:19:40.173899] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.092 [2024-07-25 15:19:40.173909] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc0b2c0, cid 3, qid 0 00:23:48.092 [2024-07-25 15:19:40.174128] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:48.092 [2024-07-25 15:19:40.174134] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:48.092 [2024-07-25 15:19:40.174137] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:48.092 [2024-07-25 15:19:40.174141] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc0b2c0) on tqpair=0xb87ec0 00:23:48.092 [2024-07-25 15:19:40.174151] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:48.092 [2024-07-25 15:19:40.174155] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:48.092 [2024-07-25 15:19:40.174158] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb87ec0) 00:23:48.092 [2024-07-25 15:19:40.174165] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.092 [2024-07-25 15:19:40.174174] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc0b2c0, cid 3, qid 0 00:23:48.092 [2024-07-25 15:19:40.178209] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:48.092 [2024-07-25 15:19:40.178217] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:48.092 [2024-07-25 15:19:40.178221] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:48.092 [2024-07-25 15:19:40.178225] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc0b2c0) on tqpair=0xb87ec0 00:23:48.092 [2024-07-25 15:19:40.178232] 
nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 6 milliseconds 00:23:48.092 0% 00:23:48.092 Data Units Read: 0 00:23:48.092 Data Units Written: 0 00:23:48.092 Host Read Commands: 0 00:23:48.092 Host Write Commands: 0 00:23:48.092 Controller Busy Time: 0 minutes 00:23:48.092 Power Cycles: 0 00:23:48.092 Power On Hours: 0 hours 00:23:48.092 Unsafe Shutdowns: 0 00:23:48.092 Unrecoverable Media Errors: 0 00:23:48.092 Lifetime Error Log Entries: 0 00:23:48.092 Warning Temperature Time: 0 minutes 00:23:48.092 Critical Temperature Time: 0 minutes 00:23:48.092 00:23:48.092 Number of Queues 00:23:48.092 ================ 00:23:48.092 Number of I/O Submission Queues: 127 00:23:48.092 Number of I/O Completion Queues: 127 00:23:48.092 00:23:48.092 Active Namespaces 00:23:48.092 ================= 00:23:48.092 Namespace ID:1 00:23:48.092 Error Recovery Timeout: Unlimited 00:23:48.092 Command Set Identifier: NVM (00h) 00:23:48.092 Deallocate: Supported 00:23:48.092 Deallocated/Unwritten Error: Not Supported 00:23:48.092 Deallocated Read Value: Unknown 00:23:48.092 Deallocate in Write Zeroes: Not Supported 00:23:48.092 Deallocated Guard Field: 0xFFFF 00:23:48.092 Flush: Supported 00:23:48.092 Reservation: Supported 00:23:48.092 Namespace Sharing Capabilities: Multiple Controllers 00:23:48.092 Size (in LBAs): 131072 (0GiB) 00:23:48.093 Capacity (in LBAs): 131072 (0GiB) 00:23:48.093 Utilization (in LBAs): 131072 (0GiB) 00:23:48.093 NGUID: ABCDEF0123456789ABCDEF0123456789 00:23:48.093 EUI64: ABCDEF0123456789 00:23:48.093 UUID: 3bf21805-52a6-456e-9b70-c8c18a4deaf9 00:23:48.093 Thin Provisioning: Not Supported 00:23:48.093 Per-NS Atomic Units: Yes 00:23:48.093 Atomic Boundary Size (Normal): 0 00:23:48.093 Atomic Boundary Size (PFail): 0 00:23:48.093 Atomic Boundary Offset: 0 00:23:48.093 Maximum Single Source Range Length: 65535 00:23:48.093 Maximum Copy Length: 65535 00:23:48.093 Maximum Source Range Count: 1 00:23:48.093 
NGUID/EUI64 Never Reused: No 00:23:48.093 Namespace Write Protected: No 00:23:48.093 Number of LBA Formats: 1 00:23:48.093 Current LBA Format: LBA Format #00 00:23:48.093 LBA Format #00: Data Size: 512 Metadata Size: 0 00:23:48.093 00:23:48.093 15:19:40 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:23:48.093 15:19:40 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:48.093 15:19:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:48.093 15:19:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:48.093 15:19:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:48.093 15:19:40 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:23:48.093 15:19:40 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:23:48.093 15:19:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:48.093 15:19:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:23:48.093 15:19:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:48.093 15:19:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:23:48.093 15:19:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:48.093 15:19:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:48.093 rmmod nvme_tcp 00:23:48.093 rmmod nvme_fabrics 00:23:48.093 rmmod nvme_keyring 00:23:48.093 15:19:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:48.093 15:19:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:23:48.093 15:19:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:23:48.093 15:19:40 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@489 -- # '[' -n 342595 ']' 00:23:48.093 15:19:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 342595 00:23:48.093 15:19:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@950 -- # '[' -z 342595 ']' 00:23:48.093 15:19:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # kill -0 342595 00:23:48.354 15:19:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # uname 00:23:48.354 15:19:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:48.354 15:19:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 342595 00:23:48.354 15:19:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:48.354 15:19:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:48.354 15:19:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@968 -- # echo 'killing process with pid 342595' 00:23:48.354 killing process with pid 342595 00:23:48.354 15:19:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@969 -- # kill 342595 00:23:48.354 15:19:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@974 -- # wait 342595 00:23:48.354 15:19:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:48.354 15:19:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:48.354 15:19:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:48.354 15:19:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:48.354 15:19:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:48.354 15:19:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:48.354 15:19:40 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:48.354 15:19:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:50.900 15:19:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:50.900 00:23:50.900 real 0m10.941s 00:23:50.900 user 0m7.749s 00:23:50.900 sys 0m5.652s 00:23:50.900 15:19:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:50.900 15:19:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:50.900 ************************************ 00:23:50.900 END TEST nvmf_identify 00:23:50.900 ************************************ 00:23:50.900 15:19:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:23:50.900 15:19:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:50.900 15:19:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:50.900 15:19:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.900 ************************************ 00:23:50.900 START TEST nvmf_perf 00:23:50.900 ************************************ 00:23:50.900 15:19:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:23:50.900 * Looking for test storage... 
00:23:50.900 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:50.900 15:19:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:50.900 15:19:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:23:50.900 15:19:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:50.900 15:19:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:50.900 15:19:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:50.900 15:19:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:50.900 15:19:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:50.900 15:19:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:50.900 15:19:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:50.900 15:19:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:50.900 15:19:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:50.900 15:19:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:50.900 15:19:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:50.900 15:19:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:50.900 15:19:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:50.900 15:19:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:50.900 15:19:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:50.900 15:19:42 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:50.900 15:19:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:50.900 15:19:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:50.900 15:19:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:50.900 15:19:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:50.900 15:19:42 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.901 15:19:42 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.901 15:19:42 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.901 15:19:42 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:23:50.901 15:19:42 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.901 15:19:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:23:50.901 15:19:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:50.901 15:19:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:50.901 15:19:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:50.901 15:19:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:50.901 15:19:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:50.901 15:19:42 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:50.901 15:19:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:50.901 15:19:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:50.901 15:19:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:23:50.901 15:19:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:23:50.901 15:19:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:50.901 15:19:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:23:50.901 15:19:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:50.901 15:19:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:50.901 15:19:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:50.901 15:19:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:50.901 15:19:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:50.901 15:19:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:50.901 15:19:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:50.901 15:19:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:50.901 15:19:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:50.901 15:19:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:50.901 15:19:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:23:50.901 15:19:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:57.525 15:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@289 -- # local 
intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:57.525 15:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:23:57.525 15:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:57.525 15:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:57.525 15:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:57.525 15:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:57.525 15:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:57.525 15:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:23:57.525 15:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:57.525 15:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:23:57.525 15:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:23:57.525 15:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:23:57.525 15:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:23:57.525 15:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:23:57.525 15:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:23:57.525 15:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:57.525 15:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:57.525 15:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:57.525 15:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:57.525 15:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:57.525 15:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:57.525 15:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:57.525 15:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:57.525 15:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:57.525 15:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:57.525 15:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:57.525 15:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:57.526 15:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:57.526 15:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:57.526 15:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:57.526 15:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:57.526 15:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:57.526 15:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:57.526 15:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:57.526 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:57.526 15:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:57.526 15:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:57.526 15:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:57.526 15:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:57.526 15:19:49 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:57.526 15:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:57.526 15:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:57.526 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:57.526 15:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:57.526 15:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:57.526 15:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:57.526 15:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:57.526 15:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:57.526 15:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:57.526 15:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:57.526 15:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:57.526 15:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:57.526 15:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:57.526 15:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:57.526 15:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:57.526 15:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:57.526 15:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:57.526 15:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:57.526 15:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 
0000:4b:00.0: cvl_0_0' 00:23:57.526 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:57.526 15:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:57.526 15:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:57.526 15:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:57.526 15:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:57.526 15:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:57.526 15:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:57.526 15:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:57.526 15:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:57.526 15:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:57.526 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:57.526 15:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:57.526 15:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:57.526 15:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:23:57.526 15:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:57.526 15:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:57.526 15:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:57.526 15:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:57.526 15:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:57.526 15:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@231 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:57.526 15:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:57.526 15:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:57.526 15:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:57.526 15:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:57.526 15:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:57.526 15:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:57.526 15:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:57.526 15:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:57.526 15:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:57.526 15:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:57.526 15:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:57.526 15:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:57.526 15:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:57.526 15:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:57.526 15:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:57.526 15:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:57.526 15:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:57.526 
PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:57.526 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.606 ms 00:23:57.526 00:23:57.526 --- 10.0.0.2 ping statistics --- 00:23:57.526 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:57.526 rtt min/avg/max/mdev = 0.606/0.606/0.606/0.000 ms 00:23:57.526 15:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:57.526 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:57.526 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.411 ms 00:23:57.526 00:23:57.526 --- 10.0.0.1 ping statistics --- 00:23:57.526 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:57.526 rtt min/avg/max/mdev = 0.411/0.411/0.411/0.000 ms 00:23:57.526 15:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:57.526 15:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:23:57.526 15:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:57.526 15:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:57.526 15:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:57.526 15:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:57.526 15:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:57.526 15:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:57.526 15:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:57.526 15:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:23:57.526 15:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:57.526 15:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 
00:23:57.526 15:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:57.526 15:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=346948 00:23:57.526 15:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 346948 00:23:57.526 15:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:57.526 15:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@831 -- # '[' -z 346948 ']' 00:23:57.526 15:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:57.526 15:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:57.526 15:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:57.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:57.526 15:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:57.527 15:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:57.788 [2024-07-25 15:19:49.748063] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:23:57.788 [2024-07-25 15:19:49.748114] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:57.788 EAL: No free 2048 kB hugepages reported on node 1 00:23:57.788 [2024-07-25 15:19:49.814943] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:57.788 [2024-07-25 15:19:49.880357] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:23:57.788 [2024-07-25 15:19:49.880407] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:57.788 [2024-07-25 15:19:49.880415] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:57.788 [2024-07-25 15:19:49.880422] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:57.788 [2024-07-25 15:19:49.880431] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:57.788 [2024-07-25 15:19:49.884218] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:57.788 [2024-07-25 15:19:49.884282] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:57.788 [2024-07-25 15:19:49.884547] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:57.788 [2024-07-25 15:19:49.884549] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:58.361 15:19:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:58.361 15:19:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # return 0 00:23:58.361 15:19:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:58.361 15:19:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:58.361 15:19:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:58.361 15:19:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:58.623 15:19:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:23:58.623 15:19:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:23:58.884 15:19:51 nvmf_tcp.nvmf_host.nvmf_perf 
-- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:23:58.884 15:19:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:23:59.146 15:19:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:65:00.0 00:23:59.146 15:19:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:23:59.405 15:19:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:23:59.405 15:19:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:65:00.0 ']' 00:23:59.405 15:19:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:23:59.405 15:19:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:23:59.405 15:19:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:59.405 [2024-07-25 15:19:51.513318] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:59.405 15:19:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:59.665 15:19:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:23:59.665 15:19:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:59.925 15:19:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:23:59.925 15:19:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:23:59.925 15:19:52 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:00.185 [2024-07-25 15:19:52.191789] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:00.185 15:19:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:00.446 15:19:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:65:00.0 ']' 00:24:00.446 15:19:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:24:00.446 15:19:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:24:00.446 15:19:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:24:01.843 Initializing NVMe Controllers 00:24:01.843 Attached to NVMe Controller at 0000:65:00.0 [144d:a80a] 00:24:01.843 Associating PCIE (0000:65:00.0) NSID 1 with lcore 0 00:24:01.843 Initialization complete. Launching workers. 
00:24:01.843 ======================================================== 00:24:01.843 Latency(us) 00:24:01.843 Device Information : IOPS MiB/s Average min max 00:24:01.843 PCIE (0000:65:00.0) NSID 1 from core 0: 79654.48 311.15 401.34 13.39 7992.28 00:24:01.843 ======================================================== 00:24:01.843 Total : 79654.48 311.15 401.34 13.39 7992.28 00:24:01.843 00:24:01.843 15:19:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:01.843 EAL: No free 2048 kB hugepages reported on node 1 00:24:03.227 Initializing NVMe Controllers 00:24:03.227 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:03.227 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:03.227 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:03.227 Initialization complete. Launching workers. 
00:24:03.227 ======================================================== 00:24:03.227 Latency(us) 00:24:03.227 Device Information : IOPS MiB/s Average min max 00:24:03.227 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 103.00 0.40 9908.35 571.55 45598.33 00:24:03.227 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 45.00 0.18 23033.93 7957.84 47910.12 00:24:03.227 ======================================================== 00:24:03.228 Total : 148.00 0.58 13899.23 571.55 47910.12 00:24:03.228 00:24:03.228 15:19:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:03.228 EAL: No free 2048 kB hugepages reported on node 1 00:24:04.172 Initializing NVMe Controllers 00:24:04.172 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:04.172 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:04.172 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:04.172 Initialization complete. Launching workers. 
00:24:04.172 ======================================================== 00:24:04.172 Latency(us) 00:24:04.172 Device Information : IOPS MiB/s Average min max 00:24:04.172 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8028.49 31.36 3988.15 766.53 8620.18 00:24:04.172 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3778.76 14.76 8515.04 6890.80 16249.74 00:24:04.172 ======================================================== 00:24:04.172 Total : 11807.26 46.12 5436.92 766.53 16249.74 00:24:04.172 00:24:04.433 15:19:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:24:04.433 15:19:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:24:04.433 15:19:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:04.433 EAL: No free 2048 kB hugepages reported on node 1 00:24:06.974 Initializing NVMe Controllers 00:24:06.974 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:06.974 Controller IO queue size 128, less than required. 00:24:06.974 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:06.974 Controller IO queue size 128, less than required. 00:24:06.974 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:06.974 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:06.974 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:06.974 Initialization complete. Launching workers. 
00:24:06.974 ======================================================== 00:24:06.974 Latency(us) 00:24:06.974 Device Information : IOPS MiB/s Average min max 00:24:06.974 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 965.97 241.49 136433.02 79003.99 184468.58 00:24:06.974 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 616.48 154.12 219250.88 56001.46 346097.41 00:24:06.974 ======================================================== 00:24:06.974 Total : 1582.46 395.61 168696.66 56001.46 346097.41 00:24:06.974 00:24:06.974 15:19:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:24:06.974 EAL: No free 2048 kB hugepages reported on node 1 00:24:06.974 No valid NVMe controllers or AIO or URING devices found 00:24:06.974 Initializing NVMe Controllers 00:24:06.974 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:06.974 Controller IO queue size 128, less than required. 00:24:06.974 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:06.974 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:24:06.974 Controller IO queue size 128, less than required. 00:24:06.974 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:06.974 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:24:06.974 WARNING: Some requested NVMe devices were skipped 00:24:06.974 15:19:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:24:06.974 EAL: No free 2048 kB hugepages reported on node 1 00:24:09.517 Initializing NVMe Controllers 00:24:09.517 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:09.517 Controller IO queue size 128, less than required. 00:24:09.517 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:09.517 Controller IO queue size 128, less than required. 00:24:09.517 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:09.517 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:09.517 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:09.517 Initialization complete. Launching workers. 
00:24:09.517 00:24:09.517 ==================== 00:24:09.517 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:24:09.517 TCP transport: 00:24:09.517 polls: 45992 00:24:09.517 idle_polls: 16885 00:24:09.517 sock_completions: 29107 00:24:09.517 nvme_completions: 3709 00:24:09.517 submitted_requests: 5588 00:24:09.517 queued_requests: 1 00:24:09.517 00:24:09.517 ==================== 00:24:09.517 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:24:09.517 TCP transport: 00:24:09.517 polls: 44326 00:24:09.517 idle_polls: 15450 00:24:09.517 sock_completions: 28876 00:24:09.517 nvme_completions: 3905 00:24:09.517 submitted_requests: 5830 00:24:09.517 queued_requests: 1 00:24:09.517 ======================================================== 00:24:09.517 Latency(us) 00:24:09.517 Device Information : IOPS MiB/s Average min max 00:24:09.517 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 925.21 231.30 142590.14 79288.84 213527.10 00:24:09.517 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 974.12 243.53 133158.69 72459.51 187607.45 00:24:09.517 ======================================================== 00:24:09.517 Total : 1899.33 474.83 137752.99 72459.51 213527.10 00:24:09.517 00:24:09.517 15:20:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:24:09.517 15:20:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:09.777 15:20:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:24:09.777 15:20:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:24:09.777 15:20:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:24:09.777 15:20:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:09.777 15:20:01 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:24:09.778 15:20:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:09.778 15:20:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:24:09.778 15:20:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:09.778 15:20:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:09.778 rmmod nvme_tcp 00:24:09.778 rmmod nvme_fabrics 00:24:09.778 rmmod nvme_keyring 00:24:09.778 15:20:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:09.778 15:20:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:24:09.778 15:20:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:24:09.778 15:20:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 346948 ']' 00:24:09.778 15:20:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 346948 00:24:09.778 15:20:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@950 -- # '[' -z 346948 ']' 00:24:09.778 15:20:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # kill -0 346948 00:24:09.778 15:20:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # uname 00:24:09.778 15:20:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:09.778 15:20:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 346948 00:24:09.778 15:20:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:09.778 15:20:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:09.778 15:20:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 346948' 00:24:09.778 killing process with pid 346948 00:24:09.778 15:20:01 nvmf_tcp.nvmf_host.nvmf_perf -- 
common/autotest_common.sh@969 -- # kill 346948 00:24:09.778 15:20:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@974 -- # wait 346948 00:24:12.320 15:20:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:12.320 15:20:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:12.320 15:20:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:12.320 15:20:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:12.320 15:20:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:12.320 15:20:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:12.320 15:20:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:12.320 15:20:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:14.236 15:20:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:14.236 00:24:14.236 real 0m23.384s 00:24:14.236 user 0m57.984s 00:24:14.236 sys 0m7.279s 00:24:14.236 15:20:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:14.236 15:20:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:14.236 ************************************ 00:24:14.236 END TEST nvmf_perf 00:24:14.236 ************************************ 00:24:14.236 15:20:06 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:14.236 15:20:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:14.236 15:20:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:14.236 15:20:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.236 
************************************ 00:24:14.236 START TEST nvmf_fio_host 00:24:14.236 ************************************ 00:24:14.236 15:20:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:14.236 * Looking for test storage... 00:24:14.236 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:14.236 15:20:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:14.236 15:20:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:14.236 15:20:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:14.236 15:20:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:14.236 15:20:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:14.236 15:20:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:14.236 15:20:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:14.236 15:20:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:14.236 15:20:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:14.236 15:20:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:14.236 15:20:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:24:14.236 15:20:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:14.236 15:20:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:14.236 15:20:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:14.236 15:20:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:14.236 15:20:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:14.236 15:20:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:14.236 15:20:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:14.236 15:20:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:14.236 15:20:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:14.236 15:20:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:14.236 15:20:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:14.236 15:20:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:14.236 15:20:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:14.237 15:20:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:14.237 15:20:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:14.237 15:20:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:14.237 15:20:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:14.237 15:20:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:14.237 15:20:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:14.237 15:20:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:14.237 15:20:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:14.237 15:20:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 
-- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:14.237 15:20:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:14.237 15:20:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:14.237 15:20:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:14.237 15:20:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:24:14.237 15:20:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:14.237 15:20:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:14.237 15:20:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:14.237 15:20:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:14.237 15:20:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:14.237 15:20:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:14.237 15:20:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:14.237 15:20:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:14.237 15:20:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:14.237 15:20:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:24:14.237 15:20:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:14.237 15:20:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # 
trap nvmftestfini SIGINT SIGTERM EXIT 00:24:14.237 15:20:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:14.237 15:20:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:14.237 15:20:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:14.237 15:20:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:14.237 15:20:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:14.237 15:20:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:14.237 15:20:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:14.237 15:20:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:14.237 15:20:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:24:14.237 15:20:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.873 15:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:20.873 15:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:24:20.873 15:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:20.873 15:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:20.873 15:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:20.873 15:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:20.873 15:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:20.873 15:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:24:20.873 15:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@295 -- 
# local -ga net_devs 00:24:20.873 15:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:24:20.873 15:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:24:20.873 15:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:24:20.873 15:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:24:20.873 15:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:24:20.873 15:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:24:20.873 15:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:20.873 15:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:20.873 15:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:20.873 15:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:20.873 15:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:20.873 15:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:20.873 15:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:20.873 15:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:20.873 15:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:20.873 15:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:20.873 15:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:20.873 15:20:12 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:20.873 15:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:20.873 15:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:20.873 15:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:20.873 15:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:20.873 15:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:20.873 15:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:20.873 15:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:20.873 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:20.873 15:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:20.873 15:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:20.873 15:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:20.873 15:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:20.873 15:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:20.873 15:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:20.873 15:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:20.873 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:20.873 15:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:20.873 15:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:20.873 15:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:24:20.873 15:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:20.873 15:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:20.873 15:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:20.873 15:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:20.873 15:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:20.873 15:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:20.873 15:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:20.873 15:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:20.874 15:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:20.874 15:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:20.874 15:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:20.874 15:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:20.874 15:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:20.874 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:20.874 15:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:20.874 15:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:20.874 15:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:20.874 15:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:20.874 15:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host 
-- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:20.874 15:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:20.874 15:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:20.874 15:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:20.874 15:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:20.874 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:20.874 15:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:20.874 15:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:20.874 15:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 00:24:20.874 15:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:20.874 15:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:20.874 15:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:20.874 15:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:20.874 15:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:20.874 15:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:20.874 15:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:20.874 15:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:20.874 15:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:20.874 15:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:20.874 15:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:20.874 15:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:20.874 15:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:20.874 15:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:20.874 15:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:20.874 15:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:21.135 15:20:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:21.135 15:20:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:21.135 15:20:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:21.135 15:20:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:21.135 15:20:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:21.135 15:20:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:21.135 15:20:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:21.135 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:21.135 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.517 ms 00:24:21.135 00:24:21.135 --- 10.0.0.2 ping statistics --- 00:24:21.135 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:21.135 rtt min/avg/max/mdev = 0.517/0.517/0.517/0.000 ms 00:24:21.135 15:20:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:21.135 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:21.135 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.526 ms 00:24:21.135 00:24:21.135 --- 10.0.0.1 ping statistics --- 00:24:21.135 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:21.135 rtt min/avg/max/mdev = 0.526/0.526/0.526/0.000 ms 00:24:21.135 15:20:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:21.135 15:20:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:24:21.135 15:20:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:21.135 15:20:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:21.135 15:20:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:21.135 15:20:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:21.136 15:20:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:21.136 15:20:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:21.136 15:20:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:21.397 15:20:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:24:21.397 15:20:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:24:21.397 15:20:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:21.397 
15:20:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.397 15:20:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=353833 00:24:21.397 15:20:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:21.397 15:20:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:21.397 15:20:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 353833 00:24:21.397 15:20:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@831 -- # '[' -z 353833 ']' 00:24:21.397 15:20:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:21.397 15:20:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:21.397 15:20:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:21.397 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:21.397 15:20:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:21.397 15:20:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.397 [2024-07-25 15:20:13.392299] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:24:21.397 [2024-07-25 15:20:13.392368] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:21.397 EAL: No free 2048 kB hugepages reported on node 1 00:24:21.397 [2024-07-25 15:20:13.464299] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:21.397 [2024-07-25 15:20:13.540294] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:21.397 [2024-07-25 15:20:13.540338] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:21.397 [2024-07-25 15:20:13.540346] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:21.397 [2024-07-25 15:20:13.540352] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:21.397 [2024-07-25 15:20:13.540358] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:21.397 [2024-07-25 15:20:13.540494] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:21.397 [2024-07-25 15:20:13.540613] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:21.397 [2024-07-25 15:20:13.540771] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:21.397 [2024-07-25 15:20:13.540772] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:22.339 15:20:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:22.339 15:20:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # return 0 00:24:22.339 15:20:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:22.339 [2024-07-25 15:20:14.313416] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:22.339 15:20:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:24:22.339 15:20:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:22.339 15:20:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.339 15:20:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:24:22.600 Malloc1 00:24:22.600 15:20:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:22.601 15:20:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:22.863 15:20:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:22.863 [2024-07-25 15:20:15.050934] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:23.125 15:20:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:23.125 15:20:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:24:23.125 15:20:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:23.125 15:20:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:23.125 15:20:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:24:23.125 15:20:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:23.125 15:20:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:24:23.125 15:20:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:23.126 15:20:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:24:23.126 15:20:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:24:23.126 15:20:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:23.126 15:20:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:23.126 15:20:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:24:23.126 15:20:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:23.126 15:20:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:23.126 15:20:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:23.126 15:20:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:23.126 15:20:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:23.126 15:20:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:24:23.126 15:20:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:23.126 15:20:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:23.126 15:20:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:23.126 15:20:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:23.126 15:20:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:23.728 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:24:23.728 fio-3.35 
00:24:23.728 Starting 1 thread 00:24:23.728 EAL: No free 2048 kB hugepages reported on node 1 00:24:26.273 00:24:26.273 test: (groupid=0, jobs=1): err= 0: pid=354527: Thu Jul 25 15:20:18 2024 00:24:26.273 read: IOPS=13.4k, BW=52.3MiB/s (54.9MB/s)(105MiB/2004msec) 00:24:26.273 slat (usec): min=2, max=299, avg= 2.16, stdev= 2.50 00:24:26.273 clat (usec): min=2969, max=11586, avg=5448.42, stdev=1050.71 00:24:26.273 lat (usec): min=2971, max=11588, avg=5450.58, stdev=1050.79 00:24:26.273 clat percentiles (usec): 00:24:26.273 | 1.00th=[ 3851], 5.00th=[ 4228], 10.00th=[ 4490], 20.00th=[ 4686], 00:24:26.273 | 30.00th=[ 4883], 40.00th=[ 5014], 50.00th=[ 5145], 60.00th=[ 5342], 00:24:26.273 | 70.00th=[ 5604], 80.00th=[ 6063], 90.00th=[ 7046], 95.00th=[ 7570], 00:24:26.273 | 99.00th=[ 8979], 99.50th=[ 9503], 99.90th=[10683], 99.95th=[10945], 00:24:26.273 | 99.99th=[11600] 00:24:26.273 bw ( KiB/s): min=45696, max=56672, per=99.95%, avg=53564.00, stdev=5278.13, samples=4 00:24:26.273 iops : min=11424, max=14168, avg=13391.00, stdev=1319.53, samples=4 00:24:26.273 write: IOPS=13.4k, BW=52.3MiB/s (54.8MB/s)(105MiB/2004msec); 0 zone resets 00:24:26.273 slat (usec): min=2, max=269, avg= 2.23, stdev= 1.81 00:24:26.273 clat (usec): min=1967, max=7947, avg=4062.74, stdev=768.34 00:24:26.273 lat (usec): min=1970, max=7949, avg=4064.98, stdev=768.47 00:24:26.273 clat percentiles (usec): 00:24:26.273 | 1.00th=[ 2606], 5.00th=[ 3032], 10.00th=[ 3228], 20.00th=[ 3523], 00:24:26.273 | 30.00th=[ 3720], 40.00th=[ 3851], 50.00th=[ 3982], 60.00th=[ 4080], 00:24:26.273 | 70.00th=[ 4228], 80.00th=[ 4424], 90.00th=[ 5211], 95.00th=[ 5800], 00:24:26.273 | 99.00th=[ 6325], 99.50th=[ 6587], 99.90th=[ 7308], 99.95th=[ 7570], 00:24:26.273 | 99.99th=[ 7832] 00:24:26.273 bw ( KiB/s): min=46056, max=56368, per=99.98%, avg=53534.00, stdev=4993.24, samples=4 00:24:26.273 iops : min=11514, max=14092, avg=13383.50, stdev=1248.31, samples=4 00:24:26.273 lat (msec) : 2=0.01%, 4=27.15%, 10=72.74%, 20=0.11% 
00:24:26.273 cpu : usr=66.85%, sys=25.56%, ctx=14, majf=0, minf=6 00:24:26.273 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:24:26.273 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:26.273 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:26.273 issued rwts: total=26849,26827,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:26.273 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:26.273 00:24:26.273 Run status group 0 (all jobs): 00:24:26.273 READ: bw=52.3MiB/s (54.9MB/s), 52.3MiB/s-52.3MiB/s (54.9MB/s-54.9MB/s), io=105MiB (110MB), run=2004-2004msec 00:24:26.273 WRITE: bw=52.3MiB/s (54.8MB/s), 52.3MiB/s-52.3MiB/s (54.8MB/s-54.8MB/s), io=105MiB (110MB), run=2004-2004msec 00:24:26.273 15:20:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:26.273 15:20:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:26.273 15:20:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:24:26.273 15:20:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:26.273 15:20:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:24:26.273 15:20:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:26.273 15:20:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 
00:24:26.273 15:20:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:24:26.273 15:20:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:26.273 15:20:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:26.273 15:20:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:24:26.273 15:20:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:26.273 15:20:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:26.273 15:20:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:26.273 15:20:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:26.274 15:20:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:26.274 15:20:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:24:26.274 15:20:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:26.274 15:20:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:26.274 15:20:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:26.274 15:20:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:26.274 15:20:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 
trsvcid=4420 ns=1' 00:24:26.274 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:24:26.274 fio-3.35 00:24:26.274 Starting 1 thread 00:24:26.534 EAL: No free 2048 kB hugepages reported on node 1 00:24:29.080 00:24:29.080 test: (groupid=0, jobs=1): err= 0: pid=355246: Thu Jul 25 15:20:20 2024 00:24:29.080 read: IOPS=7928, BW=124MiB/s (130MB/s)(249MiB/2006msec) 00:24:29.080 slat (usec): min=3, max=110, avg= 3.82, stdev= 1.90 00:24:29.080 clat (usec): min=2207, max=36224, avg=9590.46, stdev=2890.11 00:24:29.080 lat (usec): min=2210, max=36228, avg=9594.27, stdev=2890.43 00:24:29.080 clat percentiles (usec): 00:24:29.080 | 1.00th=[ 5014], 5.00th=[ 5932], 10.00th=[ 6521], 20.00th=[ 7373], 00:24:29.080 | 30.00th=[ 7898], 40.00th=[ 8586], 50.00th=[ 9110], 60.00th=[ 9765], 00:24:29.080 | 70.00th=[10552], 80.00th=[11600], 90.00th=[12649], 95.00th=[15008], 00:24:29.080 | 99.00th=[19006], 99.50th=[22938], 99.90th=[27395], 99.95th=[27657], 00:24:29.080 | 99.99th=[28181] 00:24:29.080 bw ( KiB/s): min=61600, max=75328, per=51.74%, avg=65640.00, stdev=6487.42, samples=4 00:24:29.080 iops : min= 3850, max= 4708, avg=4102.50, stdev=405.46, samples=4 00:24:29.080 write: IOPS=4616, BW=72.1MiB/s (75.6MB/s)(134MiB/1863msec); 0 zone resets 00:24:29.080 slat (usec): min=40, max=593, avg=42.41, stdev=12.22 00:24:29.080 clat (usec): min=2430, max=28832, avg=11314.16, stdev=2313.74 00:24:29.080 lat (usec): min=2470, max=28876, avg=11356.57, stdev=2317.44 00:24:29.080 clat percentiles (usec): 00:24:29.080 | 1.00th=[ 7635], 5.00th=[ 8455], 10.00th=[ 9110], 20.00th=[ 9634], 00:24:29.080 | 30.00th=[10028], 40.00th=[10552], 50.00th=[10945], 60.00th=[11469], 00:24:29.080 | 70.00th=[11994], 80.00th=[12649], 90.00th=[13698], 95.00th=[15008], 00:24:29.080 | 99.00th=[22152], 99.50th=[23725], 99.90th=[25035], 99.95th=[25035], 00:24:29.080 | 99.99th=[28705] 00:24:29.080 bw ( KiB/s): min=64192, max=78912, per=92.42%, avg=68264.00, 
stdev=7126.52, samples=4 00:24:29.080 iops : min= 4012, max= 4932, avg=4266.50, stdev=445.41, samples=4 00:24:29.080 lat (msec) : 4=0.11%, 10=50.69%, 20=48.23%, 50=0.98% 00:24:29.081 cpu : usr=79.60%, sys=15.66%, ctx=24, majf=0, minf=15 00:24:29.081 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:24:29.081 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:29.081 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:29.081 issued rwts: total=15905,8600,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:29.081 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:29.081 00:24:29.081 Run status group 0 (all jobs): 00:24:29.081 READ: bw=124MiB/s (130MB/s), 124MiB/s-124MiB/s (130MB/s-130MB/s), io=249MiB (261MB), run=2006-2006msec 00:24:29.081 WRITE: bw=72.1MiB/s (75.6MB/s), 72.1MiB/s-72.1MiB/s (75.6MB/s-75.6MB/s), io=134MiB (141MB), run=1863-1863msec 00:24:29.081 15:20:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:29.081 15:20:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:24:29.081 15:20:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:24:29.081 15:20:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:24:29.081 15:20:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:24:29.081 15:20:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:29.081 15:20:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:24:29.081 15:20:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:29.081 15:20:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:24:29.081 15:20:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in 
{1..20} 00:24:29.081 15:20:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:29.081 rmmod nvme_tcp 00:24:29.081 rmmod nvme_fabrics 00:24:29.081 rmmod nvme_keyring 00:24:29.081 15:20:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:29.081 15:20:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:24:29.081 15:20:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:24:29.081 15:20:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 353833 ']' 00:24:29.081 15:20:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 353833 00:24:29.081 15:20:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@950 -- # '[' -z 353833 ']' 00:24:29.081 15:20:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # kill -0 353833 00:24:29.081 15:20:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # uname 00:24:29.081 15:20:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:29.081 15:20:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 353833 00:24:29.081 15:20:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:29.081 15:20:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:29.081 15:20:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 353833' 00:24:29.081 killing process with pid 353833 00:24:29.081 15:20:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@969 -- # kill 353833 00:24:29.081 15:20:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@974 -- # wait 353833 00:24:29.342 15:20:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:29.342 15:20:21 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:29.342 15:20:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:29.342 15:20:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:29.342 15:20:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:29.342 15:20:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:29.342 15:20:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:29.342 15:20:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:31.258 15:20:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:31.258 00:24:31.258 real 0m17.270s 00:24:31.258 user 1m5.284s 00:24:31.258 sys 0m7.454s 00:24:31.258 15:20:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:31.258 15:20:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.258 ************************************ 00:24:31.258 END TEST nvmf_fio_host 00:24:31.258 ************************************ 00:24:31.258 15:20:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:31.258 15:20:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:31.258 15:20:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:31.258 15:20:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.258 ************************************ 00:24:31.258 START TEST nvmf_failover 00:24:31.258 ************************************ 00:24:31.258 15:20:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:31.519 * Looking for test storage... 00:24:31.519 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:31.519 15:20:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:31.519 15:20:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:24:31.519 15:20:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:31.519 15:20:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:31.519 15:20:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:31.519 15:20:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:31.519 15:20:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:31.519 15:20:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:31.519 15:20:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:31.519 15:20:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:31.519 15:20:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:31.519 15:20:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:31.519 15:20:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:31.519 15:20:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:31.519 15:20:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:31.519 15:20:23 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:31.519 15:20:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:31.519 15:20:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:31.519 15:20:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:31.519 15:20:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:31.519 15:20:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:31.519 15:20:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:31.519 15:20:23 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.519 15:20:23 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.519 15:20:23 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.519 15:20:23 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:24:31.519 15:20:23 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.519 15:20:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:24:31.519 15:20:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:31.519 15:20:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:31.519 15:20:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:31.519 15:20:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:31.519 15:20:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:31.519 15:20:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:31.519 15:20:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:31.519 15:20:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:31.519 15:20:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:31.519 15:20:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:31.519 15:20:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:31.519 15:20:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
00:24:31.519 15:20:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:24:31.519 15:20:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:31.519 15:20:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:31.519 15:20:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:31.519 15:20:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:31.519 15:20:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:31.519 15:20:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:31.519 15:20:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:31.519 15:20:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:31.519 15:20:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:31.519 15:20:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:31.519 15:20:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:24:31.519 15:20:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:39.668 15:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:39.668 15:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:24:39.668 15:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:39.668 15:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:39.668 15:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:39.668 15:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 
00:24:39.668 15:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:39.668 15:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:24:39.668 15:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:39.668 15:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:24:39.668 15:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:24:39.668 15:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:24:39.668 15:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:24:39.668 15:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:24:39.668 15:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:24:39.668 15:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:39.668 15:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:39.668 15:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:39.668 15:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:39.668 15:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:39.668 15:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:39.668 15:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:39.668 15:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:39.668 15:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:39.668 
15:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:39.668 15:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:39.668 15:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:39.668 15:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:39.668 15:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:39.668 15:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:39.668 15:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:39.668 15:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:39.669 15:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:39.669 15:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:39.669 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:39.669 15:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:39.669 15:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:39.669 15:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:39.669 15:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:39.669 15:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:39.669 15:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:39.669 15:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:39.669 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:39.669 15:20:30 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:39.669 15:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:39.669 15:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:39.669 15:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:39.669 15:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:39.669 15:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:39.669 15:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:39.669 15:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:39.669 15:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:39.669 15:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:39.669 15:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:39.669 15:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:39.669 15:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:39.669 15:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:39.669 15:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:39.669 15:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:39.669 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:39.669 15:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:39.669 15:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
00:24:39.669 15:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:39.669 15:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:39.669 15:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:39.669 15:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:39.669 15:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:39.669 15:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:39.669 15:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:39.669 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:39.669 15:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:39.669 15:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:39.669 15:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:24:39.669 15:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:39.669 15:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:39.669 15:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:39.669 15:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:39.669 15:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:39.669 15:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:39.669 15:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:39.669 15:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 
00:24:39.669 15:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:39.669 15:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:39.669 15:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:39.669 15:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:39.669 15:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:39.669 15:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:39.669 15:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:39.669 15:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:39.669 15:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:39.669 15:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:39.669 15:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:39.669 15:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:39.669 15:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:39.669 15:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:39.669 15:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:39.669 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:39.669 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.588 ms 00:24:39.669 00:24:39.669 --- 10.0.0.2 ping statistics --- 00:24:39.669 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:39.669 rtt min/avg/max/mdev = 0.588/0.588/0.588/0.000 ms 00:24:39.669 15:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:39.669 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:39.669 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.381 ms 00:24:39.669 00:24:39.669 --- 10.0.0.1 ping statistics --- 00:24:39.669 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:39.669 rtt min/avg/max/mdev = 0.381/0.381/0.381/0.000 ms 00:24:39.669 15:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:39.669 15:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:24:39.669 15:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:39.669 15:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:39.669 15:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:39.669 15:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:39.669 15:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:39.669 15:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:39.669 15:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:39.669 15:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:24:39.669 15:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:39.669 15:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 
00:24:39.669 15:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:39.669 15:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=359693 00:24:39.669 15:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 359693 00:24:39.669 15:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:39.669 15:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 359693 ']' 00:24:39.669 15:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:39.669 15:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:39.669 15:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:39.669 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:39.669 15:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:39.669 15:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:39.669 [2024-07-25 15:20:30.794051] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:24:39.669 [2024-07-25 15:20:30.794114] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:39.669 EAL: No free 2048 kB hugepages reported on node 1 00:24:39.669 [2024-07-25 15:20:30.882823] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:39.669 [2024-07-25 15:20:30.976742] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:39.669 [2024-07-25 15:20:30.976801] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:39.669 [2024-07-25 15:20:30.976809] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:39.669 [2024-07-25 15:20:30.976816] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:39.669 [2024-07-25 15:20:30.976823] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:39.669 [2024-07-25 15:20:30.976958] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:39.669 [2024-07-25 15:20:30.977128] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:39.669 [2024-07-25 15:20:30.977129] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:39.670 15:20:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:39.670 15:20:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:24:39.670 15:20:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:39.670 15:20:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:39.670 15:20:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:39.670 15:20:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:39.670 15:20:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:39.670 [2024-07-25 15:20:31.763008] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:39.670 15:20:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:24:39.931 Malloc0 00:24:39.931 15:20:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:40.192 15:20:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:40.192 15:20:32 nvmf_tcp.nvmf_host.nvmf_failover -- 
host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:40.452 [2024-07-25 15:20:32.447042] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:40.452 15:20:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:40.452 [2024-07-25 15:20:32.619495] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:40.714 15:20:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:40.714 [2024-07-25 15:20:32.792018] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:24:40.714 15:20:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=360188 00:24:40.714 15:20:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:40.714 15:20:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:24:40.714 15:20:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 360188 /var/tmp/bdevperf.sock 00:24:40.714 15:20:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 360188 ']' 00:24:40.714 15:20:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:40.714 15:20:32 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:40.714 15:20:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:40.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:40.714 15:20:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:40.714 15:20:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:41.656 15:20:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:41.656 15:20:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:24:41.656 15:20:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:41.916 NVMe0n1 00:24:41.916 15:20:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:42.206 00:24:42.206 15:20:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=360390 00:24:42.206 15:20:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:42.206 15:20:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:24:43.149 15:20:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t 
tcp -a 10.0.0.2 -s 4420 00:24:43.410 15:20:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:24:46.713 15:20:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:46.713 00:24:46.713 15:20:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:46.713 [2024-07-25 15:20:38.885365] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa06990 is same with the state(5) to be set 00:24:46.713 [... previous *ERROR* line repeated verbatim with timestamps through 2024-07-25 15:20:38.885846 ...] 00:24:46.974 15:20:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:24:50.276 15:20:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:50.276
[2024-07-25 15:20:42.066151] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:50.276 15:20:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:24:51.220 15:20:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:51.220 [2024-07-25 15:20:43.238804] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa07870 is same with the state(5) to be set 00:24:51.220 [... previous *ERROR* line repeated verbatim with timestamps through 2024-07-25 15:20:43.239359 ...] 00:24:51.220 15:20:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 360390 00:24:57.822 0 00:24:57.822 15:20:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 360188 00:24:57.822 15:20:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 360188 ']' 00:24:57.822 15:20:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 360188 00:24:57.822 15:20:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:24:57.822 15:20:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:57.822 15:20:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 360188 00:24:57.822 15:20:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:57.822 15:20:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:57.822 15:20:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 360188' 00:24:57.822 killing process with pid 360188 00:24:57.822 15:20:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 360188 00:24:57.822 15:20:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 360188 00:24:57.822 15:20:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:57.822 [2024-07-25 15:20:32.871423] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:24:57.822 [2024-07-25 15:20:32.871481] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid360188 ] 00:24:57.822 EAL: No free 2048 kB hugepages reported on node 1 00:24:57.822 [2024-07-25 15:20:32.930924] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:57.822 [2024-07-25 15:20:32.995048] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:57.822 Running I/O for 15 seconds... 00:24:57.822 [2024-07-25 15:20:35.324483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:96496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.822 [2024-07-25 15:20:35.324531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.822 [2024-07-25 15:20:35.324549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:96504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.822 [2024-07-25 15:20:35.324557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.822 [2024-07-25 15:20:35.324567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:96512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.822 [2024-07-25 15:20:35.324575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.822 [2024-07-25 15:20:35.324584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:96520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.822 [2024-07-25 15:20:35.324591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:24:57.822 [2024-07-25 15:20:35.324600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:96528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.822 [2024-07-25 15:20:35.324607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.822 [2024-07-25 15:20:35.324616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:96536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.822 [2024-07-25 15:20:35.324624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.822 [2024-07-25 15:20:35.324633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:96544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.822 [2024-07-25 15:20:35.324641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.822 [2024-07-25 15:20:35.324650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:96552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.822 [2024-07-25 15:20:35.324657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.822 [2024-07-25 15:20:35.324667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:96560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.822 [2024-07-25 15:20:35.324674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.822 [2024-07-25 15:20:35.324684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:96568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.822 [2024-07-25 15:20:35.324691] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.822 [2024-07-25 15:20:35.324700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:96576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.822 [2024-07-25 15:20:35.324707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.822 [2024-07-25 15:20:35.324724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:96584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.822 [2024-07-25 15:20:35.324731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.822 [2024-07-25 15:20:35.324740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:96592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.822 [2024-07-25 15:20:35.324748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.822 [2024-07-25 15:20:35.324758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:96600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.822 [2024-07-25 15:20:35.324766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.822 [2024-07-25 15:20:35.324777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:96608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.822 [2024-07-25 15:20:35.324785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.822 [2024-07-25 15:20:35.324795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 
nsid:1 lba:96616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.822 [2024-07-25 15:20:35.324802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.822 [2024-07-25 15:20:35.324811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:96624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.822 [2024-07-25 15:20:35.324819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.822 [2024-07-25 15:20:35.324828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:96632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.822 [2024-07-25 15:20:35.324834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.822 [2024-07-25 15:20:35.324843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:96640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.822 [2024-07-25 15:20:35.324852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.822 [2024-07-25 15:20:35.324862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:96648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.822 [2024-07-25 15:20:35.324870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.823 [2024-07-25 15:20:35.324880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:96656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.823 [2024-07-25 15:20:35.324887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.823 
[2024-07-25 15:20:35.324897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:96664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.823 [2024-07-25 15:20:35.324904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.823 [2024-07-25 15:20:35.324915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:96672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.823 [2024-07-25 15:20:35.324924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.823 [2024-07-25 15:20:35.324934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:96680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.823 [2024-07-25 15:20:35.324946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.823 [2024-07-25 15:20:35.324955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:96688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.823 [2024-07-25 15:20:35.324964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.823 [2024-07-25 15:20:35.324974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:96696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.823 [2024-07-25 15:20:35.324981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.823 [2024-07-25 15:20:35.324990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:96704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.823 [2024-07-25 15:20:35.324997] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.823 [2024-07-25 15:20:35.325007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:96712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.823 [2024-07-25 15:20:35.325014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.823 [2024-07-25 15:20:35.325023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:96720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.823 [2024-07-25 15:20:35.325031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.823 [2024-07-25 15:20:35.325040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:96728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.823 [2024-07-25 15:20:35.325048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.823 [2024-07-25 15:20:35.325057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:96736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.823 [2024-07-25 15:20:35.325064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.823 [2024-07-25 15:20:35.325073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:96744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.823 [2024-07-25 15:20:35.325080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.823 [2024-07-25 15:20:35.325089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 
lba:96752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.823 [2024-07-25 15:20:35.325096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.823 [2024-07-25 15:20:35.325106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:96760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.823 [2024-07-25 15:20:35.325113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.823 [2024-07-25 15:20:35.325123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:95808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.823 [2024-07-25 15:20:35.325130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.823 [2024-07-25 15:20:35.325139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:95816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.823 [2024-07-25 15:20:35.325146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.823 [2024-07-25 15:20:35.325156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:95824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.823 [2024-07-25 15:20:35.325164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.823 [2024-07-25 15:20:35.325173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:95832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.823 [2024-07-25 15:20:35.325181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.823 
[2024-07-25 15:20:35.325191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:95840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.823 [2024-07-25 15:20:35.325198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.823 [2024-07-25 15:20:35.325213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:95848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.823 [2024-07-25 15:20:35.325221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.823 [2024-07-25 15:20:35.325230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:95856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.823 [2024-07-25 15:20:35.325237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.823 [2024-07-25 15:20:35.325246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:95864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.823 [2024-07-25 15:20:35.325253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.823 [2024-07-25 15:20:35.325263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:95872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.823 [2024-07-25 15:20:35.325270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.823 [2024-07-25 15:20:35.325279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:95880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.823 [2024-07-25 15:20:35.325286] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.823 [2024-07-25 15:20:35.325295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:95888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.823 [2024-07-25 15:20:35.325303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.823 [2024-07-25 15:20:35.325312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:95896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.823 [2024-07-25 15:20:35.325319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.823 [2024-07-25 15:20:35.325331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:95904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.823 [2024-07-25 15:20:35.325338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.823 [2024-07-25 15:20:35.325347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:95912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.823 [2024-07-25 15:20:35.325355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.823 [2024-07-25 15:20:35.325364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:95920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.823 [2024-07-25 15:20:35.325374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.823 [2024-07-25 15:20:35.325383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 
lba:96768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.823 [2024-07-25 15:20:35.325390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.823 [2024-07-25 15:20:35.325399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:95928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.823 [2024-07-25 15:20:35.325407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.823 [2024-07-25 15:20:35.325418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:95936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.823 [2024-07-25 15:20:35.325425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.823 [2024-07-25 15:20:35.325434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:95944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.823 [2024-07-25 15:20:35.325441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.823 [2024-07-25 15:20:35.325451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:95952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.823 [2024-07-25 15:20:35.325458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.823 [2024-07-25 15:20:35.325467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:95960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.823 [2024-07-25 15:20:35.325475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.823 
[2024-07-25 15:20:35.325484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:95968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.823 [2024-07-25 15:20:35.325492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.823 [2024-07-25 15:20:35.325501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:95976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.823 [2024-07-25 15:20:35.325508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.823 [2024-07-25 15:20:35.325517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:95984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.823 [2024-07-25 15:20:35.325525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.824 [2024-07-25 15:20:35.325534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:95992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.824 [2024-07-25 15:20:35.325542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.824 [2024-07-25 15:20:35.325551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:96000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.824 [2024-07-25 15:20:35.325558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.824 [2024-07-25 15:20:35.325567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:96008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.824 [2024-07-25 15:20:35.325575] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.824 [2024-07-25 15:20:35.325585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:96016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.824 [2024-07-25 15:20:35.325593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.824 [2024-07-25 15:20:35.325603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:96024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.824 [2024-07-25 15:20:35.325610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.824 [2024-07-25 15:20:35.325619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:96032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.824 [2024-07-25 15:20:35.325626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.824 [2024-07-25 15:20:35.325636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:96040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.824 [2024-07-25 15:20:35.325643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.824 [2024-07-25 15:20:35.325653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:96048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.824 [2024-07-25 15:20:35.325660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.824 [2024-07-25 15:20:35.325669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 
lba:96776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.824 [2024-07-25 15:20:35.325677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.824 [2024-07-25 15:20:35.325687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:96784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.824 [2024-07-25 15:20:35.325694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.824 [2024-07-25 15:20:35.325703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:96792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.824 [2024-07-25 15:20:35.325710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.824 [2024-07-25 15:20:35.325719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:96800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.824 [2024-07-25 15:20:35.325727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.824 [2024-07-25 15:20:35.325737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:96808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.824 [2024-07-25 15:20:35.325744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.824 [2024-07-25 15:20:35.325753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:96816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.824 [2024-07-25 15:20:35.325760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.824 [2024-07-25 
15:20:35.325770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:96056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.824 [2024-07-25 15:20:35.325776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.824 [2024-07-25 15:20:35.325786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:96064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.824 [2024-07-25 15:20:35.325794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.824 [2024-07-25 15:20:35.325805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:96072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.824 [2024-07-25 15:20:35.325812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.824 [2024-07-25 15:20:35.325821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:96080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.824 [2024-07-25 15:20:35.325828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.824 [2024-07-25 15:20:35.325838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:96088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.824 [2024-07-25 15:20:35.325845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.824 [2024-07-25 15:20:35.325855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:96096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.824 [2024-07-25 15:20:35.325862] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.824 [2024-07-25 15:20:35.325871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:96104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.824 [2024-07-25 15:20:35.325878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.824 [2024-07-25 15:20:35.325888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:96112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.824 [2024-07-25 15:20:35.325896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.824 [2024-07-25 15:20:35.325905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:96120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.824 [2024-07-25 15:20:35.325912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.824 [2024-07-25 15:20:35.325921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:96128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.824 [2024-07-25 15:20:35.325928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.824 [2024-07-25 15:20:35.325938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:96136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.824 [2024-07-25 15:20:35.325945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.824 [2024-07-25 15:20:35.325955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:96144 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:24:57.824 [2024-07-25 15:20:35.325962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.824 [2024-07-25 15:20:35.325971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:96152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.824 [2024-07-25 15:20:35.325978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.824 [2024-07-25 15:20:35.325987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:96160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.824 [2024-07-25 15:20:35.325995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.824 [2024-07-25 15:20:35.326004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:96168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.824 [2024-07-25 15:20:35.326013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.824 [2024-07-25 15:20:35.326023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:96176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.824 [2024-07-25 15:20:35.326029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.824 [2024-07-25 15:20:35.326039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:96184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.824 [2024-07-25 15:20:35.326046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.824 [2024-07-25 15:20:35.326056] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:96192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.824 [2024-07-25 15:20:35.326063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.824 [2024-07-25 15:20:35.326072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:96200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.824 [2024-07-25 15:20:35.326079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.824 [2024-07-25 15:20:35.326089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:96208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.824 [2024-07-25 15:20:35.326096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.824 [2024-07-25 15:20:35.326106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:96216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.824 [2024-07-25 15:20:35.326113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.824 [2024-07-25 15:20:35.326122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:96224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.824 [2024-07-25 15:20:35.326130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.824 [2024-07-25 15:20:35.326141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:96232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.824 [2024-07-25 15:20:35.326149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.824 [2024-07-25 15:20:35.326158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:96824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.825 [2024-07-25 15:20:35.326165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.825 [2024-07-25 15:20:35.326175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:96240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.825 [2024-07-25 15:20:35.326181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.825 [2024-07-25 15:20:35.326192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:96248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.825 [2024-07-25 15:20:35.326204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.825 [2024-07-25 15:20:35.326215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:96256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.825 [2024-07-25 15:20:35.326222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.825 [2024-07-25 15:20:35.326232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:96264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.825 [2024-07-25 15:20:35.326240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.825 [2024-07-25 15:20:35.326249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:96272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:57.825 [2024-07-25 15:20:35.326257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.825 [2024-07-25 15:20:35.326266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:96280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.825 [2024-07-25 15:20:35.326273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.825 [2024-07-25 15:20:35.326283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:96288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.825 [2024-07-25 15:20:35.326289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.825 [2024-07-25 15:20:35.326298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:96296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.825 [2024-07-25 15:20:35.326306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.825 [2024-07-25 15:20:35.326316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:96304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.825 [2024-07-25 15:20:35.326323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.825 [2024-07-25 15:20:35.326332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:96312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.825 [2024-07-25 15:20:35.326339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.825 [2024-07-25 15:20:35.326348] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:96320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.825 [2024-07-25 15:20:35.326356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.825 [2024-07-25 15:20:35.326365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:96328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.825 [2024-07-25 15:20:35.326373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.825 [2024-07-25 15:20:35.326381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:96336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.825 [2024-07-25 15:20:35.326388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.825 [2024-07-25 15:20:35.326398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:96344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.825 [2024-07-25 15:20:35.326405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.825 [2024-07-25 15:20:35.326414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:96352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.825 [2024-07-25 15:20:35.326422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.825 [2024-07-25 15:20:35.326431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:96360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.825 [2024-07-25 15:20:35.326439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.825 [2024-07-25 15:20:35.326448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:96368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.825 [2024-07-25 15:20:35.326456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.825 [2024-07-25 15:20:35.326465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:96376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.825 [2024-07-25 15:20:35.326472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.825 [2024-07-25 15:20:35.326481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:96384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.825 [2024-07-25 15:20:35.326488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.825 [2024-07-25 15:20:35.326497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:96392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.825 [2024-07-25 15:20:35.326505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.825 [2024-07-25 15:20:35.326514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:96400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.825 [2024-07-25 15:20:35.326521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.825 [2024-07-25 15:20:35.326530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:96408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:57.825 [2024-07-25 15:20:35.326537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.825 [2024-07-25 15:20:35.326546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:96416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.825 [2024-07-25 15:20:35.326554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.825 [2024-07-25 15:20:35.326564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:96424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.825 [2024-07-25 15:20:35.326571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.825 [2024-07-25 15:20:35.326580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:96432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.825 [2024-07-25 15:20:35.326587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.825 [2024-07-25 15:20:35.326597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:96440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.825 [2024-07-25 15:20:35.326603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.825 [2024-07-25 15:20:35.326614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:96448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.825 [2024-07-25 15:20:35.326621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.825 [2024-07-25 15:20:35.326631] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.825 [2024-07-25 15:20:35.326638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.825 [2024-07-25 15:20:35.326647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:96464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.825 [2024-07-25 15:20:35.326655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.825 [2024-07-25 15:20:35.326665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:96472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.825 [2024-07-25 15:20:35.326673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.825 [2024-07-25 15:20:35.326682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:96480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.825 [2024-07-25 15:20:35.326689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.825 [2024-07-25 15:20:35.326697] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe552c0 is same with the state(5) to be set 00:24:57.825 [2024-07-25 15:20:35.326706] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:57.825 [2024-07-25 15:20:35.326712] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:57.825 [2024-07-25 15:20:35.326719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96488 len:8 PRP1 0x0 PRP2 0x0 00:24:57.825 [2024-07-25 15:20:35.326728] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.825 [2024-07-25 15:20:35.326765] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xe552c0 was disconnected and freed. reset controller. 00:24:57.825 [2024-07-25 15:20:35.326774] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:24:57.825 [2024-07-25 15:20:35.326796] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:57.825 [2024-07-25 15:20:35.326804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.825 [2024-07-25 15:20:35.326812] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:57.825 [2024-07-25 15:20:35.326820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.825 [2024-07-25 15:20:35.326828] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:57.825 [2024-07-25 15:20:35.326835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.825 [2024-07-25 15:20:35.326843] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:57.826 [2024-07-25 15:20:35.326850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.826 [2024-07-25 15:20:35.326858] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.826 [2024-07-25 15:20:35.330439] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.826 [2024-07-25 15:20:35.330465] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe58ef0 (9): Bad file descriptor 00:24:57.826 [2024-07-25 15:20:35.489264] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:24:57.826 [2024-07-25 15:20:38.886929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:68872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.826 [2024-07-25 15:20:38.886966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.826 [2024-07-25 15:20:38.886982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:68880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.826 [2024-07-25 15:20:38.886995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.826 [2024-07-25 15:20:38.887005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:68888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.826 [2024-07-25 15:20:38.887013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.826 [2024-07-25 15:20:38.887023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:68896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.826 [2024-07-25 15:20:38.887030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.826 [2024-07-25 15:20:38.887040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:68904 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:24:57.826 [2024-07-25 15:20:38.887047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.826 [2024-07-25 15:20:38.887056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:68912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.826 [2024-07-25 15:20:38.887064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.826 [2024-07-25 15:20:38.887074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:68920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.826 [2024-07-25 15:20:38.887081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.826 [2024-07-25 15:20:38.887090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:68928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.826 [2024-07-25 15:20:38.887097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.826 [2024-07-25 15:20:38.887106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:68936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.826 [2024-07-25 15:20:38.887114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.826 [2024-07-25 15:20:38.887123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:68944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.826 [2024-07-25 15:20:38.887130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.826 [2024-07-25 15:20:38.887139] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:68952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.826 [2024-07-25 15:20:38.887146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.826 [2024-07-25 15:20:38.887155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:68960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.826 [2024-07-25 15:20:38.887163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.826 [2024-07-25 15:20:38.887172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:68968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.826 [2024-07-25 15:20:38.887180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.826 [2024-07-25 15:20:38.887189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:68976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.826 [2024-07-25 15:20:38.887196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.826 [2024-07-25 15:20:38.887212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:68984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.826 [2024-07-25 15:20:38.887220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.826 [2024-07-25 15:20:38.887229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:68992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.826 [2024-07-25 15:20:38.887236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.826 [2024-07-25 15:20:38.887246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:69000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.826 [2024-07-25 15:20:38.887252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.826 [2024-07-25 15:20:38.887262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:69008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.826 [2024-07-25 15:20:38.887270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.826 [2024-07-25 15:20:38.887279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:69016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.826 [2024-07-25 15:20:38.887286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.826 [2024-07-25 15:20:38.887295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:69024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.826 [2024-07-25 15:20:38.887302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.826 [2024-07-25 15:20:38.887312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:69032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.826 [2024-07-25 15:20:38.887319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.826 [2024-07-25 15:20:38.887329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:69040 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:57.826 [2024-07-25 15:20:38.887336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.826 [2024-07-25 15:20:38.887345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:69048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.826 [2024-07-25 15:20:38.887352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.826 [2024-07-25 15:20:38.887362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:69056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.826 [2024-07-25 15:20:38.887369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.826 [2024-07-25 15:20:38.887378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:69064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.826 [2024-07-25 15:20:38.887385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.826 [2024-07-25 15:20:38.887394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:69072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.826 [2024-07-25 15:20:38.887402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.826 [2024-07-25 15:20:38.887411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:69080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.826 [2024-07-25 15:20:38.887420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.826 [2024-07-25 15:20:38.887430] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:69088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.826 [2024-07-25 15:20:38.887437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.826 [2024-07-25 15:20:38.887446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:69096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.826 [2024-07-25 15:20:38.887454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.826 [2024-07-25 15:20:38.887463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:69104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.826 [2024-07-25 15:20:38.887470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.826 [2024-07-25 15:20:38.887479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:69112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.826 [2024-07-25 15:20:38.887487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.827 [2024-07-25 15:20:38.887496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:69120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.827 [2024-07-25 15:20:38.887503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.827 [2024-07-25 15:20:38.887512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:69128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.827 [2024-07-25 15:20:38.887521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.827 [2024-07-25 15:20:38.887530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:69136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.827 [2024-07-25 15:20:38.887537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.827 [2024-07-25 15:20:38.887547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:69144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.827 [2024-07-25 15:20:38.887554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.827 [2024-07-25 15:20:38.887563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:69152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.827 [2024-07-25 15:20:38.887571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.827 [2024-07-25 15:20:38.887580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:69160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.827 [2024-07-25 15:20:38.887587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.827 [2024-07-25 15:20:38.887596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:69168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.827 [2024-07-25 15:20:38.887603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.827 [2024-07-25 15:20:38.887613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:69176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:57.827 [2024-07-25 15:20:38.887620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.827 [2024-07-25 15:20:38.887634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:69184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.827 [2024-07-25 15:20:38.887641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.827 [2024-07-25 15:20:38.887650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:69192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.827 [2024-07-25 15:20:38.887658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.827 [2024-07-25 15:20:38.887667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:69200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.827 [2024-07-25 15:20:38.887675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.827 [2024-07-25 15:20:38.887684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:69208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.827 [2024-07-25 15:20:38.887691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.827 [2024-07-25 15:20:38.887700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:69216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.827 [2024-07-25 15:20:38.887707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.827 [2024-07-25 15:20:38.887717] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:69224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.827 [2024-07-25 15:20:38.887724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.827 [2024-07-25 15:20:38.887733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:69232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.827 [2024-07-25 15:20:38.887740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.827 [2024-07-25 15:20:38.887749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:69240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.827 [2024-07-25 15:20:38.887757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.827 [2024-07-25 15:20:38.887767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:69248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.827 [2024-07-25 15:20:38.887774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.827 [2024-07-25 15:20:38.887783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:69256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.827 [2024-07-25 15:20:38.887790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.827 [2024-07-25 15:20:38.887799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:69264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.827 [2024-07-25 15:20:38.887806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.827 [2024-07-25 15:20:38.887815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:69272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.827 [2024-07-25 15:20:38.887823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.827 [2024-07-25 15:20:38.887832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:69280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.827 [2024-07-25 15:20:38.887838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.827 [2024-07-25 15:20:38.887849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:69288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.827 [2024-07-25 15:20:38.887857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.827 [2024-07-25 15:20:38.887866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:69296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.827 [2024-07-25 15:20:38.887873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.827 [2024-07-25 15:20:38.887882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:69304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.827 [2024-07-25 15:20:38.887889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.827 [2024-07-25 15:20:38.887898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:69312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:57.827 [2024-07-25 15:20:38.887905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.827 [2024-07-25 15:20:38.887915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:69320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.827 [2024-07-25 15:20:38.887922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.827 [2024-07-25 15:20:38.887932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:69328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.827 [2024-07-25 15:20:38.887939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.827 [2024-07-25 15:20:38.887948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:69336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.827 [2024-07-25 15:20:38.887955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.827 [2024-07-25 15:20:38.887964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:69344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.827 [2024-07-25 15:20:38.887972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.827 [2024-07-25 15:20:38.887981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:69352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.827 [2024-07-25 15:20:38.887987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.827 [2024-07-25 15:20:38.887997] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:69360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.827 [2024-07-25 15:20:38.888004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.827 [2024-07-25 15:20:38.888013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:69368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.827 [2024-07-25 15:20:38.888021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.827 [2024-07-25 15:20:38.888030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:69376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.827 [2024-07-25 15:20:38.888037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.827 [2024-07-25 15:20:38.888047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:69384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.827 [2024-07-25 15:20:38.888057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.827 [2024-07-25 15:20:38.888067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:69392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.827 [2024-07-25 15:20:38.888074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.828 [2024-07-25 15:20:38.888083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:69400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.828 [2024-07-25 15:20:38.888090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.828 [2024-07-25 15:20:38.888099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:69408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.828 [2024-07-25 15:20:38.888106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.828 [2024-07-25 15:20:38.888116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:69416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.828 [2024-07-25 15:20:38.888123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.828 [2024-07-25 15:20:38.888132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:69424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.828 [2024-07-25 15:20:38.888139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.828 [2024-07-25 15:20:38.888148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:69432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.828 [2024-07-25 15:20:38.888156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.828 [2024-07-25 15:20:38.888165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:69440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.828 [2024-07-25 15:20:38.888172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.828 [2024-07-25 15:20:38.888181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:69448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:57.828 [2024-07-25 15:20:38.888189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.828 [2024-07-25 15:20:38.888198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:69456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.828 [2024-07-25 15:20:38.888210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.828 [2024-07-25 15:20:38.888219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:69464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.828 [2024-07-25 15:20:38.888226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.828 [2024-07-25 15:20:38.888235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:69472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.828 [2024-07-25 15:20:38.888243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.828 [2024-07-25 15:20:38.888252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:69480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.828 [2024-07-25 15:20:38.888260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.828 [2024-07-25 15:20:38.888270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:69488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.828 [2024-07-25 15:20:38.888277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.828 [2024-07-25 15:20:38.888286] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:69496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.828 [2024-07-25 15:20:38.888294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.828 [2024-07-25 15:20:38.888304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:69504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.828 [2024-07-25 15:20:38.888311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.828 [2024-07-25 15:20:38.888321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:69512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.828 [2024-07-25 15:20:38.888328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.828 [2024-07-25 15:20:38.888337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:69520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.828 [2024-07-25 15:20:38.888344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.828 [2024-07-25 15:20:38.888353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:69528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.828 [2024-07-25 15:20:38.888361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.828 [2024-07-25 15:20:38.888370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:69536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.828 [2024-07-25 15:20:38.888377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.828 [2024-07-25 15:20:38.888386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:69544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.828 [2024-07-25 15:20:38.888393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.828 [2024-07-25 15:20:38.888403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:69664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.828 [2024-07-25 15:20:38.888410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.828 [2024-07-25 15:20:38.888419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:69672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.828 [2024-07-25 15:20:38.888426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.828 [2024-07-25 15:20:38.888435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:69680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.828 [2024-07-25 15:20:38.888442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.828 [2024-07-25 15:20:38.888452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:69688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.828 [2024-07-25 15:20:38.888459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.828 [2024-07-25 15:20:38.888468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:69696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:57.828 [2024-07-25 15:20:38.888476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.828 [2024-07-25 15:20:38.888486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:69704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.828 [2024-07-25 15:20:38.888493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.828 [2024-07-25 15:20:38.888502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:69712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.828 [2024-07-25 15:20:38.888510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.828 [2024-07-25 15:20:38.888519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.828 [2024-07-25 15:20:38.888526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.828 [2024-07-25 15:20:38.888535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:69728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.828 [2024-07-25 15:20:38.888542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.828 [2024-07-25 15:20:38.888551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:69736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.828 [2024-07-25 15:20:38.888559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.828 [2024-07-25 15:20:38.888568] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:69744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.828 [2024-07-25 15:20:38.888575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.828 [2024-07-25 15:20:38.888584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:69752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.828 [2024-07-25 15:20:38.888592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.828 [2024-07-25 15:20:38.888602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:69760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.828 [2024-07-25 15:20:38.888609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.828 [2024-07-25 15:20:38.888619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:69552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.828 [2024-07-25 15:20:38.888625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.828 [2024-07-25 15:20:38.888635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:69560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.828 [2024-07-25 15:20:38.888642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.828 [2024-07-25 15:20:38.888651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:69568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.828 [2024-07-25 15:20:38.888659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.828 [2024-07-25 15:20:38.888668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:69576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.828 [2024-07-25 15:20:38.888676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.828 [2024-07-25 15:20:38.888687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:69584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.828 [2024-07-25 15:20:38.888694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.828 [2024-07-25 15:20:38.888704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:69592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.828 [2024-07-25 15:20:38.888711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.829 [2024-07-25 15:20:38.888720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:69600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.829 [2024-07-25 15:20:38.888727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.829 [2024-07-25 15:20:38.888737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:69608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.829 [2024-07-25 15:20:38.888744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.829 [2024-07-25 15:20:38.888754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:69616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.829 
[2024-07-25 15:20:38.888761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.829 [2024-07-25 15:20:38.888770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:69624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.829 [2024-07-25 15:20:38.888778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.829 [2024-07-25 15:20:38.888787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:69632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.829 [2024-07-25 15:20:38.888795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.829 [2024-07-25 15:20:38.888804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:69640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.829 [2024-07-25 15:20:38.888811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.829 [2024-07-25 15:20:38.888820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:69648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.829 [2024-07-25 15:20:38.888827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.829 [2024-07-25 15:20:38.888837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:69656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.829 [2024-07-25 15:20:38.888844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.829 [2024-07-25 15:20:38.888853] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:69768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.829 [2024-07-25 15:20:38.888860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.829 [2024-07-25 15:20:38.888869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:69776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.829 [2024-07-25 15:20:38.888877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.829 [2024-07-25 15:20:38.888887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:69784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.829 [2024-07-25 15:20:38.888894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.829 [2024-07-25 15:20:38.888904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:69792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.829 [2024-07-25 15:20:38.888911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.829 [2024-07-25 15:20:38.888921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:69800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.829 [2024-07-25 15:20:38.888929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.829 [2024-07-25 15:20:38.888938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:69808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.829 [2024-07-25 15:20:38.888945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.829 [2024-07-25 15:20:38.888954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:69816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.829 [2024-07-25 15:20:38.888961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.829 [2024-07-25 15:20:38.888971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:69824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.829 [2024-07-25 15:20:38.888978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.829 [2024-07-25 15:20:38.888987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:69832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.829 [2024-07-25 15:20:38.888994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.829 [2024-07-25 15:20:38.889003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:69840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.829 [2024-07-25 15:20:38.889010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.829 [2024-07-25 15:20:38.889019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:69848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.829 [2024-07-25 15:20:38.889027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.829 [2024-07-25 15:20:38.889036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:69856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.829 
[2024-07-25 15:20:38.889043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.829 [2024-07-25 15:20:38.889052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:69864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.829 [2024-07-25 15:20:38.889059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.829 [2024-07-25 15:20:38.889068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:69872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.829 [2024-07-25 15:20:38.889075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.829 [2024-07-25 15:20:38.889084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:69880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.829 [2024-07-25 15:20:38.889091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.829 [2024-07-25 15:20:38.889110] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:57.829 [2024-07-25 15:20:38.889118] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:57.829 [2024-07-25 15:20:38.889125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69888 len:8 PRP1 0x0 PRP2 0x0 00:24:57.829 [2024-07-25 15:20:38.889133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.829 [2024-07-25 15:20:38.889170] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xe87c80 was disconnected and freed. reset controller. 
00:24:57.829 [2024-07-25 15:20:38.889180] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:24:57.829 [2024-07-25 15:20:38.889199] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:57.829 [2024-07-25 15:20:38.889219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.829 [2024-07-25 15:20:38.889231] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:57.829 [2024-07-25 15:20:38.889243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.829 [2024-07-25 15:20:38.889252] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:57.829 [2024-07-25 15:20:38.889260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.829 [2024-07-25 15:20:38.889268] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:57.829 [2024-07-25 15:20:38.889275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.829 [2024-07-25 15:20:38.889283] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.829 [2024-07-25 15:20:38.889316] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe58ef0 (9): Bad file descriptor
00:24:57.829 [2024-07-25 15:20:38.892895] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.829 [2024-07-25 15:20:39.016296] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:24:57.829 [2024-07-25 15:20:43.240212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:107440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.829 [2024-07-25 15:20:43.240251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.829 [2024-07-25 15:20:43.240268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:107448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.829 [2024-07-25 15:20:43.240276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.829 [2024-07-25 15:20:43.240287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:107456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.829 [2024-07-25 15:20:43.240294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.829 [2024-07-25 15:20:43.240304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:107464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.829 [2024-07-25 15:20:43.240312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.829 [2024-07-25 15:20:43.240321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:107472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.829 [2024-07-25 15:20:43.240328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.829 [2024-07-25 15:20:43.240343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:107480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.829 [2024-07-25 15:20:43.240351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.829 [2024-07-25 15:20:43.240361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:107488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.829 [2024-07-25 15:20:43.240368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.830 [2024-07-25 15:20:43.240377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:107496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.830 [2024-07-25 15:20:43.240384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.830 [2024-07-25 15:20:43.240394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:107504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.830 [2024-07-25 15:20:43.240401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.830 [2024-07-25 15:20:43.240411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:107512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.830 [2024-07-25 15:20:43.240418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.830 [2024-07-25 15:20:43.240427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:107520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.830 [2024-07-25 15:20:43.240435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.830 [2024-07-25 15:20:43.240445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:107528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.830 [2024-07-25 15:20:43.240452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.830 [2024-07-25 15:20:43.240461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:107536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.830 [2024-07-25 15:20:43.240468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.830 [2024-07-25 15:20:43.240477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:107544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.830 [2024-07-25 15:20:43.240485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.830 [2024-07-25 15:20:43.240494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:107552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.830 [2024-07-25 15:20:43.240501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.830 [2024-07-25 15:20:43.240511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:107560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.830 [2024-07-25 15:20:43.240517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.830 [2024-07-25 15:20:43.240526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:107568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.830 [2024-07-25 15:20:43.240534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.830 [2024-07-25 15:20:43.240544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:107576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.830 [2024-07-25 15:20:43.240553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.830 [2024-07-25 15:20:43.240563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:107584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.830 [2024-07-25 15:20:43.240569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.830 [2024-07-25 15:20:43.240579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:107592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.830 [2024-07-25 15:20:43.240587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.830 [2024-07-25 15:20:43.240596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:107600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.830 [2024-07-25 15:20:43.240603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.830 [2024-07-25 15:20:43.240612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:107608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.830 [2024-07-25 15:20:43.240619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.830 [2024-07-25 15:20:43.240628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:107616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.830 [2024-07-25 15:20:43.240636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.830 [2024-07-25 15:20:43.240646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:107624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.830 [2024-07-25 15:20:43.240653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.830 [2024-07-25 15:20:43.240662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:107632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.830 [2024-07-25 15:20:43.240669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.830 [2024-07-25 15:20:43.240679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:107640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.830 [2024-07-25 15:20:43.240686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.830 [2024-07-25 15:20:43.240696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:107648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.830 [2024-07-25 15:20:43.240703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.830 [2024-07-25 15:20:43.240712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:107656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.830 [2024-07-25 15:20:43.240719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.830 [2024-07-25 15:20:43.240728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:107664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.830 [2024-07-25 15:20:43.240735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.830 [2024-07-25 15:20:43.240745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:107672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.830 [2024-07-25 15:20:43.240752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.830 [2024-07-25 15:20:43.240763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:107680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.830 [2024-07-25 15:20:43.240769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.830 [2024-07-25 15:20:43.240779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:107688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.830 [2024-07-25 15:20:43.240786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.830 [2024-07-25 15:20:43.240796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:107696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.830 [2024-07-25 15:20:43.240804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.830 [2024-07-25 15:20:43.240813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:107704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.830 [2024-07-25 15:20:43.240820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.830 [2024-07-25 15:20:43.240830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:107712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.830 [2024-07-25 15:20:43.240837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.830 [2024-07-25 15:20:43.240847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:107720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.830 [2024-07-25 15:20:43.240853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.830 [2024-07-25 15:20:43.240863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:107728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.830 [2024-07-25 15:20:43.240870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.830 [2024-07-25 15:20:43.240879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:107736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.830 [2024-07-25 15:20:43.240886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.830 [2024-07-25 15:20:43.240896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:107744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.830 [2024-07-25 15:20:43.240903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.830 [2024-07-25 15:20:43.240912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:107752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.830 [2024-07-25 15:20:43.240919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.830 [2024-07-25 15:20:43.240928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:107760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.830 [2024-07-25 15:20:43.240935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.830 [2024-07-25 15:20:43.240945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:107768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.830 [2024-07-25 15:20:43.240952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.830 [2024-07-25 15:20:43.240961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:107776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.830 [2024-07-25 15:20:43.240973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.830 [2024-07-25 15:20:43.240983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:107784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.830 [2024-07-25 15:20:43.240990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.831 [2024-07-25 15:20:43.240999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:107792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.831 [2024-07-25 15:20:43.241007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.831 [2024-07-25 15:20:43.241016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:107800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.831 [2024-07-25 15:20:43.241023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.831 [2024-07-25 15:20:43.241033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:107808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:57.831 [2024-07-25 15:20:43.241040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.831 [2024-07-25 15:20:43.241049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:107816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:57.831 [2024-07-25 15:20:43.241056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.831 [2024-07-25 15:20:43.241065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:107824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:57.831 [2024-07-25 15:20:43.241073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.831 [2024-07-25 15:20:43.241082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:107832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:57.831 [2024-07-25 15:20:43.241090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.831 [2024-07-25 15:20:43.241099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:107840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:57.831 [2024-07-25 15:20:43.241106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.831 [2024-07-25 15:20:43.241115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:107848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:57.831 [2024-07-25 15:20:43.241122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.831 [2024-07-25 15:20:43.241132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:107856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:57.831 [2024-07-25 15:20:43.241139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.831 [2024-07-25 15:20:43.241148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:107864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:57.831 [2024-07-25 15:20:43.241155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.831 [2024-07-25 15:20:43.241164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:107872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:57.831 [2024-07-25 15:20:43.241172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.831 [2024-07-25 15:20:43.241181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:107880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:57.831 [2024-07-25 15:20:43.241190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.831 [2024-07-25 15:20:43.241199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:107888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:57.831 [2024-07-25 15:20:43.241210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.831 [2024-07-25 15:20:43.241219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:107896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:57.831 [2024-07-25 15:20:43.241226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.831 [2024-07-25 15:20:43.241236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:107904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:57.831 [2024-07-25 15:20:43.241243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.831 [2024-07-25 15:20:43.241252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:107912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:57.831 [2024-07-25 15:20:43.241259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.831 [2024-07-25 15:20:43.241268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:107920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:57.831 [2024-07-25 15:20:43.241275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.831 [2024-07-25 15:20:43.241284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:107928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:57.831 [2024-07-25 15:20:43.241292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.831 [2024-07-25 15:20:43.241301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:107936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:57.831 [2024-07-25 15:20:43.241307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.831 [2024-07-25 15:20:43.241316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:107944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:57.831 [2024-07-25 15:20:43.241323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.831 [2024-07-25 15:20:43.241333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:107952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:57.831 [2024-07-25 15:20:43.241341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.831 [2024-07-25 15:20:43.241351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:107960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:57.831 [2024-07-25 15:20:43.241357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.831 [2024-07-25 15:20:43.241367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:107968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:57.831 [2024-07-25 15:20:43.241374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.831 [2024-07-25 15:20:43.241383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:107976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:57.831 [2024-07-25 15:20:43.241391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.831 [2024-07-25 15:20:43.241401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:107984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:57.831 [2024-07-25 15:20:43.241409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.831 [2024-07-25 15:20:43.241418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:107992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:57.831 [2024-07-25 15:20:43.241425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.831 [2024-07-25 15:20:43.241434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:108000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:57.831 [2024-07-25 15:20:43.241442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.831 [2024-07-25 15:20:43.241451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:108008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:57.831 [2024-07-25 15:20:43.241458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.831 [2024-07-25 15:20:43.241467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:108016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:57.831 [2024-07-25 15:20:43.241475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.831 [2024-07-25 15:20:43.241484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:108024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:57.831 [2024-07-25 15:20:43.241491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.831 [2024-07-25 15:20:43.241500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:108032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:57.831 [2024-07-25 15:20:43.241507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.831 [2024-07-25 15:20:43.241516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:108040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:57.831 [2024-07-25 15:20:43.241524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.831 [2024-07-25 15:20:43.241533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:108048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:57.832 [2024-07-25 15:20:43.241542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.832 [2024-07-25 15:20:43.241551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:108056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:57.832 [2024-07-25 15:20:43.241558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.832 [2024-07-25 15:20:43.241567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:108064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:57.832 [2024-07-25 15:20:43.241574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.832 [2024-07-25 15:20:43.241584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:108072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:57.832 [2024-07-25 15:20:43.241591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.832 [2024-07-25 15:20:43.241600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:108080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:57.832 [2024-07-25 15:20:43.241609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.832 [2024-07-25 15:20:43.241618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:108088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:57.832 [2024-07-25 15:20:43.241626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.832 [2024-07-25 15:20:43.241635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:108096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:57.832 [2024-07-25 15:20:43.241643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.832 [2024-07-25 15:20:43.241652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:108104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:57.832 [2024-07-25 15:20:43.241659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.832 [2024-07-25 15:20:43.241669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:108112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:57.832 [2024-07-25 15:20:43.241676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.832 [2024-07-25 15:20:43.241685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:108120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:57.832 [2024-07-25 15:20:43.241692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.832 [2024-07-25 15:20:43.241701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:108128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:57.832 [2024-07-25 15:20:43.241708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.832 [2024-07-25 15:20:43.241718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:108136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:57.832 [2024-07-25 15:20:43.241725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.832 [2024-07-25 15:20:43.241735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:57.832 [2024-07-25 15:20:43.241742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.832 [2024-07-25 15:20:43.241751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:108152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:57.832 [2024-07-25 15:20:43.241758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.832 [2024-07-25 15:20:43.241768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:108160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:57.832 [2024-07-25 15:20:43.241775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.832 [2024-07-25 15:20:43.241784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:108168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:57.832 [2024-07-25 15:20:43.241791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.832 [2024-07-25 15:20:43.241800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:108176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:57.832 [2024-07-25 15:20:43.241807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.832 [2024-07-25 15:20:43.241819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:108184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:57.832 [2024-07-25 15:20:43.241826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.832 [2024-07-25 15:20:43.241835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:108192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:57.832 [2024-07-25 15:20:43.241842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.832 [2024-07-25 15:20:43.241851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:108200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:57.832 [2024-07-25 15:20:43.241859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.832 [2024-07-25 15:20:43.241869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:108208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:57.832 [2024-07-25 15:20:43.241877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.832 [2024-07-25 15:20:43.241886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:108216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:57.832 [2024-07-25 15:20:43.241893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.832 [2024-07-25 15:20:43.241902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:108224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:57.832 [2024-07-25 15:20:43.241910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.832 [2024-07-25 15:20:43.241919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:108232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:57.832 [2024-07-25 15:20:43.241927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.832 [2024-07-25 15:20:43.241936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:108240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:57.832 [2024-07-25 15:20:43.241943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.832 [2024-07-25 15:20:43.241952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:108248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:57.832 [2024-07-25 15:20:43.241959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.832 [2024-07-25 15:20:43.241968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:108256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:57.832 [2024-07-25 15:20:43.241976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.832 [2024-07-25 15:20:43.241985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:108264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:57.832 [2024-07-25 15:20:43.241992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.832 [2024-07-25 15:20:43.242001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:108272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:57.832 [2024-07-25 15:20:43.242008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.832 [2024-07-25 15:20:43.242018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:108280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:57.832 [2024-07-25 15:20:43.242027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.832 [2024-07-25 15:20:43.242037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:108288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:57.832 [2024-07-25 15:20:43.242044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.832 [2024-07-25 15:20:43.242053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:108296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:57.832 [2024-07-25 15:20:43.242060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.832 [2024-07-25 15:20:43.242070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:108304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:57.832 [2024-07-25 15:20:43.242077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.832 [2024-07-25 15:20:43.242086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:108312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:57.832 [2024-07-25 15:20:43.242093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.832 [2024-07-25 15:20:43.242102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:108320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:57.832 [2024-07-25 15:20:43.242110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.832 [2024-07-25 15:20:43.242119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:108328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:57.832 [2024-07-25 15:20:43.242127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.832 [2024-07-25 15:20:43.242136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:108336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:57.832 [2024-07-25 15:20:43.242143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.832 [2024-07-25 15:20:43.242152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:108344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:57.832 [2024-07-25 15:20:43.242160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.832 [2024-07-25 15:20:43.242169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:108352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:57.833 [2024-07-25 15:20:43.242176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.833 [2024-07-25 15:20:43.242185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:108360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:57.833 [2024-07-25 15:20:43.242192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.833 [2024-07-25 15:20:43.242205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:108368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:57.833 [2024-07-25 15:20:43.242212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.833 [2024-07-25 15:20:43.242222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:108376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:57.833 [2024-07-25 15:20:43.242229] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.833 [2024-07-25 15:20:43.242238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:108384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.833 [2024-07-25 15:20:43.242246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.833 [2024-07-25 15:20:43.242256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:108392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.833 [2024-07-25 15:20:43.242263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.833 [2024-07-25 15:20:43.242273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:108400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.833 [2024-07-25 15:20:43.242280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.833 [2024-07-25 15:20:43.242289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:108408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.833 [2024-07-25 15:20:43.242296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.833 [2024-07-25 15:20:43.242305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:108416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.833 [2024-07-25 15:20:43.242312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.833 [2024-07-25 15:20:43.242322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 
lba:108424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.833 [2024-07-25 15:20:43.242329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.833 [2024-07-25 15:20:43.242338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:108432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.833 [2024-07-25 15:20:43.242345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.833 [2024-07-25 15:20:43.242354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:108440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.833 [2024-07-25 15:20:43.242361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.833 [2024-07-25 15:20:43.242371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:108448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.833 [2024-07-25 15:20:43.242378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.833 [2024-07-25 15:20:43.242399] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:57.833 [2024-07-25 15:20:43.242405] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:57.833 [2024-07-25 15:20:43.242412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108456 len:8 PRP1 0x0 PRP2 0x0 00:24:57.833 [2024-07-25 15:20:43.242422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.833 [2024-07-25 15:20:43.242462] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xe878b0 was disconnected 
and freed. reset controller. 00:24:57.833 [2024-07-25 15:20:43.242472] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:24:57.833 [2024-07-25 15:20:43.242494] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:57.833 [2024-07-25 15:20:43.242503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.833 [2024-07-25 15:20:43.242512] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:57.833 [2024-07-25 15:20:43.242521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.833 [2024-07-25 15:20:43.242530] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:57.833 [2024-07-25 15:20:43.242537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.833 [2024-07-25 15:20:43.242545] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:57.833 [2024-07-25 15:20:43.242552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.833 [2024-07-25 15:20:43.242560] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.833 [2024-07-25 15:20:43.246084] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.833 [2024-07-25 15:20:43.246112] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe58ef0 (9): Bad file descriptor 00:24:57.833 [2024-07-25 15:20:43.291266] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:24:57.833 00:24:57.833 Latency(us) 00:24:57.833 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:57.833 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:57.833 Verification LBA range: start 0x0 length 0x4000 00:24:57.833 NVMe0n1 : 15.01 11503.23 44.93 822.72 0.00 10356.86 1064.96 15073.28 00:24:57.833 =================================================================================================================== 00:24:57.833 Total : 11503.23 44.93 822.72 0.00 10356.86 1064.96 15073.28 00:24:57.833 Received shutdown signal, test time was about 15.000000 seconds 00:24:57.833 00:24:57.833 Latency(us) 00:24:57.833 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:57.833 =================================================================================================================== 00:24:57.833 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:57.833 15:20:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:24:57.833 15:20:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:24:57.833 15:20:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:24:57.833 15:20:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=363400 00:24:57.833 15:20:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 363400 /var/tmp/bdevperf.sock 00:24:57.833 15:20:49 nvmf_tcp.nvmf_host.nvmf_failover -- 
host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:24:57.833 15:20:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 363400 ']' 00:24:57.833 15:20:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:57.833 15:20:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:57.833 15:20:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:57.833 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:57.833 15:20:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:57.833 15:20:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:58.405 15:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:58.406 15:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:24:58.406 15:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:58.406 [2024-07-25 15:20:50.487269] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:58.406 15:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:58.666 [2024-07-25 15:20:50.659662] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:24:58.666 15:20:50 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:58.926 NVMe0n1 00:24:58.926 15:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:59.186 00:24:59.186 15:20:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:59.758 00:24:59.758 15:20:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:59.758 15:20:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:24:59.758 15:20:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:00.019 15:20:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:25:03.334 15:20:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:03.334 15:20:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:25:03.334 15:20:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=364450 00:25:03.334 15:20:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # 
wait 364450 00:25:03.334 15:20:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:04.277 0 00:25:04.277 15:20:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:04.277 [2024-07-25 15:20:49.571632] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:25:04.277 [2024-07-25 15:20:49.571689] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid363400 ] 00:25:04.277 EAL: No free 2048 kB hugepages reported on node 1 00:25:04.277 [2024-07-25 15:20:49.630198] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:04.277 [2024-07-25 15:20:49.692690] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:04.277 [2024-07-25 15:20:52.041929] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:25:04.277 [2024-07-25 15:20:52.041977] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:04.277 [2024-07-25 15:20:52.041989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.277 [2024-07-25 15:20:52.041998] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:04.277 [2024-07-25 15:20:52.042005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.277 [2024-07-25 15:20:52.042013] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:04.277 [2024-07-25 15:20:52.042020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.277 [2024-07-25 15:20:52.042028] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:04.277 [2024-07-25 15:20:52.042035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.277 [2024-07-25 15:20:52.042042] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:04.277 [2024-07-25 15:20:52.042076] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbcbef0 (9): Bad file descriptor 00:25:04.277 [2024-07-25 15:20:52.042091] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:04.277 [2024-07-25 15:20:52.053246] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:25:04.277 Running I/O for 1 seconds... 
00:25:04.277 00:25:04.277 Latency(us) 00:25:04.277 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:04.277 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:04.277 Verification LBA range: start 0x0 length 0x4000 00:25:04.277 NVMe0n1 : 1.01 11614.88 45.37 0.00 0.00 10967.24 2635.09 18786.99 00:25:04.277 =================================================================================================================== 00:25:04.277 Total : 11614.88 45.37 0.00 0.00 10967.24 2635.09 18786.99 00:25:04.277 15:20:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:04.277 15:20:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:25:04.536 15:20:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:04.536 15:20:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:04.536 15:20:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:25:04.819 15:20:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:05.121 15:20:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:25:08.425 15:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:08.425 
15:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:25:08.425 15:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 363400 00:25:08.425 15:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 363400 ']' 00:25:08.425 15:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 363400 00:25:08.425 15:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:25:08.425 15:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:08.425 15:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 363400 00:25:08.425 15:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:08.425 15:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:08.425 15:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 363400' 00:25:08.425 killing process with pid 363400 00:25:08.425 15:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 363400 00:25:08.425 15:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 363400 00:25:08.425 15:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:25:08.425 15:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:08.425 15:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:25:08.425 15:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:08.425 15:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- 
host/failover.sh@116 -- # nvmftestfini 00:25:08.425 15:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:08.425 15:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:25:08.426 15:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:08.426 15:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:25:08.426 15:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:08.426 15:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:08.426 rmmod nvme_tcp 00:25:08.687 rmmod nvme_fabrics 00:25:08.687 rmmod nvme_keyring 00:25:08.687 15:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:08.687 15:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:25:08.687 15:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:25:08.687 15:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 359693 ']' 00:25:08.687 15:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 359693 00:25:08.687 15:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 359693 ']' 00:25:08.687 15:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 359693 00:25:08.687 15:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:25:08.687 15:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:08.687 15:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 359693 00:25:08.687 15:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:25:08.687 15:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 
00:25:08.687 15:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 359693' 00:25:08.687 killing process with pid 359693 00:25:08.687 15:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 359693 00:25:08.687 15:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 359693 00:25:08.687 15:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:08.687 15:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:08.687 15:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:08.687 15:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:08.687 15:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:08.687 15:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:08.687 15:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:08.687 15:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:11.238 15:21:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:11.238 00:25:11.238 real 0m39.496s 00:25:11.238 user 2m2.276s 00:25:11.238 sys 0m7.991s 00:25:11.238 15:21:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:11.238 15:21:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:11.238 ************************************ 00:25:11.238 END TEST nvmf_failover 00:25:11.238 ************************************ 00:25:11.238 15:21:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh 
--transport=tcp 00:25:11.238 15:21:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:11.238 15:21:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:11.238 15:21:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.238 ************************************ 00:25:11.238 START TEST nvmf_host_discovery 00:25:11.238 ************************************ 00:25:11.238 15:21:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:25:11.238 * Looking for test storage... 00:25:11.238 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:11.238 15:21:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:11.238 15:21:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:25:11.238 15:21:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:11.238 15:21:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:11.238 15:21:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:11.238 15:21:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:11.238 15:21:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:11.238 15:21:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:11.238 15:21:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:11.238 15:21:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:11.238 15:21:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:11.238 15:21:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:11.238 15:21:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:11.238 15:21:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:11.238 15:21:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:11.238 15:21:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:11.238 15:21:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:11.238 15:21:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:11.238 15:21:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:11.238 15:21:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:11.238 15:21:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:11.238 15:21:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:11.238 15:21:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:11.238 15:21:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:11.238 15:21:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:11.238 15:21:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
paths/export.sh@5 -- # export PATH 00:25:11.238 15:21:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:11.238 15:21:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:25:11.238 15:21:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:11.238 15:21:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:11.238 15:21:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:11.238 15:21:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:11.238 15:21:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:11.238 15:21:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:11.238 15:21:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:11.238 15:21:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:11.238 15:21:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:25:11.238 15:21:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:25:11.238 15:21:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 
-- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:25:11.238 15:21:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:25:11.238 15:21:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:25:11.238 15:21:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:25:11.238 15:21:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:25:11.238 15:21:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:11.238 15:21:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:11.238 15:21:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:11.238 15:21:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:11.238 15:21:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:11.238 15:21:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:11.238 15:21:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:11.238 15:21:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:11.238 15:21:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:11.238 15:21:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:11.238 15:21:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:25:11.238 15:21:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:17.855 15:21:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 
net_dev 00:25:17.855 15:21:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:25:17.855 15:21:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:17.855 15:21:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:17.855 15:21:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:17.855 15:21:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:17.855 15:21:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:17.855 15:21:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:25:17.855 15:21:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:17.855 15:21:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:25:17.855 15:21:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:25:17.855 15:21:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:25:17.855 15:21:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:25:17.855 15:21:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:25:17.855 15:21:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:25:17.855 15:21:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:17.855 15:21:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:17.855 15:21:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:17.855 15:21:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:17.855 15:21:09 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:17.855 15:21:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:17.855 15:21:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:17.855 15:21:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:17.855 15:21:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:17.855 15:21:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:17.855 15:21:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:17.855 15:21:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:17.855 15:21:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:17.855 15:21:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:17.855 15:21:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:17.855 15:21:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:17.855 15:21:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:17.855 15:21:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:17.855 15:21:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:17.855 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:17.855 15:21:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:17.855 15:21:10 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:17.855 15:21:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:17.855 15:21:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:17.855 15:21:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:17.855 15:21:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:17.855 15:21:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:17.855 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:17.855 15:21:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:17.855 15:21:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:17.855 15:21:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:17.855 15:21:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:17.855 15:21:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:17.855 15:21:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:17.855 15:21:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:17.856 15:21:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:17.856 15:21:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:17.856 15:21:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:17.856 15:21:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:17.856 15:21:10 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:17.856 15:21:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:17.856 15:21:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:17.856 15:21:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:17.856 15:21:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:17.856 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:17.856 15:21:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:17.856 15:21:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:17.856 15:21:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:17.856 15:21:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:17.856 15:21:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:17.856 15:21:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:17.856 15:21:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:17.856 15:21:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:17.856 15:21:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:17.856 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:17.856 15:21:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:17.856 15:21:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
00:25:17.856 15:21:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:25:17.856 15:21:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:17.856 15:21:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:17.856 15:21:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:17.856 15:21:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:17.856 15:21:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:17.856 15:21:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:17.856 15:21:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:17.856 15:21:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:17.856 15:21:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:17.856 15:21:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:17.856 15:21:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:17.856 15:21:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:17.856 15:21:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:17.856 15:21:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:17.856 15:21:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:17.856 15:21:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:18.118 15:21:10 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:18.118 15:21:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:18.118 15:21:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:18.118 15:21:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:18.118 15:21:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:18.118 15:21:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:18.380 15:21:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:18.380 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:18.380 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.554 ms 00:25:18.380 00:25:18.380 --- 10.0.0.2 ping statistics --- 00:25:18.380 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:18.380 rtt min/avg/max/mdev = 0.554/0.554/0.554/0.000 ms 00:25:18.380 15:21:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:18.380 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:18.380 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.415 ms 00:25:18.380 00:25:18.380 --- 10.0.0.1 ping statistics --- 00:25:18.380 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:18.380 rtt min/avg/max/mdev = 0.415/0.415/0.415/0.000 ms 00:25:18.380 15:21:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:18.380 15:21:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:25:18.380 15:21:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:18.380 15:21:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:18.380 15:21:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:18.380 15:21:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:18.380 15:21:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:18.380 15:21:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:18.380 15:21:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:18.380 15:21:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:25:18.380 15:21:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:18.380 15:21:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:18.380 15:21:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:18.380 15:21:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=370152 00:25:18.380 15:21:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 370152 00:25:18.380 15:21:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:18.380 15:21:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 370152 ']' 00:25:18.380 15:21:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:18.380 15:21:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:18.380 15:21:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:18.380 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:18.380 15:21:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:18.380 15:21:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:18.380 [2024-07-25 15:21:10.417974] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:25:18.380 [2024-07-25 15:21:10.418036] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:18.380 EAL: No free 2048 kB hugepages reported on node 1 00:25:18.380 [2024-07-25 15:21:10.510506] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:18.642 [2024-07-25 15:21:10.605860] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:18.642 [2024-07-25 15:21:10.605923] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:25:18.642 [2024-07-25 15:21:10.605932] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:18.642 [2024-07-25 15:21:10.605940] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:18.642 [2024-07-25 15:21:10.605947] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:18.642 [2024-07-25 15:21:10.605982] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:19.216 15:21:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:19.216 15:21:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:25:19.216 15:21:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:19.216 15:21:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:19.216 15:21:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:19.216 15:21:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:19.216 15:21:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:19.216 15:21:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.216 15:21:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:19.216 [2024-07-25 15:21:11.261742] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:19.216 15:21:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.216 15:21:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 
00:25:19.216 15:21:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.216 15:21:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:19.216 [2024-07-25 15:21:11.274029] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:25:19.216 15:21:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.216 15:21:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:25:19.216 15:21:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.216 15:21:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:19.216 null0 00:25:19.216 15:21:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.216 15:21:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:25:19.216 15:21:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.216 15:21:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:19.216 null1 00:25:19.216 15:21:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.216 15:21:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:25:19.216 15:21:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.216 15:21:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:19.216 15:21:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.216 15:21:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=370338 00:25:19.216 
15:21:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 370338 /tmp/host.sock 00:25:19.216 15:21:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:25:19.216 15:21:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 370338 ']' 00:25:19.216 15:21:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:25:19.216 15:21:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:19.216 15:21:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:25:19.216 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:25:19.216 15:21:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:19.216 15:21:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:19.216 [2024-07-25 15:21:11.370453] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:25:19.216 [2024-07-25 15:21:11.370519] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid370338 ] 00:25:19.216 EAL: No free 2048 kB hugepages reported on node 1 00:25:19.476 [2024-07-25 15:21:11.434257] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:19.476 [2024-07-25 15:21:11.511259] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:20.048 15:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:20.048 15:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:25:20.048 15:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:20.048 15:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:25:20.048 15:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.048 15:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:20.048 15:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.048 15:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:25:20.048 15:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.048 15:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:20.048 15:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:25:20.048 15:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:25:20.048 15:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:25:20.048 15:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:20.048 15:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:20.048 15:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:20.048 15:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.048 15:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:20.048 15:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:20.048 15:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.048 15:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:25:20.048 15:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:25:20.048 15:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:20.048 15:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:20.048 15:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.048 15:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:20.048 15:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:20.048 15:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:20.048 15:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.310 15:21:12 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:25:20.310 15:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:25:20.310 15:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.310 15:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:20.310 15:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.310 15:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:25:20.310 15:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:20.310 15:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:20.310 15:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.310 15:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:20.310 15:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:20.310 15:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:20.310 15:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.310 15:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:25:20.310 15:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:25:20.310 15:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:20.310 15:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:20.310 15:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:20.310 
15:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.310 15:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:20.310 15:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:20.310 15:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.310 15:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:25:20.310 15:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:25:20.310 15:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.310 15:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:20.310 15:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.310 15:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:25:20.310 15:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:20.310 15:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:20.310 15:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:20.310 15:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.310 15:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:20.310 15:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:20.310 15:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.310 15:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 
00:25:20.310 15:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:25:20.310 15:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:20.310 15:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:20.310 15:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.310 15:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:20.310 15:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:20.310 15:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:20.310 15:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.571 15:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:25:20.571 15:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:20.571 15:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.571 15:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:20.571 [2024-07-25 15:21:12.521133] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:20.571 15:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.571 15:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:25:20.571 15:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:20.571 15:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:20.571 15:21:12 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.571 15:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:20.571 15:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:20.571 15:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:20.571 15:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.571 15:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:25:20.571 15:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:25:20.571 15:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:20.571 15:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:20.571 15:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:20.571 15:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.571 15:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:20.571 15:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:20.571 15:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.571 15:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:25:20.571 15:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:25:20.571 15:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:20.571 15:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:20.571 15:21:12 
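The target-side RPC sequence exercised up to this point (create the subsystem, attach the `null0` namespace, open the TCP listener on port 4420) can be sketched as a dry-run script. The `RPC` indirection is an assumption for illustration: in a live SPDK checkout it would point at `scripts/rpc.py`, while `echo` lets the sketch run without a target.

```shell
#!/usr/bin/env bash
# Dry-run sketch of the target-side RPCs seen in the trace above.
# RPC would be "scripts/rpc.py" against a live target; "echo" here so the
# sketch is runnable stand-alone (assumption: addresses/ports match the log).
RPC="${RPC:-echo}"
NQN="nqn.2016-06.io.spdk:cnode0"

setup_target() {
  $RPC nvmf_create_subsystem "$NQN"
  $RPC nvmf_subsystem_add_ns "$NQN" null0
  $RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
}

setup_target
```

With `RPC=echo` each command is printed instead of issued, which mirrors the order the trace shows them being applied.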
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:20.571 15:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:20.571 15:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:20.571 15:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:20.572 15:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:25:20.572 15:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:20.572 15:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.572 15:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:20.572 15:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:20.572 15:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.572 15:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:20.572 15:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:25:20.572 15:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:25:20.572 15:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:20.572 15:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:25:20.572 15:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.572 15:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:20.572 15:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.572 15:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:20.572 15:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:20.572 15:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:20.572 15:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:20.572 15:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:20.572 15:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:25:20.572 15:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s 
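The `waitforcondition` helper whose internals the xtrace exposes (`local cond`, `local max=10`, an `eval` per iteration, `sleep 1` between tries, `return 0` on success) can be reconstructed minimally as below. The 10-try cap and 1-second sleep match the trace; the non-zero return on timeout is an assumption, since the trace only shows the success path.

```shell
# Minimal reconstruction of the waitforcondition polling helper from the
# trace: re-evaluate a shell condition up to `max` times, 1s apart.
waitforcondition() {
  local cond=$1
  local max=10
  while (( max-- )); do
    if eval "$cond"; then
      return 0          # condition held, as in the "return 0" lines above
    fi
    sleep 1
  done
  return 1              # assumption: timeout reports failure to the caller
}
```

This is why conditions like `[[ "$(get_subsystem_names)" == "nvme0" ]]` are passed as single-quoted strings: `eval` re-runs the command substitution on every retry.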
/tmp/host.sock bdev_nvme_get_controllers 00:25:20.572 15:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:20.572 15:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.572 15:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:20.572 15:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:20.572 15:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:20.572 15:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.572 15:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == \n\v\m\e\0 ]] 00:25:20.572 15:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:25:21.144 [2024-07-25 15:21:13.212568] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:21.144 [2024-07-25 15:21:13.212589] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:21.145 [2024-07-25 15:21:13.212604] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:21.405 [2024-07-25 15:21:13.342030] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:25:21.405 [2024-07-25 15:21:13.442923] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:21.405 [2024-07-25 15:21:13.442946] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:21.666 15:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:21.666 15:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:21.666 15:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:25:21.666 15:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:21.666 15:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:21.666 15:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:21.666 15:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.666 15:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:21.666 15:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:21.666 15:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.666 15:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:21.666 15:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:21.666 15:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:25:21.666 15:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:25:21.666 15:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:21.666 15:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:21.666 15:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:25:21.666 15:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 
00:25:21.666 15:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:21.666 15:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:21.666 15:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:21.666 15:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:21.666 15:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.666 15:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:21.666 15:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.928 15:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:25:21.928 15:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:21.928 15:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:25:21.928 15:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:25:21.928 15:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:21.928 15:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:21.928 15:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:25:21.928 15:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:25:21.928 15:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:21.928 15:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery 
-- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:21.928 15:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:21.928 15:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.928 15:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:21.928 15:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:21.928 15:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.928 15:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0 ]] 00:25:21.928 15:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:21.928 15:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:25:21.928 15:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:25:21.928 15:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:21.928 15:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:21.928 15:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:21.928 15:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:21.928 15:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:21.928 15:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:25:21.928 15:21:13 
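The `get_subsystem_paths` check above normalizes each controller path's `trsvcid` into a stable, space-separated string via `sort -n | xargs` before comparing it against `$NVMF_PORT`. A sketch of just that normalization step, with canned values standing in for the live `jq -r '.[].ctrlrs[].trid.trsvcid'` output (an assumption, to keep the sketch self-contained):

```shell
# Normalization used by the path comparison: one trsvcid per line in,
# numerically sorted single line out. Canned input replaces live RPC output.
normalize_paths() {
  sort -n | xargs
}

paths="$(printf '4421\n4420\n' | normalize_paths)"
echo "$paths"   # → 4420 4421
```

The numeric sort is what makes the later two-port comparison (`4420 4421`) order-independent regardless of which listener the discovery service reported first.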
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:21.928 15:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:21.928 15:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.928 15:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:21.928 15:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.928 15:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:25:21.928 15:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:25:21.928 15:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:25:21.928 15:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:21.928 15:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:25:21.928 15:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.928 15:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:21.928 15:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.928 15:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:21.928 15:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:21.928 15:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:21.928 15:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@916 -- # (( max-- )) 00:25:21.928 15:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:21.928 15:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:25:21.928 15:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:21.928 15:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:21.928 15:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:21.928 15:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.928 15:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:21.928 15:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:22.190 15:21:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.190 15:21:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:22.190 15:21:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:22.190 15:21:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:25:22.190 15:21:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:25:22.190 15:21:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:22.190 15:21:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:22.190 15:21:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@915 -- # local max=10 00:25:22.190 15:21:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:22.190 15:21:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:22.190 15:21:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:25:22.190 15:21:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:25:22.190 15:21:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:22.190 15:21:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.190 15:21:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:22.190 15:21:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.190 15:21:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:25:22.190 15:21:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:22.190 15:21:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:25:22.190 15:21:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:22.190 15:21:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:25:22.190 15:21:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.190 15:21:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:22.190 [2024-07-25 15:21:14.305929] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4421 *** 00:25:22.190 [2024-07-25 15:21:14.306952] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:22.190 [2024-07-25 15:21:14.306979] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:22.190 15:21:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.190 15:21:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:22.190 15:21:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:22.190 15:21:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:22.190 15:21:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:22.190 15:21:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:22.190 15:21:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:25:22.190 15:21:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:22.190 15:21:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:22.190 15:21:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.190 15:21:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:22.190 15:21:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:22.190 15:21:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:22.190 15:21:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:25:22.190 15:21:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:22.190 15:21:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:22.190 15:21:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:22.190 15:21:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:22.190 15:21:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:22.190 15:21:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:22.190 15:21:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:22.190 15:21:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:25:22.190 15:21:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:22.190 15:21:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:22.190 15:21:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.190 15:21:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:22.190 15:21:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:22.190 15:21:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:22.451 15:21:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.451 15:21:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:22.451 15:21:14 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:22.451 15:21:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:25:22.451 15:21:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:25:22.451 15:21:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:22.452 15:21:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:22.452 15:21:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:25:22.452 15:21:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:25:22.452 15:21:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:22.452 15:21:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:22.452 15:21:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:22.452 15:21:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.452 15:21:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:22.452 15:21:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:22.452 [2024-07-25 15:21:14.437817] bdev_nvme.c:6935:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:25:22.452 15:21:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.452 15:21:14 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:25:22.452 15:21:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:25:22.713 [2024-07-25 15:21:14.704388] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:22.713 [2024-07-25 15:21:14.704410] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:22.713 [2024-07-25 15:21:14.704416] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:23.286 15:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:23.286 15:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:25:23.286 15:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:25:23.286 15:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:23.286 15:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:23.286 15:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:23.286 15:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.286 15:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:23.286 15:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:23.549 15:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:23.549 15:21:15 
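The notification bookkeeping visible in the trace (`notification_count=1`, then `notify_id=1`, later `notify_id=2`) follows a cursor pattern: ask `notify_get_notifications -i $notify_id` for events after the last seen id, count them with `jq '. | length'`, and advance the cursor by that count. A sketch with a stand-in for the RPC-plus-jq step (the canned counts are an assumption; the live helper derives them from the RPC response):

```shell
# Cursor-style notification counting, as in get_notification_count above.
# The argument stands in for: rpc.py notify_get_notifications -i $notify_id \
#   | jq '. | length'
notify_id=0
get_notification_count() {
  notification_count=$1
  notify_id=$((notify_id + notification_count))
}

get_notification_count 1   # first namespace add -> one event
get_notification_count 1   # second namespace add -> one more
echo "$notify_id"          # → 2
```

Keeping `notify_id` monotonic is what lets `is_notification_count_eq 0` later pass even though earlier events still exist: only events newer than the cursor are counted.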
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:25:23.549 15:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:23.549 15:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:25:23.549 15:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:23.549 15:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:23.549 15:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:23.549 15:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:23.549 15:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:23.549 15:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:23.549 15:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:25:23.549 15:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:23.549 15:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:23.549 15:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.549 15:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:23.549 15:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:23.549 15:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:23.549 15:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:23.549 15:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:25:23.549 15:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:23.549 15:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:23.549 15:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.549 15:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:23.549 [2024-07-25 15:21:15.577703] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:23.549 [2024-07-25 15:21:15.577725] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:23.549 [2024-07-25 15:21:15.578929] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:23.549 [2024-07-25 15:21:15.578948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.549 [2024-07-25 15:21:15.578957] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:25:23.549 [2024-07-25 15:21:15.578964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.549 [2024-07-25 15:21:15.578972] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:23.549 [2024-07-25 15:21:15.578980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.550 [2024-07-25 15:21:15.578987] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:23.550 [2024-07-25 15:21:15.578994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.550 [2024-07-25 15:21:15.579002] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd89d0 is same with the state(5) to be set 00:25:23.550 15:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:23.550 15:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:23.550 15:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:23.550 15:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:23.550 15:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:23.550 15:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:23.550 15:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:25:23.550 15:21:15 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:23.550 15:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:23.550 15:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.550 [2024-07-25 15:21:15.588941] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd89d0 (9): Bad file descriptor 00:25:23.550 15:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:23.550 15:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:23.550 15:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:23.550 [2024-07-25 15:21:15.598980] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:23.550 [2024-07-25 15:21:15.599562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:23.550 [2024-07-25 15:21:15.599600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd89d0 with addr=10.0.0.2, port=4420 00:25:23.550 [2024-07-25 15:21:15.599611] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd89d0 is same with the state(5) to be set 00:25:23.550 [2024-07-25 15:21:15.599630] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd89d0 (9): Bad file descriptor 00:25:23.550 [2024-07-25 15:21:15.599653] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:23.550 [2024-07-25 15:21:15.599661] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:23.550 [2024-07-25 15:21:15.599669] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:25:23.550 [2024-07-25 15:21:15.599685] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:23.550 15:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:23.550 [2024-07-25 15:21:15.609039] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:23.550 [2024-07-25 15:21:15.609629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:23.550 [2024-07-25 15:21:15.609667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd89d0 with addr=10.0.0.2, port=4420 00:25:23.550 [2024-07-25 15:21:15.609678] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd89d0 is same with the state(5) to be set 00:25:23.550 [2024-07-25 15:21:15.609697] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd89d0 (9): Bad file descriptor 00:25:23.550 [2024-07-25 15:21:15.609709] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:23.550 [2024-07-25 15:21:15.609716] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:23.550 [2024-07-25 15:21:15.609723] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:23.550 [2024-07-25 15:21:15.609738] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:23.550 [2024-07-25 15:21:15.619096] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:23.550 [2024-07-25 15:21:15.619626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:23.550 [2024-07-25 15:21:15.619669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd89d0 with addr=10.0.0.2, port=4420 00:25:23.550 [2024-07-25 15:21:15.619681] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd89d0 is same with the state(5) to be set 00:25:23.550 [2024-07-25 15:21:15.619701] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd89d0 (9): Bad file descriptor 00:25:23.550 [2024-07-25 15:21:15.619729] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:23.550 [2024-07-25 15:21:15.619737] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:23.550 [2024-07-25 15:21:15.619745] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:23.550 [2024-07-25 15:21:15.619760] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:23.550 [2024-07-25 15:21:15.629153] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:23.550 [2024-07-25 15:21:15.630628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:23.550 [2024-07-25 15:21:15.630649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd89d0 with addr=10.0.0.2, port=4420 00:25:23.550 [2024-07-25 15:21:15.630657] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd89d0 is same with the state(5) to be set 00:25:23.550 [2024-07-25 15:21:15.630671] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd89d0 (9): Bad file descriptor 00:25:23.550 [2024-07-25 15:21:15.630689] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:23.550 [2024-07-25 15:21:15.630696] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:23.550 [2024-07-25 15:21:15.630704] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:23.550 [2024-07-25 15:21:15.630716] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:23.550 15:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:23.550 15:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:23.550 15:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:23.550 15:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:23.550 15:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:23.550 15:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:23.550 15:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:23.550 15:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:25:23.550 [2024-07-25 15:21:15.639216] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:23.550 [2024-07-25 15:21:15.639702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:23.550 [2024-07-25 15:21:15.639715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd89d0 with addr=10.0.0.2, port=4420 00:25:23.550 [2024-07-25 15:21:15.639722] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd89d0 is same with the state(5) to be set 00:25:23.550 [2024-07-25 15:21:15.639733] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd89d0 (9): Bad file descriptor 00:25:23.550 [2024-07-25 15:21:15.639751] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:23.550 [2024-07-25 15:21:15.639758] 
nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:23.550 [2024-07-25 15:21:15.639768] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:23.550 [2024-07-25 15:21:15.639785] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:23.550 15:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:23.550 15:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:23.550 15:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.550 15:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:23.550 15:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:23.550 15:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:23.550 [2024-07-25 15:21:15.649272] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:23.550 [2024-07-25 15:21:15.649783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:23.550 [2024-07-25 15:21:15.649796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd89d0 with addr=10.0.0.2, port=4420 00:25:23.550 [2024-07-25 15:21:15.649804] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd89d0 is same with the state(5) to be set 00:25:23.550 [2024-07-25 15:21:15.649815] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd89d0 (9): Bad file descriptor 00:25:23.550 [2024-07-25 15:21:15.649831] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:23.550 [2024-07-25 15:21:15.649838] 
nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:23.550 [2024-07-25 15:21:15.649845] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:23.550 [2024-07-25 15:21:15.649856] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:23.550 [2024-07-25 15:21:15.659327] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:23.550 [2024-07-25 15:21:15.659785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:23.550 [2024-07-25 15:21:15.659798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd89d0 with addr=10.0.0.2, port=4420 00:25:23.550 [2024-07-25 15:21:15.659805] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd89d0 is same with the state(5) to be set 00:25:23.550 [2024-07-25 15:21:15.659816] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd89d0 (9): Bad file descriptor 00:25:23.551 [2024-07-25 15:21:15.659831] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:23.551 [2024-07-25 15:21:15.659838] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:23.551 [2024-07-25 15:21:15.659845] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:23.551 [2024-07-25 15:21:15.659855] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:23.551 [2024-07-25 15:21:15.665896] bdev_nvme.c:6798:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:25:23.551 [2024-07-25 15:21:15.665913] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:23.551 15:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:23.551 15:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:23.551 15:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:23.551 15:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:25:23.551 15:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:25:23.551 15:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:23.551 15:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:23.551 15:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:25:23.551 15:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:25:23.551 15:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:23.551 15:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.551 15:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:23.551 15:21:15 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:23.551 15:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:23.551 15:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:23.551 15:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:23.813 15:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4421 == \4\4\2\1 ]] 00:25:23.813 15:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:23.813 15:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:25:23.813 15:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:23.813 15:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:23.813 15:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:23.813 15:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:23.813 15:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:23.813 15:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:23.813 15:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:25:23.813 15:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:23.813 15:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:23.813 15:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.813 15:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:23.813 15:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:23.813 15:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:23.813 15:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:23.813 15:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:25:23.813 15:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:23.813 15:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:25:23.813 15:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.813 15:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:23.813 15:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:23.813 15:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:25:23.813 15:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:25:23.813 15:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:23.813 15:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:23.813 15:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:25:23.813 15:21:15 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:25:23.813 15:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:23.813 15:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:23.813 15:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.813 15:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:23.813 15:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:23.813 15:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:23.813 15:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:23.813 15:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:25:23.813 15:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:23.813 15:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:25:23.813 15:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:25:23.813 15:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:23.813 15:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:23.813 15:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:25:23.813 15:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:25:23.813 15:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:23.813 
15:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:23.813 15:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.813 15:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:23.813 15:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:23.813 15:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:23.813 15:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:23.813 15:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:25:23.813 15:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:23.813 15:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:25:23.813 15:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:25:23.813 15:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:23.813 15:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:23.813 15:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:23.813 15:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:23.813 15:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:23.813 15:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:25:23.813 15:21:15 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:23.813 15:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:23.813 15:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.813 15:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:23.813 15:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:23.813 15:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:25:23.813 15:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:25:23.813 15:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:25:23.813 15:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:23.813 15:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:23.814 15:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.814 15:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:25.200 [2024-07-25 15:21:17.031479] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:25.200 [2024-07-25 15:21:17.031505] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:25.200 [2024-07-25 15:21:17.031519] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:25.200 [2024-07-25 15:21:17.119790] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] 
NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:25:25.200 [2024-07-25 15:21:17.226690] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:25.200 [2024-07-25 15:21:17.226719] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:25.200 15:21:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:25.200 15:21:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:25.200 15:21:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:25:25.200 15:21:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:25.200 15:21:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:25.200 15:21:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:25.200 15:21:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:25.200 15:21:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:25.200 15:21:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:25.200 15:21:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:25.200 15:21:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set 
+x 00:25:25.200 request: 00:25:25.200 { 00:25:25.200 "name": "nvme", 00:25:25.200 "trtype": "tcp", 00:25:25.200 "traddr": "10.0.0.2", 00:25:25.200 "adrfam": "ipv4", 00:25:25.200 "trsvcid": "8009", 00:25:25.200 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:25.200 "wait_for_attach": true, 00:25:25.200 "method": "bdev_nvme_start_discovery", 00:25:25.200 "req_id": 1 00:25:25.200 } 00:25:25.200 Got JSON-RPC error response 00:25:25.200 response: 00:25:25.200 { 00:25:25.200 "code": -17, 00:25:25.200 "message": "File exists" 00:25:25.200 } 00:25:25.200 15:21:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:25.200 15:21:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:25:25.200 15:21:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:25.200 15:21:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:25.200 15:21:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:25.200 15:21:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:25:25.200 15:21:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:25.200 15:21:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:25.200 15:21:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:25.200 15:21:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:25.200 15:21:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:25.200 15:21:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:25.200 15:21:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:25.200 15:21:17 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:25:25.200 15:21:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:25:25.200 15:21:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:25.200 15:21:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:25.200 15:21:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:25.200 15:21:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:25.200 15:21:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:25.200 15:21:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:25.200 15:21:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:25.200 15:21:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:25.200 15:21:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:25.200 15:21:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:25:25.200 15:21:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:25.200 15:21:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:25.200 15:21:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:25.200 15:21:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:25.200 15:21:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:25.200 15:21:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:25.200 15:21:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:25.200 15:21:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:25.200 request: 00:25:25.200 { 00:25:25.200 "name": "nvme_second", 00:25:25.200 "trtype": "tcp", 00:25:25.200 "traddr": "10.0.0.2", 00:25:25.200 "adrfam": "ipv4", 00:25:25.200 "trsvcid": "8009", 00:25:25.200 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:25.200 "wait_for_attach": true, 00:25:25.200 "method": "bdev_nvme_start_discovery", 00:25:25.200 "req_id": 1 00:25:25.200 } 00:25:25.200 Got JSON-RPC error response 00:25:25.200 response: 00:25:25.200 { 00:25:25.200 "code": -17, 00:25:25.200 "message": "File exists" 00:25:25.200 } 00:25:25.200 15:21:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:25.200 15:21:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:25:25.200 15:21:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:25.200 15:21:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:25.200 15:21:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:25.200 15:21:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:25:25.200 15:21:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:25.200 
15:21:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:25.200 15:21:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:25.200 15:21:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:25.200 15:21:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:25.200 15:21:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:25.200 15:21:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:25.461 15:21:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:25:25.461 15:21:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:25:25.461 15:21:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:25.461 15:21:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:25.461 15:21:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:25.461 15:21:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:25.461 15:21:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:25.461 15:21:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:25.461 15:21:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:25.461 15:21:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:25.461 15:21:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:25.461 15:21:17 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:25:25.461 15:21:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:25.461 15:21:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:25.461 15:21:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:25.461 15:21:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:25.461 15:21:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:25.461 15:21:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:25.461 15:21:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:25.461 15:21:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:26.439 [2024-07-25 15:21:18.494758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:26.439 [2024-07-25 15:21:18.494787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5530 with addr=10.0.0.2, port=8010 00:25:26.439 [2024-07-25 15:21:18.494800] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:26.439 [2024-07-25 15:21:18.494808] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:26.439 [2024-07-25 15:21:18.494816] bdev_nvme.c:7073:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:25:27.382 [2024-07-25 15:21:19.497161] 
posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:27.382 [2024-07-25 15:21:19.497186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5530 with addr=10.0.0.2, port=8010 00:25:27.382 [2024-07-25 15:21:19.497197] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:27.382 [2024-07-25 15:21:19.497207] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:27.382 [2024-07-25 15:21:19.497214] bdev_nvme.c:7073:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:25:28.324 [2024-07-25 15:21:20.499034] bdev_nvme.c:7054:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:25:28.324 request: 00:25:28.324 { 00:25:28.324 "name": "nvme_second", 00:25:28.324 "trtype": "tcp", 00:25:28.324 "traddr": "10.0.0.2", 00:25:28.324 "adrfam": "ipv4", 00:25:28.324 "trsvcid": "8010", 00:25:28.324 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:28.324 "wait_for_attach": false, 00:25:28.324 "attach_timeout_ms": 3000, 00:25:28.324 "method": "bdev_nvme_start_discovery", 00:25:28.324 "req_id": 1 00:25:28.324 } 00:25:28.324 Got JSON-RPC error response 00:25:28.324 response: 00:25:28.324 { 00:25:28.324 "code": -110, 00:25:28.324 "message": "Connection timed out" 00:25:28.324 } 00:25:28.324 15:21:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:28.324 15:21:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:25:28.324 15:21:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:28.324 15:21:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:28.324 15:21:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:28.324 15:21:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 
-- # get_discovery_ctrlrs 00:25:28.324 15:21:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:28.324 15:21:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:28.324 15:21:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:28.324 15:21:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:28.324 15:21:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:28.324 15:21:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:28.586 15:21:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:28.586 15:21:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:25:28.586 15:21:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:25:28.586 15:21:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 370338 00:25:28.586 15:21:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:25:28.586 15:21:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:28.586 15:21:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:25:28.586 15:21:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:28.586 15:21:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:25:28.586 15:21:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:28.586 15:21:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:28.586 rmmod nvme_tcp 00:25:28.586 rmmod nvme_fabrics 00:25:28.586 rmmod nvme_keyring 00:25:28.586 15:21:20 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:28.586 15:21:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:25:28.586 15:21:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:25:28.586 15:21:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 370152 ']' 00:25:28.586 15:21:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 370152 00:25:28.586 15:21:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@950 -- # '[' -z 370152 ']' 00:25:28.586 15:21:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # kill -0 370152 00:25:28.586 15:21:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # uname 00:25:28.586 15:21:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:28.586 15:21:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 370152 00:25:28.586 15:21:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:25:28.586 15:21:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:25:28.586 15:21:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 370152' 00:25:28.586 killing process with pid 370152 00:25:28.586 15:21:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@969 -- # kill 370152 00:25:28.586 15:21:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@974 -- # wait 370152 00:25:28.848 15:21:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:28.848 15:21:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:28.848 15:21:20 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:28.848 15:21:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:28.848 15:21:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:28.848 15:21:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:28.848 15:21:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:28.848 15:21:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:30.765 15:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:30.765 00:25:30.765 real 0m19.871s 00:25:30.765 user 0m23.354s 00:25:30.765 sys 0m6.896s 00:25:30.765 15:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:30.765 15:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:30.765 ************************************ 00:25:30.765 END TEST nvmf_host_discovery 00:25:30.765 ************************************ 00:25:30.765 15:21:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:25:30.765 15:21:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:30.765 15:21:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:30.765 15:21:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.026 ************************************ 00:25:31.026 START TEST nvmf_host_multipath_status 00:25:31.026 ************************************ 00:25:31.026 15:21:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:25:31.026 * Looking for test storage... 00:25:31.026 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:31.026 15:21:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:31.026 15:21:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:25:31.026 15:21:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:31.026 15:21:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:31.027 15:21:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:31.027 15:21:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:31.027 15:21:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:31.027 15:21:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:31.027 15:21:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:31.027 15:21:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:31.027 15:21:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:31.027 15:21:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:31.027 15:21:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:31.027 15:21:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 
-- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:31.027 15:21:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:31.027 15:21:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:31.027 15:21:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:31.027 15:21:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:31.027 15:21:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:31.027 15:21:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:31.027 15:21:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:31.027 15:21:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:31.027 15:21:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:31.027 15:21:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:31.027 15:21:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:31.027 15:21:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:25:31.027 15:21:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:31.027 15:21:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:25:31.027 15:21:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:31.027 15:21:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:31.027 15:21:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:31.027 15:21:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:31.027 15:21:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:31.027 15:21:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:31.027 15:21:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:31.027 15:21:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:31.027 15:21:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:25:31.027 15:21:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:25:31.027 15:21:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:31.027 15:21:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:25:31.027 15:21:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:31.027 15:21:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:25:31.027 15:21:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:25:31.027 15:21:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:31.027 15:21:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:31.027 15:21:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:31.027 15:21:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:31.027 15:21:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:31.027 15:21:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:31.027 15:21:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:31.027 15:21:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:31.027 15:21:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:31.027 15:21:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:31.027 15:21:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:25:31.027 15:21:23 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:37.618 15:21:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:37.618 15:21:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:25:37.618 15:21:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:37.618 15:21:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:37.618 15:21:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:37.618 15:21:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:37.618 15:21:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:37.618 15:21:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:25:37.618 15:21:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:37.618 15:21:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:25:37.618 15:21:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:25:37.618 15:21:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:25:37.618 15:21:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:25:37.618 15:21:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:25:37.618 15:21:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:25:37.618 15:21:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:37.618 15:21:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:37.619 15:21:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:37.619 15:21:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:37.619 15:21:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:37.619 15:21:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:37.619 15:21:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:37.619 15:21:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:37.619 15:21:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:37.619 15:21:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:37.619 15:21:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:37.619 15:21:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:37.619 15:21:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:37.619 15:21:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:37.619 15:21:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:37.619 15:21:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:37.619 15:21:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:37.619 
15:21:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:37.619 15:21:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:37.619 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:37.619 15:21:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:37.619 15:21:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:37.619 15:21:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:37.619 15:21:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:37.619 15:21:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:37.619 15:21:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:37.619 15:21:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:37.619 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:37.619 15:21:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:37.619 15:21:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:37.619 15:21:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:37.619 15:21:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:37.619 15:21:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:37.619 15:21:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:37.619 15:21:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # 
[[ e810 == e810 ]] 00:25:37.619 15:21:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:37.619 15:21:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:37.619 15:21:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:37.619 15:21:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:37.619 15:21:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:37.619 15:21:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:37.619 15:21:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:37.619 15:21:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:37.619 15:21:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:37.619 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:37.619 15:21:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:37.619 15:21:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:37.619 15:21:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:37.619 15:21:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:37.619 15:21:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:37.619 15:21:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:37.619 15:21:29 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:37.619 15:21:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:37.619 15:21:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:37.619 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:37.619 15:21:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:37.619 15:21:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:37.619 15:21:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:25:37.619 15:21:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:37.619 15:21:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:37.619 15:21:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:37.619 15:21:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:37.619 15:21:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:37.619 15:21:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:37.619 15:21:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:37.619 15:21:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:37.619 15:21:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:37.619 15:21:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:37.619 15:21:29 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:37.619 15:21:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:37.619 15:21:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:37.619 15:21:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:37.619 15:21:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:37.619 15:21:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:37.882 15:21:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:37.882 15:21:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:37.882 15:21:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:37.882 15:21:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:37.882 15:21:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:37.882 15:21:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:37.882 15:21:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:37.882 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:37.882 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.679 ms 00:25:37.882 00:25:37.882 --- 10.0.0.2 ping statistics --- 00:25:37.882 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:37.882 rtt min/avg/max/mdev = 0.679/0.679/0.679/0.000 ms 00:25:37.882 15:21:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:37.882 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:37.882 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.272 ms 00:25:37.882 00:25:37.882 --- 10.0.0.1 ping statistics --- 00:25:37.882 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:37.882 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:25:37.882 15:21:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:37.882 15:21:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:25:37.882 15:21:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:37.882 15:21:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:37.882 15:21:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:37.882 15:21:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:37.882 15:21:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:37.882 15:21:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:37.882 15:21:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:37.882 15:21:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:25:37.882 15:21:30 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:37.882 15:21:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:37.882 15:21:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:38.143 15:21:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=376525 00:25:38.143 15:21:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 376525 00:25:38.143 15:21:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:25:38.143 15:21:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 376525 ']' 00:25:38.143 15:21:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:38.143 15:21:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:38.143 15:21:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:38.143 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:38.143 15:21:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:38.143 15:21:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:38.143 [2024-07-25 15:21:30.130228] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:25:38.144 [2024-07-25 15:21:30.130298] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:38.144 EAL: No free 2048 kB hugepages reported on node 1 00:25:38.144 [2024-07-25 15:21:30.201652] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:38.144 [2024-07-25 15:21:30.275651] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:38.144 [2024-07-25 15:21:30.275691] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:38.144 [2024-07-25 15:21:30.275699] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:38.144 [2024-07-25 15:21:30.275706] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:38.144 [2024-07-25 15:21:30.275712] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:38.144 [2024-07-25 15:21:30.279220] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:38.144 [2024-07-25 15:21:30.279239] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:39.087 15:21:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:39.087 15:21:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:25:39.087 15:21:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:39.087 15:21:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:39.087 15:21:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:39.087 15:21:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:39.087 15:21:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=376525 00:25:39.087 15:21:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:39.087 [2024-07-25 15:21:31.099382] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:39.087 15:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:25:39.087 Malloc0 00:25:39.348 15:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:25:39.348 15:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:39.609 15:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:39.609 [2024-07-25 15:21:31.721632] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:39.609 15:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:39.869 [2024-07-25 15:21:31.877987] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:39.869 15:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=376889 00:25:39.869 15:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:39.869 15:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:25:39.869 15:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 376889 /var/tmp/bdevperf.sock 00:25:39.869 15:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 376889 ']' 00:25:39.869 15:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:39.869 15:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:39.869 15:21:31 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:39.869 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:39.870 15:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:39.870 15:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:40.813 15:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:40.813 15:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:25:40.813 15:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:25:40.813 15:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:25:41.074 Nvme0n1 00:25:41.074 15:21:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:25:41.646 Nvme0n1 00:25:41.646 15:21:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:25:41.646 15:21:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 
00:25:43.562 15:21:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:25:43.562 15:21:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:25:43.823 15:21:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:43.823 15:21:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:25:45.208 15:21:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:25:45.209 15:21:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:45.209 15:21:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:45.209 15:21:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:45.209 15:21:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:45.209 15:21:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:45.209 15:21:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:45.209 15:21:37 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:45.209 15:21:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:45.209 15:21:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:45.209 15:21:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:45.209 15:21:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:45.470 15:21:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:45.470 15:21:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:45.470 15:21:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:45.470 15:21:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:45.731 15:21:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:45.731 15:21:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:45.731 15:21:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:45.731 
15:21:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:45.731 15:21:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:45.731 15:21:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:45.731 15:21:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:45.731 15:21:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:45.993 15:21:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:45.993 15:21:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:25:45.993 15:21:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:45.993 15:21:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:46.253 15:21:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:25:47.200 15:21:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:25:47.200 15:21:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:47.200 15:21:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:47.200 15:21:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:47.462 15:21:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:47.462 15:21:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:47.462 15:21:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:47.462 15:21:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:47.723 15:21:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:47.723 15:21:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:47.723 15:21:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:47.723 15:21:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:47.723 15:21:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:47.723 15:21:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:47.723 15:21:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:47.723 15:21:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:47.984 15:21:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:47.984 15:21:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:47.984 15:21:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:47.984 15:21:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:48.245 15:21:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:48.245 15:21:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:48.245 15:21:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:48.245 15:21:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:48.245 15:21:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:48.245 15:21:40 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:25:48.245 15:21:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:48.507 15:21:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:25:48.768 15:21:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:25:49.711 15:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:25:49.711 15:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:49.711 15:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:49.711 15:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:49.711 15:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:49.711 15:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:49.711 15:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:49.711 15:21:41 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:49.972 15:21:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:49.972 15:21:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:49.972 15:21:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:49.972 15:21:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:50.233 15:21:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:50.233 15:21:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:50.233 15:21:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:50.233 15:21:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:50.233 15:21:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:50.233 15:21:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:50.234 15:21:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:50.234 
15:21:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:50.495 15:21:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:50.495 15:21:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:50.495 15:21:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:50.495 15:21:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:50.756 15:21:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:50.756 15:21:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:25:50.756 15:21:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:50.756 15:21:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:25:51.017 15:21:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:25:51.959 15:21:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:25:51.959 15:21:44 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:51.959 15:21:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:51.959 15:21:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:52.220 15:21:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:52.220 15:21:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:52.220 15:21:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:52.220 15:21:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:52.481 15:21:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:52.481 15:21:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:52.481 15:21:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:52.481 15:21:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:52.481 15:21:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:52.481 15:21:44 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:52.481 15:21:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:52.481 15:21:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:52.742 15:21:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:52.742 15:21:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:52.742 15:21:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:52.742 15:21:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:53.003 15:21:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:53.003 15:21:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:53.003 15:21:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:53.004 15:21:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:53.004 15:21:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:53.004 
15:21:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:25:53.004 15:21:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:25:53.265 15:21:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:25:53.265 15:21:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:25:54.650 15:21:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:25:54.650 15:21:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:54.650 15:21:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:54.650 15:21:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:54.650 15:21:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:54.650 15:21:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:54.650 15:21:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:54.650 15:21:46 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:54.650 15:21:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:54.650 15:21:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:54.650 15:21:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:54.650 15:21:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:54.911 15:21:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:54.911 15:21:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:54.911 15:21:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:54.911 15:21:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:54.911 15:21:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:54.911 15:21:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:25:54.911 15:21:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:54.911 
15:21:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:55.172 15:21:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:55.172 15:21:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:55.172 15:21:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:55.172 15:21:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:55.433 15:21:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:55.433 15:21:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:25:55.433 15:21:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:25:55.433 15:21:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:55.694 15:21:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:25:56.635 15:21:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:25:56.635 15:21:48 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:56.635 15:21:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:56.635 15:21:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:56.896 15:21:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:56.896 15:21:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:56.896 15:21:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:56.896 15:21:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:56.896 15:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:56.896 15:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:56.896 15:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:57.157 15:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:57.157 15:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:57.157 15:21:49 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:57.157 15:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:57.157 15:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:57.418 15:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:57.418 15:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:25:57.418 15:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:57.418 15:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:57.418 15:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:57.418 15:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:57.418 15:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:57.418 15:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:57.679 15:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:57.679 
15:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:25:57.939 15:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:25:57.939 15:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:25:57.939 15:21:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:58.200 15:21:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:25:59.144 15:21:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:25:59.144 15:21:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:59.144 15:21:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:59.144 15:21:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:59.405 15:21:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:59.405 15:21:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:59.405 
15:21:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:59.405 15:21:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:59.666 15:21:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:59.666 15:21:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:59.666 15:21:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:59.666 15:21:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:59.666 15:21:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:59.666 15:21:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:59.666 15:21:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:59.666 15:21:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:59.927 15:21:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:59.927 15:21:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:59.927 
15:21:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:59.927 15:21:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:00.189 15:21:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:00.189 15:21:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:00.189 15:21:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:00.189 15:21:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:00.189 15:21:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:00.189 15:21:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:26:00.189 15:21:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:00.450 15:21:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:00.450 15:21:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 
00:26:01.843 15:21:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:26:01.843 15:21:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:01.843 15:21:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:01.843 15:21:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:01.843 15:21:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:01.843 15:21:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:01.843 15:21:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:01.843 15:21:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:01.843 15:21:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:01.843 15:21:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:01.843 15:21:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:01.843 15:21:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:26:02.145 15:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:02.145 15:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:02.145 15:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:02.145 15:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:02.145 15:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:02.145 15:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:02.145 15:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:02.145 15:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:02.406 15:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:02.406 15:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:02.406 15:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:02.406 15:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").accessible' 00:26:02.667 15:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:02.667 15:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:26:02.667 15:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:02.667 15:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:26:02.929 15:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:26:03.871 15:21:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:26:03.872 15:21:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:03.872 15:21:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:03.872 15:21:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:04.133 15:21:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:04.133 15:21:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:04.133 15:21:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:04.133 15:21:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:04.393 15:21:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:04.393 15:21:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:04.393 15:21:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:04.393 15:21:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:04.393 15:21:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:04.393 15:21:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:04.393 15:21:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:04.393 15:21:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:04.653 15:21:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:04.653 15:21:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:04.653 15:21:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:04.653 15:21:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:04.913 15:21:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:04.913 15:21:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:04.913 15:21:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:04.913 15:21:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:04.913 15:21:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:04.913 15:21:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:26:04.913 15:21:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:05.173 15:21:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:05.433 15:21:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:26:06.375 15:21:58 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:26:06.376 15:21:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:06.376 15:21:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:06.376 15:21:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:06.376 15:21:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:06.376 15:21:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:06.376 15:21:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:06.376 15:21:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:06.637 15:21:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:06.637 15:21:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:06.637 15:21:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:06.637 15:21:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 
00:26:06.897 15:21:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:06.897 15:21:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:06.897 15:21:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:06.897 15:21:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:06.897 15:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:06.897 15:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:06.897 15:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:06.897 15:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:07.157 15:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:07.157 15:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:07.157 15:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:07.157 15:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").accessible' 00:26:07.420 15:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:07.420 15:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 376889 00:26:07.420 15:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 376889 ']' 00:26:07.420 15:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 376889 00:26:07.420 15:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:26:07.420 15:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:07.420 15:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 376889 00:26:07.420 15:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:26:07.420 15:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:26:07.420 15:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 376889' 00:26:07.420 killing process with pid 376889 00:26:07.420 15:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 376889 00:26:07.420 15:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 376889 00:26:07.420 Connection closed with partial response: 00:26:07.420 00:26:07.420 00:26:07.420 15:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 376889 00:26:07.420 15:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:07.420 
[2024-07-25 15:21:31.940085] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
00:26:07.420 [2024-07-25 15:21:31.940144] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid376889 ]
00:26:07.420 EAL: No free 2048 kB hugepages reported on node 1
00:26:07.420 [2024-07-25 15:21:31.989753] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:07.420 [2024-07-25 15:21:32.041637] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:26:07.420 Running I/O for 90 seconds...
00:26:07.420 [2024-07-25 15:21:45.256020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:76184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:07.420 [2024-07-25 15:21:45.256054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
00:26:07.420 [2024-07-25 15:21:45.256087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:76192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:07.420 [2024-07-25 15:21:45.256094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:26:07.420 [2024-07-25 15:21:45.256105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:76200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:07.420 [2024-07-25 15:21:45.256111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0005 p:0 m:0 dnr:0
00:26:07.420 [2024-07-25 15:21:45.256121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:76208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:07.420 [2024-07-25 15:21:45.256127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:26:07.420 [2024-07-25 15:21:45.256137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:76216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:07.420 [2024-07-25 15:21:45.256142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0007 p:0 m:0 dnr:0
00:26:07.420 [2024-07-25 15:21:45.256152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:76224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:07.420 [2024-07-25 15:21:45.256158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
00:26:07.420 [2024-07-25 15:21:45.256168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:76232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:07.420 [2024-07-25 15:21:45.256173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
00:26:07.420 [2024-07-25 15:21:45.256184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:76240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:07.420 [2024-07-25 15:21:45.256189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:26:07.420 [2024-07-25 15:21:45.256459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:76248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:07.420 [2024-07-25 15:21:45.256468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:26:07.420 [2024-07-25 15:21:45.256481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:76256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:07.420 [2024-07-25 15:21:45.256488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:000c p:0 m:0 dnr:0
00:26:07.420 [2024-07-25 15:21:45.256501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:76264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:07.420 [2024-07-25 15:21:45.256514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:26:07.420 [2024-07-25 15:21:45.256525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:76272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:07.420 [2024-07-25 15:21:45.256531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:26:07.420 [2024-07-25 15:21:45.256542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:76280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:07.420 [2024-07-25 15:21:45.256547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:26:07.420 [2024-07-25 15:21:45.256558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:76288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:07.420 [2024-07-25 15:21:45.256563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:26:07.420 [2024-07-25 15:21:45.256574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:76296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:07.420 [2024-07-25 15:21:45.256580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:26:07.420 [2024-07-25 15:21:45.256591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:76304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:07.420 [2024-07-25 15:21:45.256596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:26:07.420 [2024-07-25 15:21:45.256882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:76312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:07.420 [2024-07-25 15:21:45.256889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:26:07.420 [2024-07-25 15:21:45.256901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:75744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:07.420 [2024-07-25 15:21:45.256906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
00:26:07.420 [2024-07-25 15:21:45.256917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:75752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:07.420 [2024-07-25 15:21:45.256923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
00:26:07.420 [2024-07-25 15:21:45.256935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:75760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:07.420 [2024-07-25 15:21:45.256941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0016 p:0 m:0 dnr:0
00:26:07.420 [2024-07-25 15:21:45.256952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:75768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:07.420 [2024-07-25 15:21:45.256957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:26:07.420 [2024-07-25 15:21:45.256969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:75776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:07.420 [2024-07-25 15:21:45.256974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
00:26:07.420 [2024-07-25 15:21:45.256985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:75784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:07.420 [2024-07-25 15:21:45.256991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0019 p:0 m:0 dnr:0
00:26:07.420 [2024-07-25 15:21:45.257005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:75792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:07.420 [2024-07-25 15:21:45.257011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:001a p:0 m:0 dnr:0
00:26:07.420 [2024-07-25 15:21:45.257022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:75800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:07.420 [2024-07-25 15:21:45.257027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:26:07.420 [2024-07-25 15:21:45.257038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:76320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:07.420 [2024-07-25 15:21:45.257043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:26:07.420 [2024-07-25 15:21:45.257054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:76328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:07.420 [2024-07-25 15:21:45.257060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:001d p:0 m:0 dnr:0
00:26:07.420 [2024-07-25 15:21:45.257072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:76336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:07.421 [2024-07-25 15:21:45.257077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:001e p:0 m:0 dnr:0
00:26:07.421 [2024-07-25 15:21:45.257088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:76344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:07.421 [2024-07-25 15:21:45.257093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:26:07.421 [2024-07-25 15:21:45.257105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:76352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:07.421 [2024-07-25 15:21:45.257110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0020 p:0 m:0 dnr:0
00:26:07.421 [2024-07-25 15:21:45.257121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:76360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:07.421 [2024-07-25 15:21:45.257127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:07.421 [2024-07-25 15:21:45.257138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:76368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:07.421 [2024-07-25 15:21:45.257143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:07.421 [2024-07-25 15:21:45.258128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:76376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:07.421 [2024-07-25 15:21:45.258135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:26:07.421 [2024-07-25 15:21:45.258148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:76384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:07.421 [2024-07-25 15:21:45.258154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:26:07.421 [2024-07-25 15:21:45.258166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:76392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:07.421 [2024-07-25 15:21:45.258171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:26:07.421 [2024-07-25 15:21:45.258186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:76400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:07.421 [2024-07-25 15:21:45.258191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:26:07.421 [2024-07-25 15:21:45.258207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:76408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:07.421 [2024-07-25 15:21:45.258213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:26:07.421 [2024-07-25 15:21:45.258225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:76416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:07.421 [2024-07-25 15:21:45.258230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:26:07.421 [2024-07-25 15:21:45.258242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:76424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:07.421 [2024-07-25 15:21:45.258249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0029 p:0 m:0 dnr:0
00:26:07.421 [2024-07-25 15:21:45.258261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:76432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:07.421 [2024-07-25 15:21:45.258267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:26:07.421 [2024-07-25 15:21:45.258279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:76440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:07.421 [2024-07-25 15:21:45.258284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:002b p:0 m:0 dnr:0
00:26:07.421 [2024-07-25 15:21:45.258297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:76448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:07.421 [2024-07-25 15:21:45.258302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:26:07.421 [2024-07-25 15:21:45.258315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:76456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:07.421 [2024-07-25 15:21:45.258321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:26:07.421 [2024-07-25 15:21:45.258333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:76464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:07.421 [2024-07-25 15:21:45.258338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:26:07.421 [2024-07-25 15:21:45.258350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:76472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:07.421 [2024-07-25 15:21:45.258357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:26:07.421 [2024-07-25 15:21:45.258369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:76480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:07.421 [2024-07-25 15:21:45.258374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0030 p:0 m:0 dnr:0
00:26:07.421 [2024-07-25 15:21:45.258387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:76488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:07.421 [2024-07-25 15:21:45.258392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:26:07.421 [2024-07-25 15:21:45.258405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:76496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:07.421 [2024-07-25 15:21:45.258412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:26:07.421 [2024-07-25 15:21:45.258425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:76504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:07.421 [2024-07-25 15:21:45.258430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
00:26:07.421 [2024-07-25 15:21:45.258474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:76512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:07.421 [2024-07-25 15:21:45.258481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:26:07.421 [2024-07-25 15:21:45.258494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:76520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:07.421 [2024-07-25 15:21:45.258500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:26:07.421 [2024-07-25 15:21:45.258514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:76528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:07.421 [2024-07-25 15:21:45.258519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0036 p:0 m:0 dnr:0
00:26:07.421 [2024-07-25 15:21:45.258533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:76536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:07.421 [2024-07-25 15:21:45.258538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:26:07.421 [2024-07-25 15:21:45.258552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:76544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:07.421 [2024-07-25 15:21:45.258557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
00:26:07.421 [2024-07-25 15:21:45.258570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:76552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:07.421 [2024-07-25 15:21:45.258576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0039 p:0 m:0 dnr:0
00:26:07.421 [2024-07-25 15:21:45.258589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:76560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:07.421 [2024-07-25 15:21:45.258594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:003a p:0 m:0 dnr:0
00:26:07.421 [2024-07-25 15:21:45.258607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:76568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:07.421 [2024-07-25 15:21:45.258613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:26:07.421 [2024-07-25 15:21:45.258626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:75808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:07.421 [2024-07-25 15:21:45.258632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:26:07.421 [2024-07-25 15:21:45.258645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:75816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:07.421 [2024-07-25 15:21:45.258650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:003d p:0 m:0 dnr:0
00:26:07.421 [2024-07-25 15:21:45.258664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:75824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:07.421 [2024-07-25 15:21:45.258671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:003e p:0 m:0 dnr:0
00:26:07.421 [2024-07-25 15:21:45.258685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:75832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:07.421 [2024-07-25 15:21:45.258690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:26:07.421 [2024-07-25 15:21:45.258703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:75840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:07.421 [2024-07-25 15:21:45.258708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
00:26:07.421 [2024-07-25 15:21:45.258721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:75848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:07.421 [2024-07-25 15:21:45.258727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:07.421 [2024-07-25 15:21:45.258740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:75856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:07.421 [2024-07-25 15:21:45.258745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:07.421 [2024-07-25 15:21:45.258759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:76576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:07.422 [2024-07-25 15:21:45.258764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:26:07.422 [2024-07-25 15:21:45.258777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:76584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:07.422 [2024-07-25 15:21:45.258783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:26:07.422 [2024-07-25 15:21:45.258796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:76592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:07.422 [2024-07-25 15:21:45.258801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:26:07.422 [2024-07-25 15:21:45.258814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:76600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:07.422 [2024-07-25 15:21:45.258819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:26:07.422 [2024-07-25 15:21:45.258833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:76608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:07.422 [2024-07-25 15:21:45.258838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:26:07.422 [2024-07-25 15:21:45.258851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:76616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:07.422 [2024-07-25 15:21:45.258856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:26:07.422 [2024-07-25 15:21:45.258869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:76624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:07.422 [2024-07-25 15:21:45.258875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:26:07.422 [2024-07-25 15:21:45.258933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:76632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:07.422 [2024-07-25 15:21:45.258941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:26:07.422 [2024-07-25 15:21:45.258958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:76640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:07.422 [2024-07-25 15:21:45.258963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:26:07.422 [2024-07-25 15:21:45.258978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:76648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:07.422 [2024-07-25 15:21:45.258984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:004c p:0 m:0 dnr:0
00:26:07.422 [2024-07-25 15:21:45.258998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:76656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:07.422 [2024-07-25 15:21:45.259004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:26:07.422 [2024-07-25 15:21:45.259019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:76664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:07.422 [2024-07-25 15:21:45.259024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:26:07.422 [2024-07-25 15:21:45.259038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:76672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:07.422 [2024-07-25 15:21:45.259044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:26:07.422 [2024-07-25 15:21:45.259058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:76680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:07.422 [2024-07-25 15:21:45.259063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:26:07.422 [2024-07-25 15:21:45.259078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:76688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:07.422 [2024-07-25 15:21:45.259083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:26:07.422 [2024-07-25 15:21:45.259098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:75864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:07.422 [2024-07-25 15:21:45.259104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:26:07.422 [2024-07-25 15:21:45.259118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:75872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:07.422 [2024-07-25 15:21:45.259124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:26:07.422 [2024-07-25 15:21:45.259138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:75880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:07.422 [2024-07-25 15:21:45.259145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:26:07.422 [2024-07-25 15:21:45.259159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:75888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:07.422 [2024-07-25 15:21:45.259164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:26:07.422 [2024-07-25 15:21:45.259179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:75896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:07.422 [2024-07-25 15:21:45.259186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:26:07.422 [2024-07-25 15:21:45.259205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:75904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:07.422 [2024-07-25 15:21:45.259211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:26:07.422 [2024-07-25 15:21:45.259225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:75912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:07.422 [2024-07-25 15:21:45.259231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:26:07.422 [2024-07-25 15:21:45.259245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:75920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:07.422 [2024-07-25 15:21:45.259251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:26:07.422 [2024-07-25 15:21:45.259266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:75928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:07.422 [2024-07-25 15:21:45.259271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:26:07.422 [2024-07-25 15:21:45.259317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:75936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:07.422 [2024-07-25 15:21:45.259323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:26:07.422 [2024-07-25 15:21:45.259339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:75944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:07.422 [2024-07-25 15:21:45.259345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:26:07.422 [2024-07-25 15:21:45.259361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:75952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:07.422 [2024-07-25 15:21:45.259366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:26:07.422 [2024-07-25 15:21:45.259381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:75960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:07.422 [2024-07-25 15:21:45.259386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:26:07.422 [2024-07-25 15:21:45.259402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:75968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:07.422 [2024-07-25 15:21:45.259408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:26:07.422 [2024-07-25 15:21:45.259424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:75976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:07.422 [2024-07-25 15:21:45.259429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:26:07.422 [2024-07-25 15:21:45.259444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:75984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:07.422 [2024-07-25 15:21:45.259449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:07.422 [2024-07-25 15:21:45.259582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:76696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:07.422 [2024-07-25 15:21:45.259589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:07.422 [2024-07-25 15:21:45.259606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:75992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:07.422 [2024-07-25 15:21:45.259612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:26:07.422 [2024-07-25 15:21:45.259628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:76000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:07.422 [2024-07-25 15:21:45.259634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:26:07.422 [2024-07-25 15:21:45.259650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:76008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:07.422 [2024-07-25 15:21:45.259654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:26:07.422 [2024-07-25 15:21:45.259671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:76016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:07.422 [2024-07-25 15:21:45.259676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:26:07.422 [2024-07-25 15:21:45.259692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:76024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:07.422 [2024-07-25 15:21:45.259697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:26:07.422 [2024-07-25 15:21:45.259713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:76032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:07.423 [2024-07-25 15:21:45.259719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:26:07.423 [2024-07-25 15:21:45.259735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:76040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:07.423 [2024-07-25 15:21:45.259740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:26:07.423 [2024-07-25 15:21:45.259756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:76048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:07.423 [2024-07-25 15:21:45.259761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:26:07.423 [2024-07-25 15:21:45.259777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:76704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:07.423 [2024-07-25 15:21:45.259783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:26:07.423 [2024-07-25 15:21:45.259799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:76712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:07.423 [2024-07-25 15:21:45.259804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:006c p:0 m:0 dnr:0
00:26:07.423 [2024-07-25 15:21:45.259820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:76720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:07.423 [2024-07-25 15:21:45.259826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:26:07.423 [2024-07-25 15:21:45.259842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:76728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:07.423 [2024-07-25 15:21:45.259847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:26:07.423 [2024-07-25 15:21:45.259863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:76736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:07.423 [2024-07-25 15:21:45.259870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:26:07.423 [2024-07-25 15:21:45.259886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:76744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:07.423 [2024-07-25 15:21:45.259891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:26:07.423 [2024-07-25 15:21:45.259907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:76752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:07.423 [2024-07-25 15:21:45.259913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:26:07.423 [2024-07-25 15:21:45.259958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:76760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:07.423 [2024-07-25 15:21:45.259965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:26:07.423 [2024-07-25 15:21:45.259983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:76056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:07.423 [2024-07-25 15:21:45.259989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:26:07.423 [2024-07-25 15:21:45.260006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:76064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:07.423 [2024-07-25 15:21:45.260011] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:07.423 [2024-07-25 15:21:45.260028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:76072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.423 [2024-07-25 15:21:45.260033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:07.423 [2024-07-25 15:21:45.260051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:76080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.423 [2024-07-25 15:21:45.260056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:07.423 [2024-07-25 15:21:45.260073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:76088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.423 [2024-07-25 15:21:45.260078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:07.423 [2024-07-25 15:21:45.260095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:76096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.423 [2024-07-25 15:21:45.260101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:07.423 [2024-07-25 15:21:45.260118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:76104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.423 [2024-07-25 15:21:45.260123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:07.423 [2024-07-25 15:21:45.260140] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:76112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.423 [2024-07-25 15:21:45.260146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:07.423 [2024-07-25 15:21:45.260163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:76120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.423 [2024-07-25 15:21:45.260169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:07.423 [2024-07-25 15:21:45.260187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:76128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.423 [2024-07-25 15:21:45.260192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:07.423 [2024-07-25 15:21:45.260213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:76136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.423 [2024-07-25 15:21:45.260218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:07.423 [2024-07-25 15:21:45.260235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:76144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.423 [2024-07-25 15:21:45.260241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:07.423 [2024-07-25 15:21:45.260258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:76152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.423 [2024-07-25 15:21:45.260263] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:07.423 [2024-07-25 15:21:45.260280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:76160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.423 [2024-07-25 15:21:45.260285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:07.423 [2024-07-25 15:21:45.260303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:76168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.423 [2024-07-25 15:21:45.260309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.423 [2024-07-25 15:21:45.260326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:76176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.423 [2024-07-25 15:21:45.260331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:07.423 [2024-07-25 15:21:57.358721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:28248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.423 [2024-07-25 15:21:57.358758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:07.423 [2024-07-25 15:21:57.358790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:28264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.423 [2024-07-25 15:21:57.358797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:07.423 [2024-07-25 15:21:57.359546] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:27688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.423 [2024-07-25 15:21:57.359559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:07.423 [2024-07-25 15:21:57.359572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:28280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.423 [2024-07-25 15:21:57.359578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:07.423 [2024-07-25 15:21:57.359588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:28296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.423 [2024-07-25 15:21:57.359594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:07.423 [2024-07-25 15:21:57.359608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.423 [2024-07-25 15:21:57.359614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:07.423 [2024-07-25 15:21:57.359625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:28328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.423 [2024-07-25 15:21:57.359631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:07.423 [2024-07-25 15:21:57.359642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:28344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.423 [2024-07-25 15:21:57.359647] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:07.423 [2024-07-25 15:21:57.360043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:28360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.423 [2024-07-25 15:21:57.360053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:07.423 [2024-07-25 15:21:57.360064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:28376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.423 [2024-07-25 15:21:57.360070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:07.423 [2024-07-25 15:21:57.360080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:28392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.424 [2024-07-25 15:21:57.360086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:07.424 [2024-07-25 15:21:57.360096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:28408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.424 [2024-07-25 15:21:57.360101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:07.424 [2024-07-25 15:21:57.360112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:28424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.424 [2024-07-25 15:21:57.360118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:07.424 [2024-07-25 15:21:57.360128] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:27736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.424 [2024-07-25 15:21:57.360133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:07.424 [2024-07-25 15:21:57.360144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:27768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.424 [2024-07-25 15:21:57.360150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:07.424 [2024-07-25 15:21:57.360162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:27800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.424 [2024-07-25 15:21:57.360168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:07.424 [2024-07-25 15:21:57.360178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:27840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.424 [2024-07-25 15:21:57.360184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:07.424 [2024-07-25 15:21:57.360196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:28232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.424 [2024-07-25 15:21:57.360206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:07.424 [2024-07-25 15:21:57.360217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:28440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.424 [2024-07-25 15:21:57.360223] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:07.424 [2024-07-25 15:21:57.360234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:28456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.424 [2024-07-25 15:21:57.360239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:07.424 [2024-07-25 15:21:57.360250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:28472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.424 [2024-07-25 15:21:57.360255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:07.424 [2024-07-25 15:21:57.360266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:28488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.424 [2024-07-25 15:21:57.360271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:07.424 [2024-07-25 15:21:57.360282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:28504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.424 [2024-07-25 15:21:57.360289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:07.424 Received shutdown signal, test time was about 25.663903 seconds 00:26:07.424 00:26:07.424 Latency(us) 00:26:07.424 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:07.424 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:26:07.424 Verification LBA range: start 0x0 length 0x4000 00:26:07.424 Nvme0n1 : 25.66 11140.17 43.52 0.00 0.00 11472.04 264.53 3019898.88 
00:26:07.424 =================================================================================================================== 00:26:07.424 Total : 11140.17 43.52 0.00 0.00 11472.04 264.53 3019898.88 00:26:07.424 15:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:07.684 15:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:26:07.684 15:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:07.685 15:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:26:07.685 15:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:07.685 15:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:26:07.685 15:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:07.685 15:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:26:07.685 15:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:07.685 15:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:07.685 rmmod nvme_tcp 00:26:07.685 rmmod nvme_fabrics 00:26:07.685 rmmod nvme_keyring 00:26:07.685 15:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:07.685 15:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:26:07.685 15:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:26:07.685 15:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- nvmf/common.sh@489 -- # '[' -n 376525 ']' 00:26:07.685 15:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 376525 00:26:07.685 15:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 376525 ']' 00:26:07.685 15:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 376525 00:26:07.685 15:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:26:07.685 15:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:07.685 15:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 376525 00:26:07.685 15:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:07.685 15:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:07.685 15:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 376525' 00:26:07.685 killing process with pid 376525 00:26:07.685 15:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 376525 00:26:07.685 15:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 376525 00:26:07.945 15:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:07.945 15:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:07.945 15:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:07.945 15:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:07.945 15:21:59 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:07.945 15:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:07.945 15:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:07.945 15:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:10.493 15:22:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:10.493 00:26:10.493 real 0m39.106s 00:26:10.493 user 1m41.353s 00:26:10.493 sys 0m10.592s 00:26:10.493 15:22:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:10.493 15:22:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:10.493 ************************************ 00:26:10.493 END TEST nvmf_host_multipath_status 00:26:10.493 ************************************ 00:26:10.493 15:22:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:26:10.493 15:22:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:10.493 15:22:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:10.493 15:22:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.493 ************************************ 00:26:10.494 START TEST nvmf_discovery_remove_ifc 00:26:10.494 ************************************ 00:26:10.494 15:22:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:26:10.494 * Looking for test storage... 
00:26:10.494 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:10.494 15:22:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:10.494 15:22:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:26:10.494 15:22:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:10.494 15:22:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:10.494 15:22:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:10.494 15:22:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:10.494 15:22:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:10.494 15:22:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:10.494 15:22:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:10.494 15:22:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:10.494 15:22:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:10.494 15:22:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:10.494 15:22:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:10.494 15:22:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:10.494 15:22:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:26:10.494 15:22:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:10.494 15:22:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:10.494 15:22:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:10.494 15:22:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:10.494 15:22:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:10.494 15:22:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:10.494 15:22:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:10.494 15:22:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:10.494 15:22:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:10.494 15:22:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:10.494 15:22:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:26:10.494 15:22:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:10.494 15:22:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:26:10.494 15:22:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:10.494 15:22:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:10.494 15:22:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:10.494 15:22:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:10.494 15:22:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:10.494 15:22:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:10.494 15:22:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:10.494 15:22:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:10.494 15:22:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:26:10.494 15:22:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:26:10.494 15:22:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # 
discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:26:10.494 15:22:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:26:10.494 15:22:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:26:10.494 15:22:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:26:10.494 15:22:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:26:10.494 15:22:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:10.494 15:22:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:10.494 15:22:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:10.494 15:22:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:10.494 15:22:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:10.494 15:22:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:10.494 15:22:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:10.494 15:22:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:10.494 15:22:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:10.494 15:22:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:10.494 15:22:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:26:10.494 15:22:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # 
set +x 00:26:17.085 15:22:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:17.085 15:22:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:26:17.085 15:22:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:17.085 15:22:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:17.085 15:22:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:17.085 15:22:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:17.085 15:22:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:17.085 15:22:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:26:17.085 15:22:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:17.085 15:22:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:26:17.085 15:22:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:26:17.085 15:22:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:26:17.085 15:22:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:26:17.085 15:22:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:26:17.085 15:22:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:26:17.085 15:22:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:17.085 15:22:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:17.085 15:22:09 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:17.085 15:22:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:17.085 15:22:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:17.085 15:22:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:17.085 15:22:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:17.085 15:22:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:17.085 15:22:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:17.085 15:22:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:17.085 15:22:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:17.085 15:22:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:17.086 15:22:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:17.086 15:22:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:17.086 15:22:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:17.086 15:22:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:17.086 15:22:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:17.086 15:22:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for 
pci in "${pci_devs[@]}" 00:26:17.086 15:22:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:17.086 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:17.086 15:22:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:17.086 15:22:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:17.086 15:22:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:17.086 15:22:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:17.086 15:22:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:17.086 15:22:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:17.086 15:22:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:17.086 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:17.086 15:22:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:17.086 15:22:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:17.086 15:22:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:17.086 15:22:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:17.086 15:22:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:17.086 15:22:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:17.086 15:22:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:17.086 15:22:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:17.086 15:22:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:17.086 15:22:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:17.086 15:22:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:17.086 15:22:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:17.086 15:22:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:17.086 15:22:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:17.086 15:22:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:17.086 15:22:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:17.086 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:17.086 15:22:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:17.086 15:22:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:17.086 15:22:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:17.086 15:22:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:17.086 15:22:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:17.086 15:22:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:17.086 15:22:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:17.086 15:22:09 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:17.086 15:22:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:17.086 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:17.086 15:22:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:17.086 15:22:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:17.086 15:22:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:26:17.086 15:22:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:17.086 15:22:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:17.086 15:22:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:17.086 15:22:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:17.086 15:22:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:17.086 15:22:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:17.086 15:22:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:17.086 15:22:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:17.086 15:22:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:17.086 15:22:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:17.086 15:22:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:17.086 15:22:09 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:17.086 15:22:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:17.086 15:22:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:17.086 15:22:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:17.086 15:22:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:17.086 15:22:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:17.086 15:22:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:17.086 15:22:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:17.086 15:22:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:17.086 15:22:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:17.347 15:22:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:17.347 15:22:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:17.347 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:17.347 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.496 ms 00:26:17.347 00:26:17.347 --- 10.0.0.2 ping statistics --- 00:26:17.347 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:17.347 rtt min/avg/max/mdev = 0.496/0.496/0.496/0.000 ms 00:26:17.347 15:22:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:17.347 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:17.347 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.390 ms 00:26:17.347 00:26:17.347 --- 10.0.0.1 ping statistics --- 00:26:17.347 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:17.347 rtt min/avg/max/mdev = 0.390/0.390/0.390/0.000 ms 00:26:17.347 15:22:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:17.347 15:22:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:26:17.347 15:22:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:17.347 15:22:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:17.347 15:22:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:17.347 15:22:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:17.347 15:22:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:17.347 15:22:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:17.347 15:22:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:17.348 15:22:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:26:17.348 15:22:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:17.348 15:22:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:17.348 15:22:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:17.348 15:22:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=386432 00:26:17.348 15:22:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 386432 00:26:17.348 15:22:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:26:17.348 15:22:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 386432 ']' 00:26:17.348 15:22:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:17.348 15:22:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:17.348 15:22:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:17.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:17.348 15:22:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:17.348 15:22:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:17.348 [2024-07-25 15:22:09.429376] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:26:17.348 [2024-07-25 15:22:09.429428] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:17.348 EAL: No free 2048 kB hugepages reported on node 1 00:26:17.348 [2024-07-25 15:22:09.512666] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:17.609 [2024-07-25 15:22:09.584731] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:17.609 [2024-07-25 15:22:09.584782] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:17.609 [2024-07-25 15:22:09.584790] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:17.609 [2024-07-25 15:22:09.584797] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:17.609 [2024-07-25 15:22:09.584802] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:17.609 [2024-07-25 15:22:09.584824] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:18.182 15:22:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:18.182 15:22:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:26:18.182 15:22:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:18.182 15:22:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:18.182 15:22:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:18.182 15:22:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:18.182 15:22:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:26:18.182 15:22:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.182 15:22:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:18.183 [2024-07-25 15:22:10.287598] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:18.183 [2024-07-25 15:22:10.295814] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:26:18.183 null0 00:26:18.183 [2024-07-25 15:22:10.327786] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:18.183 15:22:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:18.183 15:22:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=386719 00:26:18.183 15:22:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 386719 /tmp/host.sock 
00:26:18.183 15:22:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:26:18.183 15:22:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 386719 ']' 00:26:18.183 15:22:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:26:18.183 15:22:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:18.183 15:22:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:26:18.183 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:26:18.183 15:22:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:18.183 15:22:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:18.443 [2024-07-25 15:22:10.403371] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:26:18.443 [2024-07-25 15:22:10.403432] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid386719 ] 00:26:18.443 EAL: No free 2048 kB hugepages reported on node 1 00:26:18.443 [2024-07-25 15:22:10.467026] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:18.443 [2024-07-25 15:22:10.541605] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:19.016 15:22:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:19.016 15:22:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:26:19.016 15:22:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:19.016 15:22:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:26:19.016 15:22:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.016 15:22:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:19.016 15:22:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.016 15:22:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:26:19.016 15:22:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.016 15:22:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:19.276 15:22:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.276 15:22:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:26:19.276 15:22:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.276 15:22:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:20.218 [2024-07-25 15:22:12.307172] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:20.218 [2024-07-25 15:22:12.307193] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:20.218 [2024-07-25 15:22:12.307210] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:20.479 [2024-07-25 15:22:12.436647] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:26:20.479 [2024-07-25 15:22:12.662871] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:26:20.479 [2024-07-25 15:22:12.662922] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:26:20.479 [2024-07-25 15:22:12.662945] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:26:20.479 [2024-07-25 15:22:12.662959] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:20.479 [2024-07-25 15:22:12.662980] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:20.479 15:22:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:26:20.479 15:22:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:26:20.479 [2024-07-25 15:22:12.666544] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x10527f0 was disconnected and freed. delete nvme_qpair. 00:26:20.479 15:22:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:20.762 15:22:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:20.762 15:22:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:20.762 15:22:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.762 15:22:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:20.762 15:22:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:20.762 15:22:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:20.762 15:22:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.762 15:22:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:26:20.762 15:22:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:26:20.762 15:22:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:26:20.762 15:22:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:26:20.762 15:22:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:20.762 
15:22:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:20.762 15:22:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:20.762 15:22:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.762 15:22:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:20.762 15:22:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:20.763 15:22:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:20.763 15:22:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.763 15:22:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:20.763 15:22:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:21.705 15:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:21.705 15:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:21.705 15:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:21.705 15:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:21.705 15:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.705 15:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:21.705 15:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:21.966 15:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.966 15:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:21.966 15:22:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:22.909 15:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:22.909 15:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:22.909 15:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:22.909 15:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.909 15:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:22.909 15:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:22.909 15:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:22.909 15:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.909 15:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:22.909 15:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:23.852 15:22:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:23.852 15:22:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:23.852 15:22:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:23.852 15:22:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:26:23.852 15:22:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:23.852 15:22:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:23.852 15:22:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:23.852 15:22:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.112 15:22:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:24.112 15:22:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:25.055 15:22:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:25.055 15:22:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:25.055 15:22:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:25.055 15:22:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:25.055 15:22:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.055 15:22:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:25.055 15:22:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:25.055 15:22:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.055 15:22:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:25.055 15:22:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:26.000 [2024-07-25 15:22:18.103314] 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:26:26.000 [2024-07-25 15:22:18.103356] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:26.000 [2024-07-25 15:22:18.103369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.000 [2024-07-25 15:22:18.103379] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:26.000 [2024-07-25 15:22:18.103386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.000 [2024-07-25 15:22:18.103395] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:26.000 [2024-07-25 15:22:18.103402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.000 [2024-07-25 15:22:18.103410] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:26.000 [2024-07-25 15:22:18.103417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.000 [2024-07-25 15:22:18.103425] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:26.000 [2024-07-25 15:22:18.103433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.000 [2024-07-25 15:22:18.103440] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: 
The recv state of tqpair=0x1019060 is same with the state(5) to be set 00:26:26.000 [2024-07-25 15:22:18.113335] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1019060 (9): Bad file descriptor 00:26:26.000 15:22:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:26.000 15:22:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:26.000 15:22:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:26.000 15:22:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.000 15:22:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:26.000 15:22:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:26.000 15:22:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:26.000 [2024-07-25 15:22:18.123374] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:27.387 [2024-07-25 15:22:19.153246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:26:27.387 [2024-07-25 15:22:19.153292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1019060 with addr=10.0.0.2, port=4420 00:26:27.387 [2024-07-25 15:22:19.153307] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1019060 is same with the state(5) to be set 00:26:27.387 [2024-07-25 15:22:19.153340] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1019060 (9): Bad file descriptor 00:26:27.387 [2024-07-25 15:22:19.153402] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:26:27.387 [2024-07-25 15:22:19.153426] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:27.387 [2024-07-25 15:22:19.153435] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:27.387 [2024-07-25 15:22:19.153445] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:27.387 [2024-07-25 15:22:19.153464] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:27.387 [2024-07-25 15:22:19.153474] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:27.387 15:22:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.387 15:22:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:27.387 15:22:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:28.331 [2024-07-25 15:22:20.155858] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:28.331 [2024-07-25 15:22:20.155889] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:28.331 [2024-07-25 15:22:20.155897] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:28.331 [2024-07-25 15:22:20.155906] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:26:28.331 [2024-07-25 15:22:20.155922] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:28.331 [2024-07-25 15:22:20.155943] bdev_nvme.c:6762:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:26:28.331 [2024-07-25 15:22:20.155970] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:28.331 [2024-07-25 15:22:20.155981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.331 [2024-07-25 15:22:20.155993] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:28.331 [2024-07-25 15:22:20.156001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.331 [2024-07-25 15:22:20.156009] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:28.331 [2024-07-25 15:22:20.156023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.331 [2024-07-25 15:22:20.156032] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:28.331 [2024-07-25 15:22:20.156039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.331 [2024-07-25 15:22:20.156047] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:28.331 [2024-07-25 15:22:20.156054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.331 [2024-07-25 15:22:20.156062] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: 
[nqn.2014-08.org.nvmexpress.discovery] in failed state. 00:26:28.331 [2024-07-25 15:22:20.156557] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10184c0 (9): Bad file descriptor 00:26:28.331 [2024-07-25 15:22:20.157569] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:26:28.331 [2024-07-25 15:22:20.157579] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:26:28.331 15:22:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:28.331 15:22:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:28.331 15:22:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:28.331 15:22:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.331 15:22:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:28.331 15:22:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:28.331 15:22:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:28.331 15:22:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.331 15:22:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:26:28.331 15:22:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:28.331 15:22:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:28.331 15:22:20 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:26:28.331 15:22:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:28.331 15:22:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:28.331 15:22:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:28.331 15:22:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.331 15:22:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:28.331 15:22:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:28.331 15:22:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:28.331 15:22:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.331 15:22:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:28.331 15:22:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:29.275 15:22:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:29.275 15:22:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:29.275 15:22:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:29.275 15:22:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.275 15:22:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:29.275 15:22:21 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:29.275 15:22:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:29.275 15:22:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.275 15:22:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:29.275 15:22:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:30.219 [2024-07-25 15:22:22.171693] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:30.219 [2024-07-25 15:22:22.171714] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:30.219 [2024-07-25 15:22:22.171729] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:30.219 [2024-07-25 15:22:22.302126] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:26:30.219 [2024-07-25 15:22:22.403322] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:26:30.219 [2024-07-25 15:22:22.403364] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:26:30.219 [2024-07-25 15:22:22.403385] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:26:30.219 [2024-07-25 15:22:22.403399] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:26:30.219 [2024-07-25 15:22:22.403407] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:30.480 [2024-07-25 15:22:22.410487] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x101fe50 was disconnected and freed. delete nvme_qpair. 
00:26:30.480 15:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:30.480 15:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:30.480 15:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:30.480 15:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.480 15:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:30.480 15:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:30.480 15:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:30.480 15:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.480 15:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:26:30.480 15:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:26:30.480 15:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 386719 00:26:30.480 15:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 386719 ']' 00:26:30.480 15:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 386719 00:26:30.480 15:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:26:30.480 15:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:30.480 15:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 386719 00:26:30.480 
15:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:30.480 15:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:30.480 15:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 386719' 00:26:30.480 killing process with pid 386719 00:26:30.480 15:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 386719 00:26:30.480 15:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 386719 00:26:30.742 15:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:26:30.742 15:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:30.742 15:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:26:30.742 15:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:30.742 15:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:26:30.742 15:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:30.742 15:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:30.742 rmmod nvme_tcp 00:26:30.742 rmmod nvme_fabrics 00:26:30.742 rmmod nvme_keyring 00:26:30.742 15:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:30.742 15:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:26:30.742 15:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:26:30.742 15:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 386432 ']' 00:26:30.742 15:22:22 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 386432 00:26:30.742 15:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 386432 ']' 00:26:30.742 15:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 386432 00:26:30.742 15:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:26:30.742 15:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:30.742 15:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 386432 00:26:30.742 15:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:26:30.742 15:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:26:30.742 15:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 386432' 00:26:30.742 killing process with pid 386432 00:26:30.742 15:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 386432 00:26:30.742 15:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 386432 00:26:30.742 15:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:30.742 15:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:30.742 15:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:30.742 15:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:30.742 15:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:30.742 15:22:22 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:30.742 15:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:30.742 15:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:33.290 15:22:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:33.290 00:26:33.290 real 0m22.868s 00:26:33.290 user 0m27.400s 00:26:33.290 sys 0m6.485s 00:26:33.290 15:22:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:33.290 15:22:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:33.290 ************************************ 00:26:33.290 END TEST nvmf_discovery_remove_ifc 00:26:33.290 ************************************ 00:26:33.290 15:22:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:26:33.290 15:22:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:33.290 15:22:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:33.290 15:22:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.290 ************************************ 00:26:33.290 START TEST nvmf_identify_kernel_target 00:26:33.290 ************************************ 00:26:33.290 15:22:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:26:33.290 * Looking for test storage... 
00:26:33.290 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:33.290 15:22:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:33.290 15:22:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:26:33.290 15:22:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:33.290 15:22:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:33.290 15:22:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:33.290 15:22:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:33.290 15:22:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:33.290 15:22:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:33.290 15:22:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:33.290 15:22:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:33.290 15:22:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:33.290 15:22:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:33.290 15:22:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:33.290 15:22:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:33.290 15:22:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:33.290 15:22:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:33.290 15:22:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:33.290 15:22:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:33.290 15:22:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:33.290 15:22:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:33.290 15:22:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:33.290 15:22:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:33.290 15:22:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:33.290 15:22:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:33.291 15:22:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:33.291 15:22:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:26:33.291 15:22:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:33.291 15:22:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:26:33.291 15:22:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:33.291 15:22:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:33.291 15:22:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:33.291 15:22:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:33.291 15:22:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:33.291 15:22:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:33.291 15:22:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:33.291 15:22:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:33.291 15:22:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:26:33.291 15:22:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:33.291 15:22:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM 
EXIT 00:26:33.291 15:22:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:33.291 15:22:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:33.291 15:22:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:33.291 15:22:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:33.291 15:22:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:33.291 15:22:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:33.291 15:22:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:33.291 15:22:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:33.291 15:22:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:26:33.291 15:22:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:26:39.883 15:22:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:39.883 15:22:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:26:39.883 15:22:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:39.883 15:22:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:39.883 15:22:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:39.883 15:22:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:39.883 15:22:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:26:39.883 15:22:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:26:39.883 15:22:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:39.883 15:22:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:26:39.883 15:22:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:26:39.883 15:22:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:26:39.883 15:22:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:26:39.883 15:22:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:26:39.883 15:22:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:26:39.883 15:22:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:39.883 15:22:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:39.883 15:22:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:39.884 15:22:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:39.884 15:22:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:39.884 15:22:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:39.884 15:22:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:39.884 15:22:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:39.884 15:22:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:39.884 15:22:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:39.884 15:22:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:39.884 15:22:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:39.884 15:22:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:39.884 15:22:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:39.884 15:22:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:39.884 15:22:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:39.884 15:22:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:39.884 15:22:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:39.884 15:22:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:39.884 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:39.884 15:22:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:39.884 15:22:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:39.884 15:22:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:39.884 15:22:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:39.884 15:22:31 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:39.884 15:22:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:39.884 15:22:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:39.884 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:39.884 15:22:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:39.884 15:22:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:39.884 15:22:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:39.884 15:22:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:39.884 15:22:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:39.884 15:22:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:39.884 15:22:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:39.884 15:22:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:39.884 15:22:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:39.884 15:22:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:39.884 15:22:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:39.884 15:22:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:39.884 15:22:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:39.884 15:22:31 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:39.884 15:22:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:39.884 15:22:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:39.884 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:39.884 15:22:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:39.884 15:22:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:39.884 15:22:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:39.884 15:22:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:39.884 15:22:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:39.884 15:22:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:39.884 15:22:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:39.884 15:22:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:39.884 15:22:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:39.884 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:39.884 15:22:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:39.884 15:22:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:39.884 15:22:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:26:39.884 
15:22:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:39.884 15:22:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:39.884 15:22:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:39.884 15:22:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:39.884 15:22:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:39.884 15:22:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:39.884 15:22:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:39.884 15:22:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:39.884 15:22:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:39.884 15:22:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:39.884 15:22:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:39.884 15:22:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:39.884 15:22:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:39.884 15:22:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:39.884 15:22:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:39.884 15:22:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:39.884 
15:22:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:39.884 15:22:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:39.884 15:22:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:39.884 15:22:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:40.147 15:22:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:40.147 15:22:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:40.147 15:22:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:40.147 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:40.147 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.667 ms 00:26:40.147 00:26:40.147 --- 10.0.0.2 ping statistics --- 00:26:40.147 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:40.147 rtt min/avg/max/mdev = 0.667/0.667/0.667/0.000 ms 00:26:40.147 15:22:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:40.147 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:40.147 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.538 ms 00:26:40.147 00:26:40.147 --- 10.0.0.1 ping statistics --- 00:26:40.147 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:40.147 rtt min/avg/max/mdev = 0.538/0.538/0.538/0.000 ms 00:26:40.147 15:22:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:40.147 15:22:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:26:40.147 15:22:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:40.147 15:22:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:40.147 15:22:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:40.147 15:22:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:40.147 15:22:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:40.147 15:22:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:40.147 15:22:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:40.147 15:22:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:26:40.147 15:22:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:26:40.147 15:22:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:26:40.147 15:22:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:40.147 15:22:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:40.147 15:22:32 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:40.147 15:22:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:40.147 15:22:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:40.147 15:22:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:40.147 15:22:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:40.147 15:22:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:40.147 15:22:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:40.147 15:22:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:26:40.147 15:22:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:26:40.147 15:22:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:26:40.147 15:22:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:26:40.147 15:22:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:40.147 15:22:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:40.147 15:22:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:26:40.147 15:22:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@639 -- # local block nvme 00:26:40.147 15:22:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:26:40.147 15:22:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:26:40.147 15:22:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:26:40.147 15:22:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:26:43.488 Waiting for block devices as requested 00:26:43.488 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:26:43.488 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:26:43.488 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:26:43.488 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:26:43.488 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:26:43.748 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:26:43.748 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:26:43.748 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:26:44.009 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:26:44.009 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:26:44.270 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:26:44.270 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:26:44.270 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:26:44.270 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:26:44.531 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:26:44.531 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:26:44.531 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:26:44.791 15:22:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:26:44.791 15:22:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:26:44.791 15:22:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 
00:26:44.791 15:22:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:26:44.791 15:22:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:26:44.791 15:22:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:26:44.791 15:22:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:26:44.791 15:22:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:26:44.791 15:22:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:26:44.791 No valid GPT data, bailing 00:26:44.792 15:22:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:26:44.792 15:22:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:26:44.792 15:22:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:26:44.792 15:22:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:26:44.792 15:22:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:26:44.792 15:22:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:44.792 15:22:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:44.792 15:22:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:26:44.792 15:22:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:26:44.792 15:22:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:26:44.792 15:22:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:26:44.792 15:22:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:26:44.792 15:22:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:26:44.792 15:22:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:26:44.792 15:22:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:26:44.792 15:22:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:26:44.792 15:22:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:26:45.053 15:22:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:26:45.053 00:26:45.053 Discovery Log Number of Records 2, Generation counter 2 00:26:45.053 =====Discovery Log Entry 0====== 00:26:45.053 trtype: tcp 00:26:45.053 adrfam: ipv4 00:26:45.053 subtype: current discovery subsystem 00:26:45.053 treq: not specified, sq flow control disable supported 00:26:45.053 portid: 1 00:26:45.053 trsvcid: 4420 00:26:45.053 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:26:45.053 traddr: 10.0.0.1 00:26:45.053 eflags: none 00:26:45.053 sectype: none 00:26:45.053 =====Discovery Log Entry 1====== 00:26:45.053 trtype: tcp 00:26:45.053 adrfam: ipv4 00:26:45.053 subtype: nvme subsystem 00:26:45.053 treq: not specified, sq flow control disable supported 00:26:45.053 portid: 1 
00:26:45.053 trsvcid: 4420 00:26:45.053 subnqn: nqn.2016-06.io.spdk:testnqn 00:26:45.053 traddr: 10.0.0.1 00:26:45.053 eflags: none 00:26:45.053 sectype: none 00:26:45.053 15:22:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:26:45.053 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:26:45.053 EAL: No free 2048 kB hugepages reported on node 1 00:26:45.053 ===================================================== 00:26:45.053 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:26:45.053 ===================================================== 00:26:45.053 Controller Capabilities/Features 00:26:45.053 ================================ 00:26:45.053 Vendor ID: 0000 00:26:45.053 Subsystem Vendor ID: 0000 00:26:45.053 Serial Number: b0b24b25d82ca271c6fc 00:26:45.053 Model Number: Linux 00:26:45.053 Firmware Version: 6.7.0-68 00:26:45.053 Recommended Arb Burst: 0 00:26:45.053 IEEE OUI Identifier: 00 00 00 00:26:45.053 Multi-path I/O 00:26:45.053 May have multiple subsystem ports: No 00:26:45.053 May have multiple controllers: No 00:26:45.053 Associated with SR-IOV VF: No 00:26:45.053 Max Data Transfer Size: Unlimited 00:26:45.053 Max Number of Namespaces: 0 00:26:45.053 Max Number of I/O Queues: 1024 00:26:45.053 NVMe Specification Version (VS): 1.3 00:26:45.053 NVMe Specification Version (Identify): 1.3 00:26:45.053 Maximum Queue Entries: 1024 00:26:45.053 Contiguous Queues Required: No 00:26:45.053 Arbitration Mechanisms Supported 00:26:45.053 Weighted Round Robin: Not Supported 00:26:45.053 Vendor Specific: Not Supported 00:26:45.053 Reset Timeout: 7500 ms 00:26:45.053 Doorbell Stride: 4 bytes 00:26:45.053 NVM Subsystem Reset: Not Supported 00:26:45.053 Command Sets Supported 00:26:45.053 NVM Command Set: Supported 00:26:45.053 Boot Partition: Not Supported 
00:26:45.053 Memory Page Size Minimum: 4096 bytes 00:26:45.053 Memory Page Size Maximum: 4096 bytes 00:26:45.053 Persistent Memory Region: Not Supported 00:26:45.053 Optional Asynchronous Events Supported 00:26:45.053 Namespace Attribute Notices: Not Supported 00:26:45.053 Firmware Activation Notices: Not Supported 00:26:45.053 ANA Change Notices: Not Supported 00:26:45.053 PLE Aggregate Log Change Notices: Not Supported 00:26:45.053 LBA Status Info Alert Notices: Not Supported 00:26:45.053 EGE Aggregate Log Change Notices: Not Supported 00:26:45.053 Normal NVM Subsystem Shutdown event: Not Supported 00:26:45.053 Zone Descriptor Change Notices: Not Supported 00:26:45.053 Discovery Log Change Notices: Supported 00:26:45.053 Controller Attributes 00:26:45.053 128-bit Host Identifier: Not Supported 00:26:45.053 Non-Operational Permissive Mode: Not Supported 00:26:45.053 NVM Sets: Not Supported 00:26:45.053 Read Recovery Levels: Not Supported 00:26:45.053 Endurance Groups: Not Supported 00:26:45.053 Predictable Latency Mode: Not Supported 00:26:45.053 Traffic Based Keep ALive: Not Supported 00:26:45.053 Namespace Granularity: Not Supported 00:26:45.053 SQ Associations: Not Supported 00:26:45.053 UUID List: Not Supported 00:26:45.053 Multi-Domain Subsystem: Not Supported 00:26:45.053 Fixed Capacity Management: Not Supported 00:26:45.053 Variable Capacity Management: Not Supported 00:26:45.053 Delete Endurance Group: Not Supported 00:26:45.053 Delete NVM Set: Not Supported 00:26:45.053 Extended LBA Formats Supported: Not Supported 00:26:45.053 Flexible Data Placement Supported: Not Supported 00:26:45.053 00:26:45.053 Controller Memory Buffer Support 00:26:45.053 ================================ 00:26:45.053 Supported: No 00:26:45.053 00:26:45.053 Persistent Memory Region Support 00:26:45.053 ================================ 00:26:45.053 Supported: No 00:26:45.053 00:26:45.053 Admin Command Set Attributes 00:26:45.053 ============================ 00:26:45.053 Security 
Send/Receive: Not Supported 00:26:45.053 Format NVM: Not Supported 00:26:45.053 Firmware Activate/Download: Not Supported 00:26:45.053 Namespace Management: Not Supported 00:26:45.053 Device Self-Test: Not Supported 00:26:45.053 Directives: Not Supported 00:26:45.054 NVMe-MI: Not Supported 00:26:45.054 Virtualization Management: Not Supported 00:26:45.054 Doorbell Buffer Config: Not Supported 00:26:45.054 Get LBA Status Capability: Not Supported 00:26:45.054 Command & Feature Lockdown Capability: Not Supported 00:26:45.054 Abort Command Limit: 1 00:26:45.054 Async Event Request Limit: 1 00:26:45.054 Number of Firmware Slots: N/A 00:26:45.054 Firmware Slot 1 Read-Only: N/A 00:26:45.054 Firmware Activation Without Reset: N/A 00:26:45.054 Multiple Update Detection Support: N/A 00:26:45.054 Firmware Update Granularity: No Information Provided 00:26:45.054 Per-Namespace SMART Log: No 00:26:45.054 Asymmetric Namespace Access Log Page: Not Supported 00:26:45.054 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:26:45.054 Command Effects Log Page: Not Supported 00:26:45.054 Get Log Page Extended Data: Supported 00:26:45.054 Telemetry Log Pages: Not Supported 00:26:45.054 Persistent Event Log Pages: Not Supported 00:26:45.054 Supported Log Pages Log Page: May Support 00:26:45.054 Commands Supported & Effects Log Page: Not Supported 00:26:45.054 Feature Identifiers & Effects Log Page:May Support 00:26:45.054 NVMe-MI Commands & Effects Log Page: May Support 00:26:45.054 Data Area 4 for Telemetry Log: Not Supported 00:26:45.054 Error Log Page Entries Supported: 1 00:26:45.054 Keep Alive: Not Supported 00:26:45.054 00:26:45.054 NVM Command Set Attributes 00:26:45.054 ========================== 00:26:45.054 Submission Queue Entry Size 00:26:45.054 Max: 1 00:26:45.054 Min: 1 00:26:45.054 Completion Queue Entry Size 00:26:45.054 Max: 1 00:26:45.054 Min: 1 00:26:45.054 Number of Namespaces: 0 00:26:45.054 Compare Command: Not Supported 00:26:45.054 Write Uncorrectable Command: 
Not Supported 00:26:45.054 Dataset Management Command: Not Supported 00:26:45.054 Write Zeroes Command: Not Supported 00:26:45.054 Set Features Save Field: Not Supported 00:26:45.054 Reservations: Not Supported 00:26:45.054 Timestamp: Not Supported 00:26:45.054 Copy: Not Supported 00:26:45.054 Volatile Write Cache: Not Present 00:26:45.054 Atomic Write Unit (Normal): 1 00:26:45.054 Atomic Write Unit (PFail): 1 00:26:45.054 Atomic Compare & Write Unit: 1 00:26:45.054 Fused Compare & Write: Not Supported 00:26:45.054 Scatter-Gather List 00:26:45.054 SGL Command Set: Supported 00:26:45.054 SGL Keyed: Not Supported 00:26:45.054 SGL Bit Bucket Descriptor: Not Supported 00:26:45.054 SGL Metadata Pointer: Not Supported 00:26:45.054 Oversized SGL: Not Supported 00:26:45.054 SGL Metadata Address: Not Supported 00:26:45.054 SGL Offset: Supported 00:26:45.054 Transport SGL Data Block: Not Supported 00:26:45.054 Replay Protected Memory Block: Not Supported 00:26:45.054 00:26:45.054 Firmware Slot Information 00:26:45.054 ========================= 00:26:45.054 Active slot: 0 00:26:45.054 00:26:45.054 00:26:45.054 Error Log 00:26:45.054 ========= 00:26:45.054 00:26:45.054 Active Namespaces 00:26:45.054 ================= 00:26:45.054 Discovery Log Page 00:26:45.054 ================== 00:26:45.054 Generation Counter: 2 00:26:45.054 Number of Records: 2 00:26:45.054 Record Format: 0 00:26:45.054 00:26:45.054 Discovery Log Entry 0 00:26:45.054 ---------------------- 00:26:45.054 Transport Type: 3 (TCP) 00:26:45.054 Address Family: 1 (IPv4) 00:26:45.054 Subsystem Type: 3 (Current Discovery Subsystem) 00:26:45.054 Entry Flags: 00:26:45.054 Duplicate Returned Information: 0 00:26:45.054 Explicit Persistent Connection Support for Discovery: 0 00:26:45.054 Transport Requirements: 00:26:45.054 Secure Channel: Not Specified 00:26:45.054 Port ID: 1 (0x0001) 00:26:45.054 Controller ID: 65535 (0xffff) 00:26:45.054 Admin Max SQ Size: 32 00:26:45.054 Transport Service Identifier: 4420 
00:26:45.054 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:26:45.054 Transport Address: 10.0.0.1 00:26:45.054 Discovery Log Entry 1 00:26:45.054 ---------------------- 00:26:45.054 Transport Type: 3 (TCP) 00:26:45.054 Address Family: 1 (IPv4) 00:26:45.054 Subsystem Type: 2 (NVM Subsystem) 00:26:45.054 Entry Flags: 00:26:45.054 Duplicate Returned Information: 0 00:26:45.054 Explicit Persistent Connection Support for Discovery: 0 00:26:45.054 Transport Requirements: 00:26:45.054 Secure Channel: Not Specified 00:26:45.054 Port ID: 1 (0x0001) 00:26:45.054 Controller ID: 65535 (0xffff) 00:26:45.054 Admin Max SQ Size: 32 00:26:45.054 Transport Service Identifier: 4420 00:26:45.054 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:26:45.054 Transport Address: 10.0.0.1 00:26:45.054 15:22:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:45.054 EAL: No free 2048 kB hugepages reported on node 1 00:26:45.054 get_feature(0x01) failed 00:26:45.054 get_feature(0x02) failed 00:26:45.054 get_feature(0x04) failed 00:26:45.054 ===================================================== 00:26:45.054 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:26:45.054 ===================================================== 00:26:45.054 Controller Capabilities/Features 00:26:45.054 ================================ 00:26:45.054 Vendor ID: 0000 00:26:45.054 Subsystem Vendor ID: 0000 00:26:45.054 Serial Number: 176047e26585f44c877d 00:26:45.054 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:26:45.054 Firmware Version: 6.7.0-68 00:26:45.054 Recommended Arb Burst: 6 00:26:45.054 IEEE OUI Identifier: 00 00 00 00:26:45.054 Multi-path I/O 00:26:45.054 May have multiple subsystem ports: Yes 00:26:45.054 May have multiple 
controllers: Yes 00:26:45.054 Associated with SR-IOV VF: No 00:26:45.054 Max Data Transfer Size: Unlimited 00:26:45.054 Max Number of Namespaces: 1024 00:26:45.054 Max Number of I/O Queues: 128 00:26:45.054 NVMe Specification Version (VS): 1.3 00:26:45.054 NVMe Specification Version (Identify): 1.3 00:26:45.054 Maximum Queue Entries: 1024 00:26:45.054 Contiguous Queues Required: No 00:26:45.054 Arbitration Mechanisms Supported 00:26:45.054 Weighted Round Robin: Not Supported 00:26:45.054 Vendor Specific: Not Supported 00:26:45.054 Reset Timeout: 7500 ms 00:26:45.054 Doorbell Stride: 4 bytes 00:26:45.054 NVM Subsystem Reset: Not Supported 00:26:45.054 Command Sets Supported 00:26:45.054 NVM Command Set: Supported 00:26:45.054 Boot Partition: Not Supported 00:26:45.054 Memory Page Size Minimum: 4096 bytes 00:26:45.054 Memory Page Size Maximum: 4096 bytes 00:26:45.054 Persistent Memory Region: Not Supported 00:26:45.054 Optional Asynchronous Events Supported 00:26:45.054 Namespace Attribute Notices: Supported 00:26:45.054 Firmware Activation Notices: Not Supported 00:26:45.054 ANA Change Notices: Supported 00:26:45.054 PLE Aggregate Log Change Notices: Not Supported 00:26:45.054 LBA Status Info Alert Notices: Not Supported 00:26:45.054 EGE Aggregate Log Change Notices: Not Supported 00:26:45.054 Normal NVM Subsystem Shutdown event: Not Supported 00:26:45.054 Zone Descriptor Change Notices: Not Supported 00:26:45.054 Discovery Log Change Notices: Not Supported 00:26:45.054 Controller Attributes 00:26:45.054 128-bit Host Identifier: Supported 00:26:45.054 Non-Operational Permissive Mode: Not Supported 00:26:45.054 NVM Sets: Not Supported 00:26:45.054 Read Recovery Levels: Not Supported 00:26:45.054 Endurance Groups: Not Supported 00:26:45.054 Predictable Latency Mode: Not Supported 00:26:45.054 Traffic Based Keep ALive: Supported 00:26:45.054 Namespace Granularity: Not Supported 00:26:45.054 SQ Associations: Not Supported 00:26:45.054 UUID List: Not Supported 
00:26:45.054 Multi-Domain Subsystem: Not Supported 00:26:45.054 Fixed Capacity Management: Not Supported 00:26:45.054 Variable Capacity Management: Not Supported 00:26:45.054 Delete Endurance Group: Not Supported 00:26:45.054 Delete NVM Set: Not Supported 00:26:45.054 Extended LBA Formats Supported: Not Supported 00:26:45.054 Flexible Data Placement Supported: Not Supported 00:26:45.054 00:26:45.054 Controller Memory Buffer Support 00:26:45.055 ================================ 00:26:45.055 Supported: No 00:26:45.055 00:26:45.055 Persistent Memory Region Support 00:26:45.055 ================================ 00:26:45.055 Supported: No 00:26:45.055 00:26:45.055 Admin Command Set Attributes 00:26:45.055 ============================ 00:26:45.055 Security Send/Receive: Not Supported 00:26:45.055 Format NVM: Not Supported 00:26:45.055 Firmware Activate/Download: Not Supported 00:26:45.055 Namespace Management: Not Supported 00:26:45.055 Device Self-Test: Not Supported 00:26:45.055 Directives: Not Supported 00:26:45.055 NVMe-MI: Not Supported 00:26:45.055 Virtualization Management: Not Supported 00:26:45.055 Doorbell Buffer Config: Not Supported 00:26:45.055 Get LBA Status Capability: Not Supported 00:26:45.055 Command & Feature Lockdown Capability: Not Supported 00:26:45.055 Abort Command Limit: 4 00:26:45.055 Async Event Request Limit: 4 00:26:45.055 Number of Firmware Slots: N/A 00:26:45.055 Firmware Slot 1 Read-Only: N/A 00:26:45.055 Firmware Activation Without Reset: N/A 00:26:45.055 Multiple Update Detection Support: N/A 00:26:45.055 Firmware Update Granularity: No Information Provided 00:26:45.055 Per-Namespace SMART Log: Yes 00:26:45.055 Asymmetric Namespace Access Log Page: Supported 00:26:45.055 ANA Transition Time : 10 sec 00:26:45.055 00:26:45.055 Asymmetric Namespace Access Capabilities 00:26:45.055 ANA Optimized State : Supported 00:26:45.055 ANA Non-Optimized State : Supported 00:26:45.055 ANA Inaccessible State : Supported 00:26:45.055 ANA Persistent Loss 
State : Supported 00:26:45.055 ANA Change State : Supported 00:26:45.055 ANAGRPID is not changed : No 00:26:45.055 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:26:45.055 00:26:45.055 ANA Group Identifier Maximum : 128 00:26:45.055 Number of ANA Group Identifiers : 128 00:26:45.055 Max Number of Allowed Namespaces : 1024 00:26:45.055 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:26:45.055 Command Effects Log Page: Supported 00:26:45.055 Get Log Page Extended Data: Supported 00:26:45.055 Telemetry Log Pages: Not Supported 00:26:45.055 Persistent Event Log Pages: Not Supported 00:26:45.055 Supported Log Pages Log Page: May Support 00:26:45.055 Commands Supported & Effects Log Page: Not Supported 00:26:45.055 Feature Identifiers & Effects Log Page:May Support 00:26:45.055 NVMe-MI Commands & Effects Log Page: May Support 00:26:45.055 Data Area 4 for Telemetry Log: Not Supported 00:26:45.055 Error Log Page Entries Supported: 128 00:26:45.055 Keep Alive: Supported 00:26:45.055 Keep Alive Granularity: 1000 ms 00:26:45.055 00:26:45.055 NVM Command Set Attributes 00:26:45.055 ========================== 00:26:45.055 Submission Queue Entry Size 00:26:45.055 Max: 64 00:26:45.055 Min: 64 00:26:45.055 Completion Queue Entry Size 00:26:45.055 Max: 16 00:26:45.055 Min: 16 00:26:45.055 Number of Namespaces: 1024 00:26:45.055 Compare Command: Not Supported 00:26:45.055 Write Uncorrectable Command: Not Supported 00:26:45.055 Dataset Management Command: Supported 00:26:45.055 Write Zeroes Command: Supported 00:26:45.055 Set Features Save Field: Not Supported 00:26:45.055 Reservations: Not Supported 00:26:45.055 Timestamp: Not Supported 00:26:45.055 Copy: Not Supported 00:26:45.055 Volatile Write Cache: Present 00:26:45.055 Atomic Write Unit (Normal): 1 00:26:45.055 Atomic Write Unit (PFail): 1 00:26:45.055 Atomic Compare & Write Unit: 1 00:26:45.055 Fused Compare & Write: Not Supported 00:26:45.055 Scatter-Gather List 00:26:45.055 SGL Command Set: Supported 00:26:45.055 SGL 
Keyed: Not Supported 00:26:45.055 SGL Bit Bucket Descriptor: Not Supported 00:26:45.055 SGL Metadata Pointer: Not Supported 00:26:45.055 Oversized SGL: Not Supported 00:26:45.055 SGL Metadata Address: Not Supported 00:26:45.055 SGL Offset: Supported 00:26:45.055 Transport SGL Data Block: Not Supported 00:26:45.055 Replay Protected Memory Block: Not Supported 00:26:45.055 00:26:45.055 Firmware Slot Information 00:26:45.055 ========================= 00:26:45.055 Active slot: 0 00:26:45.055 00:26:45.055 Asymmetric Namespace Access 00:26:45.055 =========================== 00:26:45.055 Change Count : 0 00:26:45.055 Number of ANA Group Descriptors : 1 00:26:45.055 ANA Group Descriptor : 0 00:26:45.055 ANA Group ID : 1 00:26:45.055 Number of NSID Values : 1 00:26:45.055 Change Count : 0 00:26:45.055 ANA State : 1 00:26:45.055 Namespace Identifier : 1 00:26:45.055 00:26:45.055 Commands Supported and Effects 00:26:45.055 ============================== 00:26:45.055 Admin Commands 00:26:45.055 -------------- 00:26:45.055 Get Log Page (02h): Supported 00:26:45.055 Identify (06h): Supported 00:26:45.055 Abort (08h): Supported 00:26:45.055 Set Features (09h): Supported 00:26:45.055 Get Features (0Ah): Supported 00:26:45.055 Asynchronous Event Request (0Ch): Supported 00:26:45.055 Keep Alive (18h): Supported 00:26:45.055 I/O Commands 00:26:45.055 ------------ 00:26:45.055 Flush (00h): Supported 00:26:45.055 Write (01h): Supported LBA-Change 00:26:45.055 Read (02h): Supported 00:26:45.055 Write Zeroes (08h): Supported LBA-Change 00:26:45.055 Dataset Management (09h): Supported 00:26:45.055 00:26:45.055 Error Log 00:26:45.055 ========= 00:26:45.055 Entry: 0 00:26:45.055 Error Count: 0x3 00:26:45.055 Submission Queue Id: 0x0 00:26:45.055 Command Id: 0x5 00:26:45.055 Phase Bit: 0 00:26:45.055 Status Code: 0x2 00:26:45.055 Status Code Type: 0x0 00:26:45.055 Do Not Retry: 1 00:26:45.055 Error Location: 0x28 00:26:45.055 LBA: 0x0 00:26:45.055 Namespace: 0x0 00:26:45.055 Vendor Log Page: 
0x0 00:26:45.055 ----------- 00:26:45.055 Entry: 1 00:26:45.055 Error Count: 0x2 00:26:45.055 Submission Queue Id: 0x0 00:26:45.055 Command Id: 0x5 00:26:45.055 Phase Bit: 0 00:26:45.055 Status Code: 0x2 00:26:45.055 Status Code Type: 0x0 00:26:45.055 Do Not Retry: 1 00:26:45.055 Error Location: 0x28 00:26:45.055 LBA: 0x0 00:26:45.055 Namespace: 0x0 00:26:45.055 Vendor Log Page: 0x0 00:26:45.055 ----------- 00:26:45.055 Entry: 2 00:26:45.055 Error Count: 0x1 00:26:45.055 Submission Queue Id: 0x0 00:26:45.055 Command Id: 0x4 00:26:45.055 Phase Bit: 0 00:26:45.055 Status Code: 0x2 00:26:45.055 Status Code Type: 0x0 00:26:45.055 Do Not Retry: 1 00:26:45.055 Error Location: 0x28 00:26:45.055 LBA: 0x0 00:26:45.055 Namespace: 0x0 00:26:45.055 Vendor Log Page: 0x0 00:26:45.055 00:26:45.055 Number of Queues 00:26:45.055 ================ 00:26:45.055 Number of I/O Submission Queues: 128 00:26:45.055 Number of I/O Completion Queues: 128 00:26:45.055 00:26:45.055 ZNS Specific Controller Data 00:26:45.055 ============================ 00:26:45.055 Zone Append Size Limit: 0 00:26:45.055 00:26:45.055 00:26:45.055 Active Namespaces 00:26:45.055 ================= 00:26:45.055 get_feature(0x05) failed 00:26:45.055 Namespace ID:1 00:26:45.055 Command Set Identifier: NVM (00h) 00:26:45.055 Deallocate: Supported 00:26:45.055 Deallocated/Unwritten Error: Not Supported 00:26:45.055 Deallocated Read Value: Unknown 00:26:45.055 Deallocate in Write Zeroes: Not Supported 00:26:45.055 Deallocated Guard Field: 0xFFFF 00:26:45.055 Flush: Supported 00:26:45.055 Reservation: Not Supported 00:26:45.055 Namespace Sharing Capabilities: Multiple Controllers 00:26:45.055 Size (in LBAs): 3750748848 (1788GiB) 00:26:45.056 Capacity (in LBAs): 3750748848 (1788GiB) 00:26:45.056 Utilization (in LBAs): 3750748848 (1788GiB) 00:26:45.056 UUID: 3f0a9740-cd02-472e-b135-f451ff60fcda 00:26:45.056 Thin Provisioning: Not Supported 00:26:45.056 Per-NS Atomic Units: Yes 00:26:45.056 Atomic Write Unit (Normal): 8 
00:26:45.056 Atomic Write Unit (PFail): 8 00:26:45.056 Preferred Write Granularity: 8 00:26:45.056 Atomic Compare & Write Unit: 8 00:26:45.056 Atomic Boundary Size (Normal): 0 00:26:45.056 Atomic Boundary Size (PFail): 0 00:26:45.056 Atomic Boundary Offset: 0 00:26:45.056 NGUID/EUI64 Never Reused: No 00:26:45.056 ANA group ID: 1 00:26:45.056 Namespace Write Protected: No 00:26:45.056 Number of LBA Formats: 1 00:26:45.056 Current LBA Format: LBA Format #00 00:26:45.056 LBA Format #00: Data Size: 512 Metadata Size: 0 00:26:45.056 00:26:45.056 15:22:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:26:45.056 15:22:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:45.056 15:22:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:26:45.056 15:22:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:45.056 15:22:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:26:45.056 15:22:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:45.056 15:22:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:45.056 rmmod nvme_tcp 00:26:45.316 rmmod nvme_fabrics 00:26:45.316 15:22:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:45.316 15:22:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:26:45.316 15:22:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:26:45.317 15:22:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:26:45.317 15:22:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:45.317 15:22:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target 
-- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:45.317 15:22:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:45.317 15:22:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:45.317 15:22:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:45.317 15:22:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:45.317 15:22:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:45.317 15:22:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:47.230 15:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:47.230 15:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:26:47.230 15:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:26:47.230 15:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:26:47.230 15:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:47.230 15:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:47.230 15:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:26:47.230 15:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:47.230 
15:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:26:47.230 15:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:26:47.230 15:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:51.441 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:26:51.441 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:26:51.441 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:26:51.441 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:26:51.441 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:26:51.441 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:26:51.441 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:26:51.441 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:26:51.441 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:26:51.441 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:26:51.441 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:26:51.441 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:26:51.441 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:26:51.441 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:26:51.441 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:26:51.441 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:26:51.441 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:26:51.441 00:26:51.441 real 0m18.284s 00:26:51.441 user 0m4.946s 00:26:51.441 sys 0m10.260s 00:26:51.441 15:22:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:51.441 15:22:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:26:51.441 ************************************ 00:26:51.441 END TEST nvmf_identify_kernel_target 00:26:51.441 ************************************ 00:26:51.441 15:22:43 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:26:51.441 15:22:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:51.441 15:22:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:51.441 15:22:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.441 ************************************ 00:26:51.441 START TEST nvmf_auth_host 00:26:51.441 ************************************ 00:26:51.441 15:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:26:51.441 * Looking for test storage... 00:26:51.441 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:51.441 15:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:51.441 15:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:26:51.441 15:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:51.441 15:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:51.441 15:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:51.441 15:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:51.441 15:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:51.441 15:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:51.441 15:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:51.441 15:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:51.441 15:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:51.441 15:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:51.441 15:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:51.441 15:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:51.441 15:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:51.441 15:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:51.441 15:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:51.441 15:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:51.441 15:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:51.441 15:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:51.441 15:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:51.441 15:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:51.441 15:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:51.441 15:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:51.441 15:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:51.441 15:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
paths/export.sh@5 -- # export PATH 00:26:51.441 15:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:51.441 15:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:26:51.441 15:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:51.441 15:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:51.441 15:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:51.441 15:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:51.441 15:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:51.441 15:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:51.441 15:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:51.441 15:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:51.441 15:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:26:51.441 15:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:26:51.441 15:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:26:51.441 15:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:26:51.441 15:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:51.441 15:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:26:51.441 15:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:26:51.441 15:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:26:51.441 15:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:26:51.441 15:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:51.441 15:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:51.441 15:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:51.441 15:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:51.441 15:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:51.441 15:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:51.441 15:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:51.441 15:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:51.441 15:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:51.441 15:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:51.441 15:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:26:51.441 15:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:59.587 15:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:59.587 15:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:26:59.587 15:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:59.587 15:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:59.587 15:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:59.587 15:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:59.587 15:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:59.587 15:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:26:59.587 15:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:59.587 15:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:26:59.587 15:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:26:59.587 15:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:26:59.587 15:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:26:59.587 15:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:26:59.587 15:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:26:59.587 15:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:59.587 15:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:59.587 15:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:59.587 15:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:59.587 15:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:59.587 15:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:59.587 15:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:59.587 15:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:59.587 15:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:59.587 15:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:59.587 15:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:59.587 15:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:59.587 15:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:59.587 15:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:59.587 15:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:59.587 15:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:59.587 15:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:59.587 15:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:59.587 15:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:59.587 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:59.587 15:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:59.587 15:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:59.587 15:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:59.587 15:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:59.587 15:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:59.587 15:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:59.587 15:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:59.587 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:59.587 15:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:59.587 15:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:59.587 15:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:59.587 15:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:59.587 15:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:59.587 15:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:59.587 15:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:59.587 15:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:59.587 15:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:59.587 15:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:59.587 15:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:59.587 15:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:59.587 15:22:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:59.587 15:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:59.587 15:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:59.587 15:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:59.587 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:59.587 15:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:59.587 15:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:59.587 15:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:59.587 15:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:59.587 15:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:59.588 15:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:59.588 15:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:59.588 15:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:59.588 15:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:59.588 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:59.588 15:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:59.588 15:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:59.588 15:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:26:59.588 15:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:59.588 
15:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:59.588 15:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:59.588 15:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:59.588 15:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:59.588 15:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:59.588 15:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:59.588 15:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:59.588 15:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:59.588 15:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:59.588 15:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:59.588 15:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:59.588 15:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:59.588 15:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:59.588 15:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:59.588 15:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:59.588 15:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:59.588 15:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:59.588 15:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:59.588 15:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:59.588 15:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:59.588 15:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:59.588 15:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:59.588 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:59.588 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.547 ms 00:26:59.588 00:26:59.588 --- 10.0.0.2 ping statistics --- 00:26:59.588 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:59.588 rtt min/avg/max/mdev = 0.547/0.547/0.547/0.000 ms 00:26:59.588 15:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:59.588 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:59.588 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.410 ms 00:26:59.588 00:26:59.588 --- 10.0.0.1 ping statistics --- 00:26:59.588 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:59.588 rtt min/avg/max/mdev = 0.410/0.410/0.410/0.000 ms 00:26:59.588 15:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:59.588 15:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:26:59.588 15:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:59.588 15:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:59.588 15:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:59.588 15:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:59.588 15:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:59.588 15:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:59.588 15:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:59.588 15:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:26:59.588 15:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:59.588 15:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:59.588 15:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.588 15:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=400618 00:26:59.588 15:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 400618 00:26:59.588 15:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:26:59.588 15:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 400618 ']' 00:26:59.588 15:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:59.588 15:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:59.588 15:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:59.588 15:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:59.588 15:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.588 15:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:59.588 15:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:26:59.588 15:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:59.588 15:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:59.588 15:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.588 15:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:59.588 15:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:26:59.588 15:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:26:59.588 15:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:26:59.588 15:22:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:59.588 15:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:59.588 15:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:26:59.588 15:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:26:59.588 15:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:59.588 15:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=8f758ded6e8b1cf5ed253e8128a8d27b 00:26:59.588 15:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:26:59.588 15:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.3WQ 00:26:59.588 15:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 8f758ded6e8b1cf5ed253e8128a8d27b 0 00:26:59.588 15:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 8f758ded6e8b1cf5ed253e8128a8d27b 0 00:26:59.588 15:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:59.588 15:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:59.588 15:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=8f758ded6e8b1cf5ed253e8128a8d27b 00:26:59.588 15:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:26:59.588 15:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:26:59.588 15:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.3WQ 00:26:59.588 15:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.3WQ 00:26:59.588 15:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.3WQ 
00:26:59.588 15:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:26:59.588 15:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:26:59.588 15:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:59.588 15:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:59.588 15:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:26:59.588 15:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:26:59.588 15:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:26:59.588 15:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=2746ddbd2754f15eb1d82b7ecad015c97ed751a4448e415c81f97611789d80c9 00:26:59.588 15:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:26:59.588 15:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.oo1 00:26:59.588 15:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 2746ddbd2754f15eb1d82b7ecad015c97ed751a4448e415c81f97611789d80c9 3 00:26:59.588 15:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 2746ddbd2754f15eb1d82b7ecad015c97ed751a4448e415c81f97611789d80c9 3 00:26:59.588 15:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:59.588 15:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:59.588 15:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=2746ddbd2754f15eb1d82b7ecad015c97ed751a4448e415c81f97611789d80c9 00:26:59.588 15:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:26:59.588 15:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@705 -- # python - 00:26:59.588 15:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.oo1 00:26:59.588 15:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.oo1 00:26:59.589 15:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.oo1 00:26:59.589 15:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:26:59.589 15:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:26:59.589 15:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:59.589 15:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:59.589 15:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:26:59.589 15:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:26:59.589 15:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:26:59.589 15:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=5b9841371d5f89f1ec90f2c8c3375d2d0ab18c51ad99fcb7 00:26:59.589 15:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:26:59.589 15:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.UaU 00:26:59.589 15:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 5b9841371d5f89f1ec90f2c8c3375d2d0ab18c51ad99fcb7 0 00:26:59.589 15:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 5b9841371d5f89f1ec90f2c8c3375d2d0ab18c51ad99fcb7 0 00:26:59.589 15:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:59.589 15:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # 
prefix=DHHC-1 00:26:59.589 15:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=5b9841371d5f89f1ec90f2c8c3375d2d0ab18c51ad99fcb7 00:26:59.589 15:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:26:59.589 15:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:26:59.589 15:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.UaU 00:26:59.589 15:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.UaU 00:26:59.589 15:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.UaU 00:26:59.589 15:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:26:59.589 15:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:26:59.589 15:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:59.589 15:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:59.589 15:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:26:59.589 15:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:26:59.850 15:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:26:59.850 15:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=b001c7ef445926b2b65544411a65d4559bcef2821af6fad0 00:26:59.850 15:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:26:59.850 15:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.w69 00:26:59.850 15:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key b001c7ef445926b2b65544411a65d4559bcef2821af6fad0 2 00:26:59.850 15:22:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 b001c7ef445926b2b65544411a65d4559bcef2821af6fad0 2 00:26:59.850 15:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:59.850 15:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:59.850 15:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=b001c7ef445926b2b65544411a65d4559bcef2821af6fad0 00:26:59.850 15:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:26:59.850 15:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:26:59.850 15:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.w69 00:26:59.850 15:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.w69 00:26:59.850 15:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.w69 00:26:59.850 15:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:26:59.850 15:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:26:59.850 15:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:59.850 15:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:59.850 15:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:26:59.850 15:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:26:59.850 15:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:59.850 15:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=20d9de14b99a782cffec116b92fde210 00:26:59.850 15:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 
00:26:59.850 15:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.h2X 00:26:59.850 15:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 20d9de14b99a782cffec116b92fde210 1 00:26:59.850 15:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 20d9de14b99a782cffec116b92fde210 1 00:26:59.850 15:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:59.850 15:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:59.850 15:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=20d9de14b99a782cffec116b92fde210 00:26:59.850 15:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:26:59.850 15:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:26:59.850 15:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.h2X 00:26:59.850 15:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.h2X 00:26:59.850 15:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.h2X 00:26:59.850 15:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:26:59.851 15:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:26:59.851 15:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:59.851 15:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:59.851 15:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:26:59.851 15:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:26:59.851 15:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 
/dev/urandom 00:26:59.851 15:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=8f1e4ac503f6550ac4b4468b4e6247dd 00:26:59.851 15:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:26:59.851 15:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.gMV 00:26:59.851 15:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 8f1e4ac503f6550ac4b4468b4e6247dd 1 00:26:59.851 15:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 8f1e4ac503f6550ac4b4468b4e6247dd 1 00:26:59.851 15:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:59.851 15:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:59.851 15:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=8f1e4ac503f6550ac4b4468b4e6247dd 00:26:59.851 15:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:26:59.851 15:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:26:59.851 15:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.gMV 00:26:59.851 15:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.gMV 00:26:59.851 15:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.gMV 00:26:59.851 15:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:26:59.851 15:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:26:59.851 15:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:59.851 15:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:59.851 15:22:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:26:59.851 15:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:26:59.851 15:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:26:59.851 15:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=ee43f078be93818a4e1aebf9950b8e837a64802e96777600 00:26:59.851 15:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:26:59.851 15:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.7q8 00:26:59.851 15:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key ee43f078be93818a4e1aebf9950b8e837a64802e96777600 2 00:26:59.851 15:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 ee43f078be93818a4e1aebf9950b8e837a64802e96777600 2 00:26:59.851 15:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:59.851 15:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:59.851 15:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=ee43f078be93818a4e1aebf9950b8e837a64802e96777600 00:26:59.851 15:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:26:59.851 15:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:26:59.851 15:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.7q8 00:26:59.851 15:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.7q8 00:26:59.851 15:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.7q8 00:26:59.851 15:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:26:59.851 15:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # 
local digest len file key 00:26:59.851 15:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:59.851 15:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:59.851 15:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:26:59.851 15:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:26:59.851 15:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:59.851 15:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=6c7a5e1274504752a640b6e1a50c5563 00:26:59.851 15:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:26:59.851 15:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.QLv 00:26:59.851 15:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 6c7a5e1274504752a640b6e1a50c5563 0 00:26:59.851 15:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 6c7a5e1274504752a640b6e1a50c5563 0 00:26:59.851 15:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:59.851 15:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:59.851 15:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=6c7a5e1274504752a640b6e1a50c5563 00:26:59.851 15:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:26:59.851 15:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:00.112 15:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.QLv 00:27:00.112 15:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.QLv 00:27:00.112 15:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 
-- # ckeys[3]=/tmp/spdk.key-null.QLv 00:27:00.112 15:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:27:00.112 15:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:00.112 15:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:00.112 15:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:00.112 15:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:27:00.112 15:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:27:00.112 15:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:27:00.112 15:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=e320e38b60c1889e955e84ca4056accc695f00c76599d38eca80a264fd14e50b 00:27:00.112 15:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:27:00.112 15:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.jK6 00:27:00.112 15:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key e320e38b60c1889e955e84ca4056accc695f00c76599d38eca80a264fd14e50b 3 00:27:00.112 15:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 e320e38b60c1889e955e84ca4056accc695f00c76599d38eca80a264fd14e50b 3 00:27:00.112 15:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:00.112 15:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:00.112 15:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=e320e38b60c1889e955e84ca4056accc695f00c76599d38eca80a264fd14e50b 00:27:00.112 15:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:27:00.112 15:22:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:00.112 15:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.jK6 00:27:00.113 15:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.jK6 00:27:00.113 15:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.jK6 00:27:00.113 15:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:27:00.113 15:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 400618 00:27:00.113 15:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 400618 ']' 00:27:00.113 15:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:00.113 15:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:00.113 15:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:00.113 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
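The `gen_dhchap_key`/`format_key` trace above draws random bytes with `xxd` and wraps them into a DH-HMAC-CHAP secret via an inline `python -` step. A minimal sketch of that encoding, assuming the standard NVMe DHHC-1 secret representation (base64 of the key bytes followed by their little-endian CRC-32, with a two-digit transformation field: 00 = null, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512); the function names mirror the shell helpers but are illustrative, not SPDK's actual implementation:

```python
import base64
import os
import zlib


def format_dhchap_key(key_hex: str, digest: int) -> str:
    """Wrap a raw hex key as an NVMe DH-HMAC-CHAP secret:
    'DHHC-1:<dd>:<base64(key || crc32_le(key))>:' (format assumed
    from the NVMe-oF in-band authentication secret representation)."""
    key = bytes.fromhex(key_hex)
    crc = zlib.crc32(key).to_bytes(4, "little")  # CRC-32 appended little-endian
    payload = base64.b64encode(key + crc).decode()
    return f"DHHC-1:{digest:02x}:{payload}:"


def gen_dhchap_key(length: int, digest: int) -> str:
    """Mirror the shell steps: `len` is the hex-string length (as in
    `len=32` above), so read len/2 random bytes, hex-encode, format."""
    key_hex = os.urandom(length // 2).hex()
    return format_dhchap_key(key_hex, digest)


secret = gen_dhchap_key(32, 0)  # a null-digest key like keys[0] above
```

The generated secret is what `rpc_cmd keyring_file_add_key` later loads from the `/tmp/spdk.key-*` files.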
00:27:00.113 15:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:00.113 15:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.374 15:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:00.374 15:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:27:00.374 15:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:00.374 15:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.3WQ 00:27:00.374 15:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:00.374 15:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.374 15:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:00.374 15:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.oo1 ]] 00:27:00.374 15:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.oo1 00:27:00.374 15:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:00.374 15:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.374 15:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:00.374 15:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:00.374 15:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.UaU 00:27:00.374 15:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:00.374 15:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:27:00.374 15:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:00.374 15:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.w69 ]] 00:27:00.374 15:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.w69 00:27:00.374 15:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:00.374 15:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.374 15:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:00.374 15:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:00.374 15:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.h2X 00:27:00.374 15:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:00.374 15:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.374 15:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:00.374 15:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.gMV ]] 00:27:00.374 15:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.gMV 00:27:00.374 15:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:00.374 15:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.374 15:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:00.374 15:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:00.374 15:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd 
keyring_file_add_key key3 /tmp/spdk.key-sha384.7q8 00:27:00.374 15:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:00.374 15:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.374 15:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:00.374 15:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.QLv ]] 00:27:00.374 15:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.QLv 00:27:00.374 15:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:00.374 15:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.374 15:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:00.374 15:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:00.374 15:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.jK6 00:27:00.374 15:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:00.374 15:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.374 15:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:00.374 15:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:27:00.374 15:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:27:00.374 15:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:27:00.374 15:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:00.374 15:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:00.374 15:22:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:00.374 15:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:00.374 15:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:00.374 15:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:00.374 15:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:00.374 15:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:00.374 15:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:00.374 15:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:00.374 15:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:27:00.374 15:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:27:00.374 15:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:27:00.374 15:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:00.374 15:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:00.374 15:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:00.374 15:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 00:27:00.374 15:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:27:00.374 15:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:27:00.374 15:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:00.374 15:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:03.678 Waiting for block devices as requested 00:27:03.678 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:03.678 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:03.678 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:03.939 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:03.939 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:03.939 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:27:03.939 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:04.200 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:04.200 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:27:04.461 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:04.461 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:04.461 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:04.723 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:04.723 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:04.723 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:27:04.723 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:04.984 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:05.929 15:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:27:05.929 15:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:05.929 15:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:27:05.929 15:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:27:05.929 15:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e 
/sys/block/nvme0n1/queue/zoned ]] 00:27:05.929 15:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:27:05.929 15:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:27:05.929 15:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:27:05.929 15:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:27:05.929 No valid GPT data, bailing 00:27:05.929 15:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:05.929 15:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:27:05.929 15:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:27:05.929 15:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:27:05.929 15:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:27:05.929 15:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:05.929 15:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:05.929 15:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:05.929 15:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:27:05.929 15:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:27:05.929 15:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:27:05.929 15:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:27:05.929 15:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@671 
-- # echo 10.0.0.1 00:27:05.929 15:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:27:05.929 15:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:27:05.929 15:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:27:05.929 15:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:05.929 15:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:27:05.929 00:27:05.929 Discovery Log Number of Records 2, Generation counter 2 00:27:05.929 =====Discovery Log Entry 0====== 00:27:05.929 trtype: tcp 00:27:05.929 adrfam: ipv4 00:27:05.929 subtype: current discovery subsystem 00:27:05.929 treq: not specified, sq flow control disable supported 00:27:05.929 portid: 1 00:27:05.929 trsvcid: 4420 00:27:05.929 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:05.929 traddr: 10.0.0.1 00:27:05.929 eflags: none 00:27:05.929 sectype: none 00:27:05.929 =====Discovery Log Entry 1====== 00:27:05.929 trtype: tcp 00:27:05.929 adrfam: ipv4 00:27:05.929 subtype: nvme subsystem 00:27:05.929 treq: not specified, sq flow control disable supported 00:27:05.929 portid: 1 00:27:05.929 trsvcid: 4420 00:27:05.929 subnqn: nqn.2024-02.io.spdk:cnode0 00:27:05.929 traddr: 10.0.0.1 00:27:05.929 eflags: none 00:27:05.929 sectype: none 00:27:05.929 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:05.929 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:27:05.929 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:27:05.929 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:05.929 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:05.929 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:05.929 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:05.929 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:05.929 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWI5ODQxMzcxZDVmODlmMWVjOTBmMmM4YzMzNzVkMmQwYWIxOGM1MWFkOTlmY2I3dOOAPg==: 00:27:05.929 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjAwMWM3ZWY0NDU5MjZiMmI2NTU0NDQxMWE2NWQ0NTU5YmNlZjI4MjFhZjZmYWQwblMWtQ==: 00:27:05.929 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:05.929 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:05.929 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWI5ODQxMzcxZDVmODlmMWVjOTBmMmM4YzMzNzVkMmQwYWIxOGM1MWFkOTlmY2I3dOOAPg==: 00:27:05.929 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjAwMWM3ZWY0NDU5MjZiMmI2NTU0NDQxMWE2NWQ0NTU5YmNlZjI4MjFhZjZmYWQwblMWtQ==: ]] 00:27:05.929 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjAwMWM3ZWY0NDU5MjZiMmI2NTU0NDQxMWE2NWQ0NTU5YmNlZjI4MjFhZjZmYWQwblMWtQ==: 00:27:05.929 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:27:05.929 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:27:05.929 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:27:05.929 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:05.929 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:27:05.929 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:05.929 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:27:05.929 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:05.929 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:05.929 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:05.929 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:05.929 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.929 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.929 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.929 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:05.929 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:05.929 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:05.929 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:05.929 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:05.929 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:05.929 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:05.929 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:05.929 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:05.929 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:05.929 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:05.929 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:05.929 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.929 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.194 nvme0n1 00:27:06.194 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:06.194 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:06.194 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:06.194 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.194 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.194 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:06.194 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:06.194 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:06.194 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:27:06.194 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.194 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:06.194 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:06.194 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:06.194 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:06.194 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:27:06.194 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:06.194 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:06.194 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:06.194 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:06.194 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGY3NThkZWQ2ZThiMWNmNWVkMjUzZTgxMjhhOGQyN2LoxVMu: 00:27:06.195 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Mjc0NmRkYmQyNzU0ZjE1ZWIxZDgyYjdlY2FkMDE1Yzk3ZWQ3NTFhNDQ0OGU0MTVjODFmOTc2MTE3ODlkODBjOZWwAVE=: 00:27:06.195 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:06.195 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:06.195 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGY3NThkZWQ2ZThiMWNmNWVkMjUzZTgxMjhhOGQyN2LoxVMu: 00:27:06.195 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Mjc0NmRkYmQyNzU0ZjE1ZWIxZDgyYjdlY2FkMDE1Yzk3ZWQ3NTFhNDQ0OGU0MTVjODFmOTc2MTE3ODlkODBjOZWwAVE=: ]] 00:27:06.195 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:Mjc0NmRkYmQyNzU0ZjE1ZWIxZDgyYjdlY2FkMDE1Yzk3ZWQ3NTFhNDQ0OGU0MTVjODFmOTc2MTE3ODlkODBjOZWwAVE=: 00:27:06.195 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:27:06.195 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:06.195 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:06.195 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:06.195 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:06.195 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:06.195 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:06.195 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.195 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.195 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:06.195 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:06.195 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:06.195 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:06.195 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:06.195 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:06.195 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:06.195 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:27:06.195 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:06.195 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:06.195 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:06.195 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:06.195 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:06.195 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.195 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.511 nvme0n1 00:27:06.511 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:06.511 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:06.511 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.511 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:06.511 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.511 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:06.511 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:06.511 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:06.511 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.511 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.511 15:22:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:06.511 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:06.511 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:06.511 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:06.511 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:06.511 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:06.511 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:06.511 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWI5ODQxMzcxZDVmODlmMWVjOTBmMmM4YzMzNzVkMmQwYWIxOGM1MWFkOTlmY2I3dOOAPg==: 00:27:06.511 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjAwMWM3ZWY0NDU5MjZiMmI2NTU0NDQxMWE2NWQ0NTU5YmNlZjI4MjFhZjZmYWQwblMWtQ==: 00:27:06.511 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:06.511 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:06.511 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWI5ODQxMzcxZDVmODlmMWVjOTBmMmM4YzMzNzVkMmQwYWIxOGM1MWFkOTlmY2I3dOOAPg==: 00:27:06.511 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjAwMWM3ZWY0NDU5MjZiMmI2NTU0NDQxMWE2NWQ0NTU5YmNlZjI4MjFhZjZmYWQwblMWtQ==: ]] 00:27:06.511 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjAwMWM3ZWY0NDU5MjZiMmI2NTU0NDQxMWE2NWQ0NTU5YmNlZjI4MjFhZjZmYWQwblMWtQ==: 00:27:06.511 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:27:06.511 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:06.511 
15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:06.511 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:06.511 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:06.511 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:06.511 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:06.511 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.511 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.511 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:06.511 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:06.511 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:06.511 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:06.511 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:06.511 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:06.511 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:06.511 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:06.511 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:06.511 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:06.511 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:06.511 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:06.511 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:06.511 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.511 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.511 nvme0n1 00:27:06.511 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:06.511 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:06.511 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:06.511 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.511 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.511 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:06.773 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:06.773 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:06.773 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.773 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.773 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:06.773 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:06.773 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:27:06.773 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:06.773 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:06.773 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:06.773 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:06.773 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjBkOWRlMTRiOTlhNzgyY2ZmZWMxMTZiOTJmZGUyMTABRUtg: 00:27:06.773 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGYxZTRhYzUwM2Y2NTUwYWM0YjQ0NjhiNGU2MjQ3ZGS9W9fL: 00:27:06.773 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:06.773 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:06.773 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjBkOWRlMTRiOTlhNzgyY2ZmZWMxMTZiOTJmZGUyMTABRUtg: 00:27:06.773 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGYxZTRhYzUwM2Y2NTUwYWM0YjQ0NjhiNGU2MjQ3ZGS9W9fL: ]] 00:27:06.773 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGYxZTRhYzUwM2Y2NTUwYWM0YjQ0NjhiNGU2MjQ3ZGS9W9fL: 00:27:06.773 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:27:06.773 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:06.773 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:06.773 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:06.773 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:06.773 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:06.773 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:06.773 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.773 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.773 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:06.773 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:06.773 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:06.773 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:06.773 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:06.773 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:06.773 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:06.773 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:06.773 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:06.773 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:06.773 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:06.773 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:06.773 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:06.773 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.773 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 
-- # set +x 00:27:06.773 nvme0n1 00:27:06.773 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:06.773 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:06.773 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:06.773 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.773 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.773 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:06.773 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:06.773 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:06.773 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.773 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.773 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.035 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:07.035 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:27:07.035 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:07.035 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:07.035 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:07.035 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:07.035 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZWU0M2YwNzhiZTkzODE4YTRlMWFlYmY5OTUwYjhlODM3YTY0ODAyZTk2Nzc3NjAwc68UWw==: 00:27:07.035 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmM3YTVlMTI3NDUwNDc1MmE2NDBiNmUxYTUwYzU1NjPzO+GX: 00:27:07.035 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:07.035 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:07.035 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWU0M2YwNzhiZTkzODE4YTRlMWFlYmY5OTUwYjhlODM3YTY0ODAyZTk2Nzc3NjAwc68UWw==: 00:27:07.035 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmM3YTVlMTI3NDUwNDc1MmE2NDBiNmUxYTUwYzU1NjPzO+GX: ]] 00:27:07.035 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmM3YTVlMTI3NDUwNDc1MmE2NDBiNmUxYTUwYzU1NjPzO+GX: 00:27:07.035 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:27:07.035 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:07.035 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:07.035 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:07.035 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:07.035 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:07.035 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:07.035 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.035 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.035 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:07.035 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:07.035 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:27:07.035 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:27:07.035 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:27:07.035 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:07.035 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:07.035 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:27:07.035 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:07.035 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:27:07.035 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:27:07.035 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:27:07.035 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:27:07.035 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:07.035 15:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:07.035 nvme0n1
00:27:07.035 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:07.035 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:07.036 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:07.036 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:07.036 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:07.036 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:07.036 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:07.036 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:07.036 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:07.036 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:07.036 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:07.036 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:07.036 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4
00:27:07.036 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:07.036 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:27:07.036 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:27:07.036 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:27:07.036 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTMyMGUzOGI2MGMxODg5ZTk1NWU4NGNhNDA1NmFjY2M2OTVmMDBjNzY1OTlkMzhlY2E4MGEyNjRmZDE0ZTUwYimH2Zc=:
00:27:07.036 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:27:07.036 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:27:07.036 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:27:07.036 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTMyMGUzOGI2MGMxODg5ZTk1NWU4NGNhNDA1NmFjY2M2OTVmMDBjNzY1OTlkMzhlY2E4MGEyNjRmZDE0ZTUwYimH2Zc=:
00:27:07.036 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:27:07.036 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4
00:27:07.036 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:07.036 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:27:07.036 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:27:07.036 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:27:07.036 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:07.036 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:27:07.036 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:07.036 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:07.036 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:07.036 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:07.036 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:27:07.036 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:27:07.036 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:27:07.036 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:07.036 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:07.036 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:27:07.036 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:07.036 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:27:07.036 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:27:07.036 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:27:07.036 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:27:07.036 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:07.036 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:07.298 nvme0n1
00:27:07.298 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:07.298 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:07.298 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:07.298 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:07.298 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:07.298 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:07.298 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:07.298 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:07.298 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:07.298 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:07.298 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:07.298 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:27:07.298 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:07.298 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0
00:27:07.298 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:07.298 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:27:07.298 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:27:07.298 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:27:07.298 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGY3NThkZWQ2ZThiMWNmNWVkMjUzZTgxMjhhOGQyN2LoxVMu:
00:27:07.298 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Mjc0NmRkYmQyNzU0ZjE1ZWIxZDgyYjdlY2FkMDE1Yzk3ZWQ3NTFhNDQ0OGU0MTVjODFmOTc2MTE3ODlkODBjOZWwAVE=:
00:27:07.298 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:27:07.298 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:27:07.298 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGY3NThkZWQ2ZThiMWNmNWVkMjUzZTgxMjhhOGQyN2LoxVMu:
00:27:07.298 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Mjc0NmRkYmQyNzU0ZjE1ZWIxZDgyYjdlY2FkMDE1Yzk3ZWQ3NTFhNDQ0OGU0MTVjODFmOTc2MTE3ODlkODBjOZWwAVE=: ]]
00:27:07.298 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Mjc0NmRkYmQyNzU0ZjE1ZWIxZDgyYjdlY2FkMDE1Yzk3ZWQ3NTFhNDQ0OGU0MTVjODFmOTc2MTE3ODlkODBjOZWwAVE=:
00:27:07.298 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0
00:27:07.298 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:07.298 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:27:07.298 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:27:07.298 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:27:07.298 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:07.298 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:27:07.298 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:07.298 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:07.298 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:07.298 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:07.298 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:27:07.298 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:27:07.298 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:27:07.298 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:07.298 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:07.298 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:27:07.298 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:07.298 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:27:07.298 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:27:07.298 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:27:07.298 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:27:07.298 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:07.298 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:07.560 nvme0n1
00:27:07.560 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:07.560 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:07.560 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:07.560 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:07.560 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:07.560 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:07.560 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:07.560 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:07.560 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:07.560 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:07.560 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:07.560 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:07.560 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1
00:27:07.560 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:07.560 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:27:07.560 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:27:07.560 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:27:07.560 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWI5ODQxMzcxZDVmODlmMWVjOTBmMmM4YzMzNzVkMmQwYWIxOGM1MWFkOTlmY2I3dOOAPg==:
00:27:07.560 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjAwMWM3ZWY0NDU5MjZiMmI2NTU0NDQxMWE2NWQ0NTU5YmNlZjI4MjFhZjZmYWQwblMWtQ==:
00:27:07.560 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:27:07.560 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:27:07.560 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWI5ODQxMzcxZDVmODlmMWVjOTBmMmM4YzMzNzVkMmQwYWIxOGM1MWFkOTlmY2I3dOOAPg==:
00:27:07.560 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjAwMWM3ZWY0NDU5MjZiMmI2NTU0NDQxMWE2NWQ0NTU5YmNlZjI4MjFhZjZmYWQwblMWtQ==: ]]
00:27:07.560 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjAwMWM3ZWY0NDU5MjZiMmI2NTU0NDQxMWE2NWQ0NTU5YmNlZjI4MjFhZjZmYWQwblMWtQ==:
00:27:07.560 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1
00:27:07.560 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:07.560 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:27:07.560 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:27:07.560 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:27:07.560 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:07.560 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:27:07.560 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:07.560 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:07.560 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:07.560 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:07.560 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:27:07.560 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:27:07.560 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:27:07.560 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:07.560 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:07.560 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:27:07.560 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:07.560 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:27:07.560 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:27:07.560 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:27:07.560 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:27:07.560 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:07.560 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:07.822 nvme0n1
00:27:07.822 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:07.822 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:07.822 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:07.822 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:07.822 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:07.822 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:07.822 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:07.822 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:07.822 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:07.822 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:07.822 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:07.822 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:07.822 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2
00:27:07.822 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:07.822 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:27:07.822 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:27:07.822 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:27:07.822 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjBkOWRlMTRiOTlhNzgyY2ZmZWMxMTZiOTJmZGUyMTABRUtg:
00:27:07.822 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGYxZTRhYzUwM2Y2NTUwYWM0YjQ0NjhiNGU2MjQ3ZGS9W9fL:
00:27:07.822 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:27:07.822 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:27:07.822 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjBkOWRlMTRiOTlhNzgyY2ZmZWMxMTZiOTJmZGUyMTABRUtg:
00:27:07.822 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGYxZTRhYzUwM2Y2NTUwYWM0YjQ0NjhiNGU2MjQ3ZGS9W9fL: ]]
00:27:07.822 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGYxZTRhYzUwM2Y2NTUwYWM0YjQ0NjhiNGU2MjQ3ZGS9W9fL:
00:27:07.822 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2
00:27:07.822 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:07.822 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:27:07.822 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:27:07.823 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:27:07.823 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:07.823 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:27:07.823 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:07.823 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:07.823 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:07.823 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:07.823 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:27:07.823 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:27:07.823 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:27:07.823 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:07.823 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:07.823 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:27:07.823 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:07.823 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:27:07.823 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:27:07.823 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:27:07.823 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:27:07.823 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:07.823 15:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:08.085 nvme0n1
00:27:08.085 15:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:08.085 15:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:08.085 15:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:08.085 15:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:08.085 15:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:08.085 15:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:08.085 15:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:08.085 15:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:08.085 15:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:08.085 15:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:08.085 15:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:08.085 15:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:08.085 15:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3
00:27:08.085 15:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:08.085 15:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:27:08.085 15:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:27:08.085 15:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:27:08.085 15:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWU0M2YwNzhiZTkzODE4YTRlMWFlYmY5OTUwYjhlODM3YTY0ODAyZTk2Nzc3NjAwc68UWw==:
00:27:08.085 15:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmM3YTVlMTI3NDUwNDc1MmE2NDBiNmUxYTUwYzU1NjPzO+GX:
00:27:08.085 15:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:27:08.085 15:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:27:08.085 15:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWU0M2YwNzhiZTkzODE4YTRlMWFlYmY5OTUwYjhlODM3YTY0ODAyZTk2Nzc3NjAwc68UWw==:
00:27:08.085 15:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmM3YTVlMTI3NDUwNDc1MmE2NDBiNmUxYTUwYzU1NjPzO+GX: ]]
00:27:08.085 15:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmM3YTVlMTI3NDUwNDc1MmE2NDBiNmUxYTUwYzU1NjPzO+GX:
00:27:08.085 15:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3
00:27:08.085 15:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:08.085 15:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:27:08.085 15:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:27:08.085 15:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:27:08.085 15:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:08.085 15:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:27:08.085 15:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:08.085 15:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:08.085 15:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:08.085 15:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:08.085 15:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:27:08.085 15:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:27:08.085 15:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:27:08.085 15:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:08.085 15:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:08.085 15:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:27:08.085 15:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:08.085 15:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:27:08.085 15:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:27:08.085 15:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:27:08.085 15:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:27:08.085 15:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:08.085 15:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:08.346 nvme0n1
00:27:08.346 15:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:08.346 15:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:08.346 15:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:08.346 15:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:08.346 15:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:08.346 15:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:08.346 15:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:08.346 15:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:08.346 15:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:08.346 15:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:08.346 15:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:08.346 15:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:08.346 15:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4
00:27:08.346 15:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:08.346 15:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:27:08.346 15:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:27:08.346 15:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:27:08.346 15:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTMyMGUzOGI2MGMxODg5ZTk1NWU4NGNhNDA1NmFjY2M2OTVmMDBjNzY1OTlkMzhlY2E4MGEyNjRmZDE0ZTUwYimH2Zc=:
00:27:08.346 15:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:27:08.346 15:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:27:08.346 15:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:27:08.346 15:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTMyMGUzOGI2MGMxODg5ZTk1NWU4NGNhNDA1NmFjY2M2OTVmMDBjNzY1OTlkMzhlY2E4MGEyNjRmZDE0ZTUwYimH2Zc=:
00:27:08.346 15:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:27:08.346 15:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4
00:27:08.347 15:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:08.347 15:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:27:08.347 15:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:27:08.347 15:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:27:08.347 15:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:08.347 15:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:27:08.347 15:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:08.347 15:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:08.347 15:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:08.347 15:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:08.347 15:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:27:08.347 15:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:27:08.347 15:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:27:08.347 15:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:08.347 15:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:08.347 15:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:27:08.347 15:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:08.347 15:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:27:08.347 15:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:27:08.347 15:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:27:08.347 15:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:27:08.347 15:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:08.347 15:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:08.608 nvme0n1
00:27:08.608 15:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:08.608 15:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:08.608 15:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:08.608 15:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:08.608 15:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:08.608 15:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:08.608 15:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:08.608 15:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:08.608 15:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:08.608 15:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:08.608 15:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:08.608 15:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:27:08.608 15:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:08.608 15:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0
00:27:08.608 15:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:08.608 15:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:27:08.608 15:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:27:08.608 15:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:27:08.608 15:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGY3NThkZWQ2ZThiMWNmNWVkMjUzZTgxMjhhOGQyN2LoxVMu:
00:27:08.608 15:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Mjc0NmRkYmQyNzU0ZjE1ZWIxZDgyYjdlY2FkMDE1Yzk3ZWQ3NTFhNDQ0OGU0MTVjODFmOTc2MTE3ODlkODBjOZWwAVE=:
00:27:08.608 15:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:27:08.608 15:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:27:08.608 15:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGY3NThkZWQ2ZThiMWNmNWVkMjUzZTgxMjhhOGQyN2LoxVMu:
00:27:08.608 15:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Mjc0NmRkYmQyNzU0ZjE1ZWIxZDgyYjdlY2FkMDE1Yzk3ZWQ3NTFhNDQ0OGU0MTVjODFmOTc2MTE3ODlkODBjOZWwAVE=: ]]
00:27:08.608 15:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Mjc0NmRkYmQyNzU0ZjE1ZWIxZDgyYjdlY2FkMDE1Yzk3ZWQ3NTFhNDQ0OGU0MTVjODFmOTc2MTE3ODlkODBjOZWwAVE=:
00:27:08.608 15:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0
00:27:08.608 15:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:08.608 15:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:27:08.608 15:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:27:08.608 15:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:27:08.609 15:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:08.609 15:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:27:08.609 15:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:08.609 15:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:08.609 15:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:08.609 15:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:08.609 15:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:27:08.609 15:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:27:08.609 15:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:27:08.609 15:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:08.609 15:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:08.609 15:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:27:08.609 15:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:08.609 15:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- #
ip=NVMF_INITIATOR_IP 00:27:08.609 15:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:08.609 15:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:08.609 15:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:08.609 15:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:08.609 15:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.870 nvme0n1 00:27:08.870 15:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:08.870 15:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:08.870 15:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:08.870 15:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:08.870 15:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.870 15:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.131 15:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:09.131 15:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:09.131 15:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.131 15:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.131 15:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.131 15:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 
00:27:09.131 15:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:27:09.131 15:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:09.131 15:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:09.131 15:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:09.131 15:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:09.131 15:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWI5ODQxMzcxZDVmODlmMWVjOTBmMmM4YzMzNzVkMmQwYWIxOGM1MWFkOTlmY2I3dOOAPg==: 00:27:09.131 15:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjAwMWM3ZWY0NDU5MjZiMmI2NTU0NDQxMWE2NWQ0NTU5YmNlZjI4MjFhZjZmYWQwblMWtQ==: 00:27:09.131 15:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:09.131 15:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:09.131 15:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWI5ODQxMzcxZDVmODlmMWVjOTBmMmM4YzMzNzVkMmQwYWIxOGM1MWFkOTlmY2I3dOOAPg==: 00:27:09.131 15:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjAwMWM3ZWY0NDU5MjZiMmI2NTU0NDQxMWE2NWQ0NTU5YmNlZjI4MjFhZjZmYWQwblMWtQ==: ]] 00:27:09.131 15:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjAwMWM3ZWY0NDU5MjZiMmI2NTU0NDQxMWE2NWQ0NTU5YmNlZjI4MjFhZjZmYWQwblMWtQ==: 00:27:09.131 15:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:27:09.131 15:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:09.131 15:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:09.131 15:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:09.131 
15:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:09.131 15:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:09.131 15:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:09.131 15:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.132 15:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.132 15:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.132 15:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:09.132 15:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:09.132 15:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:09.132 15:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:09.132 15:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:09.132 15:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:09.132 15:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:09.132 15:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:09.132 15:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:09.132 15:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:09.132 15:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:09.132 15:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 
-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:09.132 15:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.132 15:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.394 nvme0n1 00:27:09.394 15:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.394 15:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:09.394 15:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:09.394 15:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.394 15:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.394 15:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.394 15:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:09.394 15:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:09.394 15:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.394 15:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.394 15:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.394 15:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:09.394 15:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:27:09.394 15:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:09.394 15:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:09.394 15:23:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:09.394 15:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:09.394 15:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjBkOWRlMTRiOTlhNzgyY2ZmZWMxMTZiOTJmZGUyMTABRUtg: 00:27:09.394 15:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGYxZTRhYzUwM2Y2NTUwYWM0YjQ0NjhiNGU2MjQ3ZGS9W9fL: 00:27:09.394 15:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:09.394 15:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:09.394 15:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjBkOWRlMTRiOTlhNzgyY2ZmZWMxMTZiOTJmZGUyMTABRUtg: 00:27:09.394 15:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGYxZTRhYzUwM2Y2NTUwYWM0YjQ0NjhiNGU2MjQ3ZGS9W9fL: ]] 00:27:09.394 15:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGYxZTRhYzUwM2Y2NTUwYWM0YjQ0NjhiNGU2MjQ3ZGS9W9fL: 00:27:09.394 15:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:27:09.394 15:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:09.394 15:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:09.394 15:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:09.394 15:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:09.394 15:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:09.394 15:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:09.394 15:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:27:09.394 15:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.394 15:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.394 15:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:09.394 15:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:09.394 15:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:09.394 15:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:09.394 15:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:09.394 15:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:09.394 15:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:09.394 15:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:09.394 15:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:09.394 15:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:09.394 15:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:09.394 15:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:09.394 15:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.394 15:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.656 nvme0n1 00:27:09.656 15:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.656 15:23:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:09.656 15:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:09.656 15:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.656 15:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.656 15:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.656 15:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:09.656 15:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:09.656 15:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.656 15:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.656 15:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.656 15:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:09.656 15:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:27:09.656 15:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:09.656 15:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:09.656 15:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:09.656 15:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:09.656 15:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWU0M2YwNzhiZTkzODE4YTRlMWFlYmY5OTUwYjhlODM3YTY0ODAyZTk2Nzc3NjAwc68UWw==: 00:27:09.656 15:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmM3YTVlMTI3NDUwNDc1MmE2NDBiNmUxYTUwYzU1NjPzO+GX: 00:27:09.656 
15:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:09.656 15:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:09.656 15:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWU0M2YwNzhiZTkzODE4YTRlMWFlYmY5OTUwYjhlODM3YTY0ODAyZTk2Nzc3NjAwc68UWw==: 00:27:09.656 15:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmM3YTVlMTI3NDUwNDc1MmE2NDBiNmUxYTUwYzU1NjPzO+GX: ]] 00:27:09.656 15:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmM3YTVlMTI3NDUwNDc1MmE2NDBiNmUxYTUwYzU1NjPzO+GX: 00:27:09.656 15:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:27:09.656 15:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:09.656 15:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:09.656 15:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:09.656 15:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:09.656 15:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:09.656 15:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:09.656 15:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.656 15:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.656 15:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.656 15:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:09.656 15:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:09.656 15:23:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:09.656 15:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:09.656 15:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:09.656 15:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:09.656 15:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:09.656 15:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:09.656 15:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:09.656 15:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:09.656 15:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:09.656 15:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:09.656 15:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.656 15:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.918 nvme0n1 00:27:09.918 15:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.918 15:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:09.918 15:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.918 15:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:09.918 15:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.918 15:23:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.180 15:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:10.180 15:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:10.180 15:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.180 15:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.180 15:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.180 15:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:10.180 15:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:27:10.180 15:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:10.180 15:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:10.180 15:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:10.180 15:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:10.180 15:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTMyMGUzOGI2MGMxODg5ZTk1NWU4NGNhNDA1NmFjY2M2OTVmMDBjNzY1OTlkMzhlY2E4MGEyNjRmZDE0ZTUwYimH2Zc=: 00:27:10.180 15:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:10.180 15:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:10.180 15:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:10.180 15:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTMyMGUzOGI2MGMxODg5ZTk1NWU4NGNhNDA1NmFjY2M2OTVmMDBjNzY1OTlkMzhlY2E4MGEyNjRmZDE0ZTUwYimH2Zc=: 00:27:10.180 15:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' 
]] 00:27:10.180 15:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:27:10.180 15:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:10.180 15:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:10.180 15:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:10.180 15:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:10.180 15:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:10.180 15:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:10.180 15:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.180 15:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.180 15:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.180 15:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:10.180 15:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:10.180 15:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:10.180 15:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:10.180 15:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:10.180 15:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:10.180 15:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:10.180 15:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:10.180 
15:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:10.180 15:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:10.180 15:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:10.180 15:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:10.180 15:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.180 15:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.442 nvme0n1 00:27:10.442 15:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.442 15:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:10.442 15:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:10.442 15:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.442 15:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.442 15:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.442 15:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:10.442 15:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:10.442 15:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.442 15:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.442 15:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.442 15:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:10.442 15:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:10.442 15:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:27:10.442 15:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:10.442 15:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:10.442 15:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:10.442 15:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:10.442 15:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGY3NThkZWQ2ZThiMWNmNWVkMjUzZTgxMjhhOGQyN2LoxVMu: 00:27:10.442 15:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Mjc0NmRkYmQyNzU0ZjE1ZWIxZDgyYjdlY2FkMDE1Yzk3ZWQ3NTFhNDQ0OGU0MTVjODFmOTc2MTE3ODlkODBjOZWwAVE=: 00:27:10.442 15:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:10.442 15:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:10.442 15:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGY3NThkZWQ2ZThiMWNmNWVkMjUzZTgxMjhhOGQyN2LoxVMu: 00:27:10.442 15:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Mjc0NmRkYmQyNzU0ZjE1ZWIxZDgyYjdlY2FkMDE1Yzk3ZWQ3NTFhNDQ0OGU0MTVjODFmOTc2MTE3ODlkODBjOZWwAVE=: ]] 00:27:10.442 15:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Mjc0NmRkYmQyNzU0ZjE1ZWIxZDgyYjdlY2FkMDE1Yzk3ZWQ3NTFhNDQ0OGU0MTVjODFmOTc2MTE3ODlkODBjOZWwAVE=: 00:27:10.442 15:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:27:10.442 15:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:10.442 15:23:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:10.442 15:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:10.442 15:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:10.442 15:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:10.442 15:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:10.442 15:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.442 15:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.442 15:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.442 15:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:10.442 15:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:10.442 15:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:10.442 15:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:10.442 15:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:10.442 15:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:10.442 15:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:10.442 15:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:10.442 15:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:10.442 15:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:10.442 15:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:10.442 15:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:10.442 15:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.442 15:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.016 nvme0n1 00:27:11.016 15:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:11.016 15:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:11.016 15:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:11.016 15:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:11.016 15:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.016 15:23:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:11.016 15:23:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:11.016 15:23:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:11.016 15:23:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:11.016 15:23:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.016 15:23:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:11.016 15:23:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:11.016 15:23:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:27:11.016 15:23:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:11.016 15:23:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:11.016 15:23:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:11.016 15:23:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:11.016 15:23:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWI5ODQxMzcxZDVmODlmMWVjOTBmMmM4YzMzNzVkMmQwYWIxOGM1MWFkOTlmY2I3dOOAPg==: 00:27:11.016 15:23:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjAwMWM3ZWY0NDU5MjZiMmI2NTU0NDQxMWE2NWQ0NTU5YmNlZjI4MjFhZjZmYWQwblMWtQ==: 00:27:11.016 15:23:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:11.016 15:23:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:11.016 15:23:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWI5ODQxMzcxZDVmODlmMWVjOTBmMmM4YzMzNzVkMmQwYWIxOGM1MWFkOTlmY2I3dOOAPg==: 00:27:11.016 15:23:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjAwMWM3ZWY0NDU5MjZiMmI2NTU0NDQxMWE2NWQ0NTU5YmNlZjI4MjFhZjZmYWQwblMWtQ==: ]] 00:27:11.016 15:23:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjAwMWM3ZWY0NDU5MjZiMmI2NTU0NDQxMWE2NWQ0NTU5YmNlZjI4MjFhZjZmYWQwblMWtQ==: 00:27:11.016 15:23:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:27:11.016 15:23:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:11.016 15:23:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:11.016 15:23:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:11.016 15:23:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:11.016 15:23:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:11.016 15:23:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:11.016 15:23:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:11.016 15:23:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.016 15:23:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:11.016 15:23:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:11.016 15:23:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:11.016 15:23:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:11.016 15:23:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:11.016 15:23:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:11.016 15:23:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:11.016 15:23:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:11.016 15:23:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:11.016 15:23:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:11.016 15:23:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:11.016 15:23:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:11.017 15:23:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:11.017 15:23:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:11.017 15:23:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.589 nvme0n1 00:27:11.590 15:23:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:11.590 15:23:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:11.590 15:23:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:11.590 15:23:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:11.590 15:23:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.590 15:23:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:11.590 15:23:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:11.590 15:23:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:11.590 15:23:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:11.590 15:23:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.590 15:23:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:11.590 15:23:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:11.590 15:23:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:27:11.590 15:23:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:11.590 15:23:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:11.590 15:23:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:11.590 15:23:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=2 00:27:11.590 15:23:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjBkOWRlMTRiOTlhNzgyY2ZmZWMxMTZiOTJmZGUyMTABRUtg: 00:27:11.590 15:23:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGYxZTRhYzUwM2Y2NTUwYWM0YjQ0NjhiNGU2MjQ3ZGS9W9fL: 00:27:11.590 15:23:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:11.590 15:23:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:11.590 15:23:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjBkOWRlMTRiOTlhNzgyY2ZmZWMxMTZiOTJmZGUyMTABRUtg: 00:27:11.590 15:23:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGYxZTRhYzUwM2Y2NTUwYWM0YjQ0NjhiNGU2MjQ3ZGS9W9fL: ]] 00:27:11.590 15:23:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGYxZTRhYzUwM2Y2NTUwYWM0YjQ0NjhiNGU2MjQ3ZGS9W9fL: 00:27:11.590 15:23:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:27:11.590 15:23:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:11.590 15:23:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:11.590 15:23:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:11.590 15:23:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:11.590 15:23:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:11.590 15:23:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:11.590 15:23:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:11.590 15:23:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.590 15:23:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:11.590 15:23:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:11.590 15:23:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:11.590 15:23:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:11.590 15:23:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:11.590 15:23:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:11.590 15:23:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:11.590 15:23:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:11.590 15:23:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:11.590 15:23:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:11.590 15:23:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:11.590 15:23:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:11.590 15:23:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:11.590 15:23:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:11.590 15:23:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.162 nvme0n1 00:27:12.162 15:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:12.162 15:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:12.162 15:23:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:12.162 15:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:12.162 15:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.162 15:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:12.162 15:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:12.162 15:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:12.162 15:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:12.162 15:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.162 15:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:12.162 15:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:12.162 15:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:27:12.162 15:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:12.162 15:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:12.162 15:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:12.162 15:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:12.162 15:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWU0M2YwNzhiZTkzODE4YTRlMWFlYmY5OTUwYjhlODM3YTY0ODAyZTk2Nzc3NjAwc68UWw==: 00:27:12.162 15:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmM3YTVlMTI3NDUwNDc1MmE2NDBiNmUxYTUwYzU1NjPzO+GX: 00:27:12.162 15:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:12.162 15:23:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:12.162 15:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWU0M2YwNzhiZTkzODE4YTRlMWFlYmY5OTUwYjhlODM3YTY0ODAyZTk2Nzc3NjAwc68UWw==: 00:27:12.162 15:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmM3YTVlMTI3NDUwNDc1MmE2NDBiNmUxYTUwYzU1NjPzO+GX: ]] 00:27:12.162 15:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmM3YTVlMTI3NDUwNDc1MmE2NDBiNmUxYTUwYzU1NjPzO+GX: 00:27:12.162 15:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:27:12.162 15:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:12.162 15:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:12.162 15:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:12.162 15:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:12.162 15:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:12.162 15:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:12.162 15:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:12.162 15:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.162 15:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:12.162 15:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:12.162 15:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:12.162 15:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:12.162 15:23:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:12.162 15:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:12.162 15:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:12.162 15:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:12.162 15:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:12.162 15:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:12.162 15:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:12.162 15:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:12.162 15:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:12.162 15:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:12.162 15:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.423 nvme0n1 00:27:12.423 15:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:12.423 15:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:12.423 15:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:12.423 15:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:12.423 15:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.684 15:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:12.684 15:23:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:12.684 15:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:12.684 15:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:12.684 15:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.684 15:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:12.684 15:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:12.684 15:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:27:12.684 15:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:12.684 15:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:12.684 15:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:12.684 15:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:12.684 15:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTMyMGUzOGI2MGMxODg5ZTk1NWU4NGNhNDA1NmFjY2M2OTVmMDBjNzY1OTlkMzhlY2E4MGEyNjRmZDE0ZTUwYimH2Zc=: 00:27:12.684 15:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:12.684 15:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:12.684 15:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:12.684 15:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTMyMGUzOGI2MGMxODg5ZTk1NWU4NGNhNDA1NmFjY2M2OTVmMDBjNzY1OTlkMzhlY2E4MGEyNjRmZDE0ZTUwYimH2Zc=: 00:27:12.684 15:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:12.684 15:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate 
sha256 ffdhe6144 4 00:27:12.684 15:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:12.684 15:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:12.684 15:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:12.684 15:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:12.684 15:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:12.684 15:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:12.684 15:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:12.684 15:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.684 15:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:12.684 15:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:12.684 15:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:12.684 15:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:12.684 15:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:12.684 15:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:12.684 15:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:12.684 15:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:12.684 15:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:12.684 15:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:12.684 15:23:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:12.684 15:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:12.684 15:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:12.684 15:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:12.684 15:23:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.256 nvme0n1 00:27:13.256 15:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.256 15:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:13.256 15:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:13.256 15:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.256 15:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.256 15:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.256 15:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:13.256 15:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:13.256 15:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.256 15:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.256 15:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.256 15:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:13.256 15:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:13.256 15:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:27:13.256 15:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:13.256 15:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:13.256 15:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:13.256 15:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:13.256 15:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGY3NThkZWQ2ZThiMWNmNWVkMjUzZTgxMjhhOGQyN2LoxVMu: 00:27:13.256 15:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Mjc0NmRkYmQyNzU0ZjE1ZWIxZDgyYjdlY2FkMDE1Yzk3ZWQ3NTFhNDQ0OGU0MTVjODFmOTc2MTE3ODlkODBjOZWwAVE=: 00:27:13.256 15:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:13.256 15:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:13.256 15:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGY3NThkZWQ2ZThiMWNmNWVkMjUzZTgxMjhhOGQyN2LoxVMu: 00:27:13.256 15:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Mjc0NmRkYmQyNzU0ZjE1ZWIxZDgyYjdlY2FkMDE1Yzk3ZWQ3NTFhNDQ0OGU0MTVjODFmOTc2MTE3ODlkODBjOZWwAVE=: ]] 00:27:13.256 15:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Mjc0NmRkYmQyNzU0ZjE1ZWIxZDgyYjdlY2FkMDE1Yzk3ZWQ3NTFhNDQ0OGU0MTVjODFmOTc2MTE3ODlkODBjOZWwAVE=: 00:27:13.256 15:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:27:13.256 15:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:13.256 15:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:13.256 15:23:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:13.256 15:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:13.256 15:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:13.256 15:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:13.256 15:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.256 15:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.256 15:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.256 15:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:13.256 15:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:13.256 15:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:13.256 15:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:13.256 15:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:13.256 15:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:13.256 15:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:13.256 15:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:13.256 15:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:13.256 15:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:13.256 15:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:13.256 15:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:13.256 15:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.256 15:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.855 nvme0n1 00:27:13.855 15:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.856 15:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:13.856 15:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:13.856 15:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.856 15:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.856 15:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.856 15:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:13.856 15:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:13.856 15:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.856 15:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.856 15:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.856 15:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:13.856 15:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:27:13.856 15:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:13.856 15:23:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:13.856 15:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:13.856 15:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:13.856 15:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWI5ODQxMzcxZDVmODlmMWVjOTBmMmM4YzMzNzVkMmQwYWIxOGM1MWFkOTlmY2I3dOOAPg==: 00:27:13.856 15:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjAwMWM3ZWY0NDU5MjZiMmI2NTU0NDQxMWE2NWQ0NTU5YmNlZjI4MjFhZjZmYWQwblMWtQ==: 00:27:14.117 15:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:14.117 15:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:14.117 15:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWI5ODQxMzcxZDVmODlmMWVjOTBmMmM4YzMzNzVkMmQwYWIxOGM1MWFkOTlmY2I3dOOAPg==: 00:27:14.117 15:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjAwMWM3ZWY0NDU5MjZiMmI2NTU0NDQxMWE2NWQ0NTU5YmNlZjI4MjFhZjZmYWQwblMWtQ==: ]] 00:27:14.117 15:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjAwMWM3ZWY0NDU5MjZiMmI2NTU0NDQxMWE2NWQ0NTU5YmNlZjI4MjFhZjZmYWQwblMWtQ==: 00:27:14.117 15:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:27:14.117 15:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:14.117 15:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:14.117 15:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:14.117 15:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:14.117 15:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:14.117 15:23:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:14.117 15:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.117 15:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.117 15:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.117 15:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:14.117 15:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:14.117 15:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:14.117 15:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:14.117 15:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:14.117 15:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:14.117 15:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:14.117 15:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:14.117 15:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:14.117 15:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:14.117 15:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:14.117 15:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:14.117 15:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.117 15:23:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.689 nvme0n1 00:27:14.689 15:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.689 15:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:14.689 15:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:14.689 15:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.689 15:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.689 15:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.689 15:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:14.689 15:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:14.689 15:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.689 15:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.689 15:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.689 15:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:14.689 15:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:27:14.689 15:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:14.689 15:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:14.689 15:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:14.689 15:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:14.689 15:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:MjBkOWRlMTRiOTlhNzgyY2ZmZWMxMTZiOTJmZGUyMTABRUtg: 00:27:14.689 15:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGYxZTRhYzUwM2Y2NTUwYWM0YjQ0NjhiNGU2MjQ3ZGS9W9fL: 00:27:14.689 15:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:14.689 15:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:14.689 15:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjBkOWRlMTRiOTlhNzgyY2ZmZWMxMTZiOTJmZGUyMTABRUtg: 00:27:14.689 15:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGYxZTRhYzUwM2Y2NTUwYWM0YjQ0NjhiNGU2MjQ3ZGS9W9fL: ]] 00:27:14.689 15:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGYxZTRhYzUwM2Y2NTUwYWM0YjQ0NjhiNGU2MjQ3ZGS9W9fL: 00:27:14.689 15:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:27:14.689 15:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:14.689 15:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:14.689 15:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:14.689 15:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:14.689 15:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:14.689 15:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:14.689 15:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.950 15:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.950 15:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.950 15:23:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:14.950 15:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:14.950 15:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:14.950 15:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:14.950 15:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:14.950 15:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:14.950 15:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:14.950 15:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:14.950 15:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:14.950 15:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:14.950 15:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:14.950 15:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:14.950 15:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.950 15:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.521 nvme0n1 00:27:15.521 15:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.521 15:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:15.521 15:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:15.521 15:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.521 15:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.521 15:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.521 15:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:15.521 15:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:15.521 15:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.521 15:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.521 15:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.521 15:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:15.521 15:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:27:15.521 15:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:15.521 15:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:15.521 15:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:15.521 15:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:15.521 15:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWU0M2YwNzhiZTkzODE4YTRlMWFlYmY5OTUwYjhlODM3YTY0ODAyZTk2Nzc3NjAwc68UWw==: 00:27:15.521 15:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmM3YTVlMTI3NDUwNDc1MmE2NDBiNmUxYTUwYzU1NjPzO+GX: 00:27:15.521 15:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:15.521 15:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:15.521 15:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@50 -- # echo DHHC-1:02:ZWU0M2YwNzhiZTkzODE4YTRlMWFlYmY5OTUwYjhlODM3YTY0ODAyZTk2Nzc3NjAwc68UWw==: 00:27:15.521 15:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmM3YTVlMTI3NDUwNDc1MmE2NDBiNmUxYTUwYzU1NjPzO+GX: ]] 00:27:15.521 15:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmM3YTVlMTI3NDUwNDc1MmE2NDBiNmUxYTUwYzU1NjPzO+GX: 00:27:15.521 15:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:27:15.521 15:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:15.521 15:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:15.521 15:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:15.521 15:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:15.521 15:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:15.781 15:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:15.781 15:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.781 15:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.781 15:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.781 15:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:15.781 15:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:15.781 15:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:15.781 15:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:15.781 15:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:15.781 15:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:15.781 15:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:15.781 15:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:15.781 15:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:15.781 15:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:15.781 15:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:15.781 15:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:15.781 15:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.781 15:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.351 nvme0n1 00:27:16.351 15:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.351 15:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:16.351 15:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:16.351 15:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.351 15:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.351 15:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.351 15:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:16.351 15:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:16.351 15:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.351 15:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.351 15:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.612 15:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:16.612 15:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:27:16.612 15:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:16.612 15:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:16.612 15:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:16.612 15:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:16.612 15:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTMyMGUzOGI2MGMxODg5ZTk1NWU4NGNhNDA1NmFjY2M2OTVmMDBjNzY1OTlkMzhlY2E4MGEyNjRmZDE0ZTUwYimH2Zc=: 00:27:16.612 15:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:16.612 15:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:16.612 15:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:16.612 15:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTMyMGUzOGI2MGMxODg5ZTk1NWU4NGNhNDA1NmFjY2M2OTVmMDBjNzY1OTlkMzhlY2E4MGEyNjRmZDE0ZTUwYimH2Zc=: 00:27:16.612 15:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:16.612 15:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:27:16.612 15:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:16.612 
15:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:16.612 15:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:16.612 15:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:16.612 15:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:16.612 15:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:16.612 15:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.612 15:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.612 15:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.612 15:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:16.612 15:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:16.612 15:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:16.612 15:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:16.612 15:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:16.612 15:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:16.612 15:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:16.612 15:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:16.612 15:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:16.612 15:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:16.612 15:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:16.612 15:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:16.612 15:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.612 15:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.185 nvme0n1 00:27:17.185 15:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.185 15:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:17.185 15:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:17.185 15:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.185 15:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.185 15:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.185 15:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:17.185 15:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:17.185 15:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.185 15:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.185 15:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.185 15:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:17.185 15:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:17.185 15:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid 
in "${!keys[@]}" 00:27:17.185 15:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:27:17.185 15:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:17.185 15:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:17.185 15:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:17.185 15:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:17.185 15:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGY3NThkZWQ2ZThiMWNmNWVkMjUzZTgxMjhhOGQyN2LoxVMu: 00:27:17.185 15:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Mjc0NmRkYmQyNzU0ZjE1ZWIxZDgyYjdlY2FkMDE1Yzk3ZWQ3NTFhNDQ0OGU0MTVjODFmOTc2MTE3ODlkODBjOZWwAVE=: 00:27:17.185 15:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:17.185 15:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:17.185 15:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGY3NThkZWQ2ZThiMWNmNWVkMjUzZTgxMjhhOGQyN2LoxVMu: 00:27:17.185 15:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Mjc0NmRkYmQyNzU0ZjE1ZWIxZDgyYjdlY2FkMDE1Yzk3ZWQ3NTFhNDQ0OGU0MTVjODFmOTc2MTE3ODlkODBjOZWwAVE=: ]] 00:27:17.185 15:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Mjc0NmRkYmQyNzU0ZjE1ZWIxZDgyYjdlY2FkMDE1Yzk3ZWQ3NTFhNDQ0OGU0MTVjODFmOTc2MTE3ODlkODBjOZWwAVE=: 00:27:17.185 15:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:27:17.185 15:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:17.185 15:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:17.185 15:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe2048 00:27:17.185 15:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:17.185 15:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:17.185 15:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:17.185 15:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.185 15:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.185 15:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.185 15:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:17.185 15:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:17.185 15:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:17.185 15:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:17.446 15:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:17.446 15:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:17.446 15:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:17.446 15:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:17.446 15:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:17.446 15:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:17.446 15:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:17.446 15:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:17.446 15:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.446 15:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.446 nvme0n1 00:27:17.446 15:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.446 15:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:17.447 15:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.447 15:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:17.447 15:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.447 15:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.447 15:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:17.447 15:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:17.447 15:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.447 15:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.447 15:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.447 15:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:17.447 15:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:27:17.447 15:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:17.447 15:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:17.447 
15:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:17.447 15:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:17.447 15:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWI5ODQxMzcxZDVmODlmMWVjOTBmMmM4YzMzNzVkMmQwYWIxOGM1MWFkOTlmY2I3dOOAPg==: 00:27:17.447 15:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjAwMWM3ZWY0NDU5MjZiMmI2NTU0NDQxMWE2NWQ0NTU5YmNlZjI4MjFhZjZmYWQwblMWtQ==: 00:27:17.447 15:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:17.447 15:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:17.447 15:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWI5ODQxMzcxZDVmODlmMWVjOTBmMmM4YzMzNzVkMmQwYWIxOGM1MWFkOTlmY2I3dOOAPg==: 00:27:17.447 15:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjAwMWM3ZWY0NDU5MjZiMmI2NTU0NDQxMWE2NWQ0NTU5YmNlZjI4MjFhZjZmYWQwblMWtQ==: ]] 00:27:17.447 15:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjAwMWM3ZWY0NDU5MjZiMmI2NTU0NDQxMWE2NWQ0NTU5YmNlZjI4MjFhZjZmYWQwblMWtQ==: 00:27:17.447 15:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:27:17.447 15:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:17.447 15:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:17.447 15:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:17.447 15:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:17.447 15:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:17.447 15:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe2048 00:27:17.447 15:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.447 15:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.447 15:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.447 15:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:17.447 15:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:17.447 15:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:17.447 15:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:17.447 15:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:17.447 15:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:17.447 15:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:17.447 15:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:17.447 15:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:17.447 15:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:17.447 15:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:17.447 15:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:17.447 15:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.447 15:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.708 nvme0n1 
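The loop traced above calls connect_authenticate once per (digest, dhgroup, keyid) tuple, and the `bdev_nvme_attach_controller` invocation only carries `--dhchap-ctrlr-key` when a controller key exists for that keyid (keyid 4 has an empty ckey in this run). A minimal sketch of how those argument lists are assembled — the method name, flags, NQNs, and address are taken verbatim from the log; the helper function itself is hypothetical, for illustration only:

```python
# Sketch (hypothetical helper): assemble the bdev_nvme_attach_controller
# argument list for one keyid, mirroring the rpc_cmd calls in the log.

def attach_args(keyid, ckeys, target_ip="10.0.0.1", port="4420"):
    """Build the attach_controller argv for one dhchap keyid."""
    args = [
        "bdev_nvme_attach_controller",
        "-b", "nvme0", "-t", "tcp", "-f", "ipv4",
        "-a", target_ip, "-s", port,
        "-q", "nqn.2024-02.io.spdk:host0",
        "-n", "nqn.2024-02.io.spdk:cnode0",
        "--dhchap-key", f"key{keyid}",
    ]
    # The controller (bidirectional) key is appended only when a ckey was
    # generated for this keyid -- keyid 4 in the log above has none.
    if ckeys.get(keyid):
        args += ["--dhchap-ctrlr-key", f"ckey{keyid}"]
    return args

# ckey presence as seen in this run: keyids 0-3 are bidirectional, 4 is not.
ckeys = {0: "ckey0", 1: "ckey1", 2: "ckey2", 3: "ckey3", 4: ""}
print(" ".join(attach_args(2, ckeys)))
```

This matches the pattern in the trace, where `auth.sh@58` expands `ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})` to an empty array for keyid 4.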
00:27:17.708 15:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.708 15:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:17.708 15:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:17.708 15:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.708 15:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.708 15:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.708 15:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:17.708 15:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:17.708 15:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.708 15:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.708 15:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.708 15:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:17.708 15:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:27:17.708 15:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:17.708 15:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:17.708 15:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:17.708 15:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:17.708 15:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjBkOWRlMTRiOTlhNzgyY2ZmZWMxMTZiOTJmZGUyMTABRUtg: 00:27:17.708 15:23:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGYxZTRhYzUwM2Y2NTUwYWM0YjQ0NjhiNGU2MjQ3ZGS9W9fL: 00:27:17.708 15:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:17.708 15:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:17.708 15:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjBkOWRlMTRiOTlhNzgyY2ZmZWMxMTZiOTJmZGUyMTABRUtg: 00:27:17.708 15:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGYxZTRhYzUwM2Y2NTUwYWM0YjQ0NjhiNGU2MjQ3ZGS9W9fL: ]] 00:27:17.708 15:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGYxZTRhYzUwM2Y2NTUwYWM0YjQ0NjhiNGU2MjQ3ZGS9W9fL: 00:27:17.708 15:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:27:17.708 15:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:17.708 15:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:17.708 15:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:17.708 15:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:17.708 15:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:17.708 15:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:17.708 15:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.708 15:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.708 15:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.708 15:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:17.708 
15:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:17.708 15:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:17.708 15:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:17.708 15:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:17.709 15:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:17.709 15:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:17.709 15:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:17.709 15:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:17.709 15:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:17.709 15:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:17.709 15:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:17.709 15:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.709 15:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.969 nvme0n1 00:27:17.969 15:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.969 15:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:17.969 15:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:17.969 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.969 15:23:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.969 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.969 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:17.969 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:17.970 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.970 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.970 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.970 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:17.970 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:27:17.970 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:17.970 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:17.970 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:17.970 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:17.970 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWU0M2YwNzhiZTkzODE4YTRlMWFlYmY5OTUwYjhlODM3YTY0ODAyZTk2Nzc3NjAwc68UWw==: 00:27:17.970 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmM3YTVlMTI3NDUwNDc1MmE2NDBiNmUxYTUwYzU1NjPzO+GX: 00:27:17.970 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:17.970 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:17.970 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:ZWU0M2YwNzhiZTkzODE4YTRlMWFlYmY5OTUwYjhlODM3YTY0ODAyZTk2Nzc3NjAwc68UWw==: 00:27:17.970 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmM3YTVlMTI3NDUwNDc1MmE2NDBiNmUxYTUwYzU1NjPzO+GX: ]] 00:27:17.970 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmM3YTVlMTI3NDUwNDc1MmE2NDBiNmUxYTUwYzU1NjPzO+GX: 00:27:17.970 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:27:17.970 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:17.970 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:17.970 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:17.970 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:17.970 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:17.970 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:17.970 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.970 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.970 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.970 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:17.970 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:17.970 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:17.970 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:17.970 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:17.970 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:17.970 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:17.970 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:17.970 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:17.970 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:17.970 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:17.970 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:17.970 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.970 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.231 nvme0n1 00:27:18.231 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.231 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:18.231 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:18.231 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.231 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.231 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.231 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:18.231 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:27:18.231 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.231 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.231 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.231 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:18.232 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:27:18.232 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:18.232 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:18.232 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:18.232 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:18.232 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTMyMGUzOGI2MGMxODg5ZTk1NWU4NGNhNDA1NmFjY2M2OTVmMDBjNzY1OTlkMzhlY2E4MGEyNjRmZDE0ZTUwYimH2Zc=: 00:27:18.232 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:18.232 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:18.232 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:18.232 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTMyMGUzOGI2MGMxODg5ZTk1NWU4NGNhNDA1NmFjY2M2OTVmMDBjNzY1OTlkMzhlY2E4MGEyNjRmZDE0ZTUwYimH2Zc=: 00:27:18.232 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:18.232 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:27:18.232 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:18.232 15:23:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:18.232 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:18.232 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:18.232 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:18.232 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:18.232 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.232 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.232 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.232 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:18.232 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:18.232 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:18.232 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:18.232 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:18.232 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:18.232 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:18.232 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:18.232 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:18.232 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:18.232 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:18.232 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:18.232 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.232 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.494 nvme0n1 00:27:18.494 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.494 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:18.494 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:18.494 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.494 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.494 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.494 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:18.494 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:18.494 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.494 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.494 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.494 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:18.494 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:18.494 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe3072 0 00:27:18.494 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:18.494 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:18.494 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:18.494 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:18.494 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGY3NThkZWQ2ZThiMWNmNWVkMjUzZTgxMjhhOGQyN2LoxVMu: 00:27:18.494 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Mjc0NmRkYmQyNzU0ZjE1ZWIxZDgyYjdlY2FkMDE1Yzk3ZWQ3NTFhNDQ0OGU0MTVjODFmOTc2MTE3ODlkODBjOZWwAVE=: 00:27:18.494 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:18.494 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:18.494 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGY3NThkZWQ2ZThiMWNmNWVkMjUzZTgxMjhhOGQyN2LoxVMu: 00:27:18.494 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Mjc0NmRkYmQyNzU0ZjE1ZWIxZDgyYjdlY2FkMDE1Yzk3ZWQ3NTFhNDQ0OGU0MTVjODFmOTc2MTE3ODlkODBjOZWwAVE=: ]] 00:27:18.494 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Mjc0NmRkYmQyNzU0ZjE1ZWIxZDgyYjdlY2FkMDE1Yzk3ZWQ3NTFhNDQ0OGU0MTVjODFmOTc2MTE3ODlkODBjOZWwAVE=: 00:27:18.494 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:27:18.494 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:18.494 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:18.494 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:18.494 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
keyid=0 00:27:18.494 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:18.494 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:18.494 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.494 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.494 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.494 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:18.494 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:18.494 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:18.494 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:18.495 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:18.495 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:18.495 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:18.495 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:18.495 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:18.495 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:18.495 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:18.495 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:18.495 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.495 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.756 nvme0n1 00:27:18.756 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.756 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:18.756 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:18.756 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.756 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.756 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.756 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:18.756 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:18.756 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.757 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.757 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.757 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:18.757 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:27:18.757 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:18.757 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:18.757 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:18.757 
15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:18.757 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWI5ODQxMzcxZDVmODlmMWVjOTBmMmM4YzMzNzVkMmQwYWIxOGM1MWFkOTlmY2I3dOOAPg==: 00:27:18.757 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjAwMWM3ZWY0NDU5MjZiMmI2NTU0NDQxMWE2NWQ0NTU5YmNlZjI4MjFhZjZmYWQwblMWtQ==: 00:27:18.757 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:18.757 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:18.757 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWI5ODQxMzcxZDVmODlmMWVjOTBmMmM4YzMzNzVkMmQwYWIxOGM1MWFkOTlmY2I3dOOAPg==: 00:27:18.757 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjAwMWM3ZWY0NDU5MjZiMmI2NTU0NDQxMWE2NWQ0NTU5YmNlZjI4MjFhZjZmYWQwblMWtQ==: ]] 00:27:18.757 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjAwMWM3ZWY0NDU5MjZiMmI2NTU0NDQxMWE2NWQ0NTU5YmNlZjI4MjFhZjZmYWQwblMWtQ==: 00:27:18.757 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:27:18.757 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:18.757 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:18.757 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:18.757 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:18.757 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:18.757 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:18.757 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.757 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.757 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.757 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:18.757 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:18.757 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:18.757 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:18.757 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:18.757 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:18.757 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:18.757 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:18.757 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:18.757 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:18.757 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:18.757 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:18.757 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.757 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.019 nvme0n1 00:27:19.019 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:27:19.019 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:19.019 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:19.019 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.019 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.019 15:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.019 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:19.019 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:19.019 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.019 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.019 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.019 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:19.019 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:27:19.019 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:19.019 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:19.019 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:19.019 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:19.019 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjBkOWRlMTRiOTlhNzgyY2ZmZWMxMTZiOTJmZGUyMTABRUtg: 00:27:19.019 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGYxZTRhYzUwM2Y2NTUwYWM0YjQ0NjhiNGU2MjQ3ZGS9W9fL: 
00:27:19.019 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:19.019 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:19.019 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjBkOWRlMTRiOTlhNzgyY2ZmZWMxMTZiOTJmZGUyMTABRUtg: 00:27:19.019 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGYxZTRhYzUwM2Y2NTUwYWM0YjQ0NjhiNGU2MjQ3ZGS9W9fL: ]] 00:27:19.019 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGYxZTRhYzUwM2Y2NTUwYWM0YjQ0NjhiNGU2MjQ3ZGS9W9fL: 00:27:19.019 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:27:19.019 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:19.019 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:19.019 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:19.019 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:19.019 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:19.019 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:19.019 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.019 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.019 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.019 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:19.019 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:19.019 15:23:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:19.019 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:19.019 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:19.019 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:19.019 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:19.019 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:19.019 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:19.019 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:19.019 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:19.019 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:19.019 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.019 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.281 nvme0n1 00:27:19.281 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.281 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:19.281 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:19.281 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.281 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.281 15:23:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.281 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:19.281 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:19.281 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.281 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.281 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.281 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:19.281 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:27:19.281 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:19.281 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:19.281 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:19.281 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:19.281 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWU0M2YwNzhiZTkzODE4YTRlMWFlYmY5OTUwYjhlODM3YTY0ODAyZTk2Nzc3NjAwc68UWw==: 00:27:19.281 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmM3YTVlMTI3NDUwNDc1MmE2NDBiNmUxYTUwYzU1NjPzO+GX: 00:27:19.281 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:19.281 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:19.281 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWU0M2YwNzhiZTkzODE4YTRlMWFlYmY5OTUwYjhlODM3YTY0ODAyZTk2Nzc3NjAwc68UWw==: 00:27:19.281 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # [[ -z DHHC-1:00:NmM3YTVlMTI3NDUwNDc1MmE2NDBiNmUxYTUwYzU1NjPzO+GX: ]] 00:27:19.281 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmM3YTVlMTI3NDUwNDc1MmE2NDBiNmUxYTUwYzU1NjPzO+GX: 00:27:19.281 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:27:19.281 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:19.281 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:19.281 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:19.281 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:19.281 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:19.281 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:19.281 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.281 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.281 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.281 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:19.281 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:19.281 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:19.281 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:19.281 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:19.281 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:19.281 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:19.281 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:19.281 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:19.281 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:19.281 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:19.281 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:19.281 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.281 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.543 nvme0n1 00:27:19.543 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.543 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:19.543 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:19.543 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.544 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.544 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.544 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:19.544 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:19.544 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:27:19.544 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.544 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.544 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:19.544 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:27:19.544 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:19.544 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:19.544 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:19.544 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:19.544 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTMyMGUzOGI2MGMxODg5ZTk1NWU4NGNhNDA1NmFjY2M2OTVmMDBjNzY1OTlkMzhlY2E4MGEyNjRmZDE0ZTUwYimH2Zc=: 00:27:19.544 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:19.544 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:19.544 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:19.544 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTMyMGUzOGI2MGMxODg5ZTk1NWU4NGNhNDA1NmFjY2M2OTVmMDBjNzY1OTlkMzhlY2E4MGEyNjRmZDE0ZTUwYimH2Zc=: 00:27:19.544 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:19.544 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:27:19.544 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:19.544 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:19.544 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- 
# dhgroup=ffdhe3072 00:27:19.544 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:19.544 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:19.544 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:19.544 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.544 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.544 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.544 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:19.544 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:19.544 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:19.544 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:19.544 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:19.544 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:19.544 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:19.544 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:19.544 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:19.544 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:19.544 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:19.544 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 
-t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:19.544 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.544 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.806 nvme0n1 00:27:19.806 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.806 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:19.806 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:19.806 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.806 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.806 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.806 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:19.806 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:19.806 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.806 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.806 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.806 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:19.806 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:19.806 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:27:19.806 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:19.806 15:23:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:19.806 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:19.806 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:19.806 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGY3NThkZWQ2ZThiMWNmNWVkMjUzZTgxMjhhOGQyN2LoxVMu: 00:27:19.806 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Mjc0NmRkYmQyNzU0ZjE1ZWIxZDgyYjdlY2FkMDE1Yzk3ZWQ3NTFhNDQ0OGU0MTVjODFmOTc2MTE3ODlkODBjOZWwAVE=: 00:27:19.806 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:19.806 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:19.806 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGY3NThkZWQ2ZThiMWNmNWVkMjUzZTgxMjhhOGQyN2LoxVMu: 00:27:19.806 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Mjc0NmRkYmQyNzU0ZjE1ZWIxZDgyYjdlY2FkMDE1Yzk3ZWQ3NTFhNDQ0OGU0MTVjODFmOTc2MTE3ODlkODBjOZWwAVE=: ]] 00:27:19.806 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Mjc0NmRkYmQyNzU0ZjE1ZWIxZDgyYjdlY2FkMDE1Yzk3ZWQ3NTFhNDQ0OGU0MTVjODFmOTc2MTE3ODlkODBjOZWwAVE=: 00:27:19.806 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:27:19.806 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:19.806 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:19.806 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:19.806 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:19.806 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:19.806 15:23:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:19.806 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.806 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.806 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.806 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:19.806 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:19.806 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:19.806 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:19.806 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:19.806 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:19.806 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:19.806 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:19.806 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:19.806 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:19.806 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:19.806 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:19.806 15:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.806 15:23:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.075 nvme0n1 00:27:20.075 15:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.075 15:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:20.075 15:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:20.075 15:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.075 15:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.075 15:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.075 15:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:20.075 15:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:20.075 15:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.075 15:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.075 15:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.075 15:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:20.075 15:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:27:20.075 15:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:20.075 15:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:20.075 15:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:20.075 15:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:20.075 15:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NWI5ODQxMzcxZDVmODlmMWVjOTBmMmM4YzMzNzVkMmQwYWIxOGM1MWFkOTlmY2I3dOOAPg==: 00:27:20.075 15:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjAwMWM3ZWY0NDU5MjZiMmI2NTU0NDQxMWE2NWQ0NTU5YmNlZjI4MjFhZjZmYWQwblMWtQ==: 00:27:20.075 15:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:20.075 15:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:20.075 15:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWI5ODQxMzcxZDVmODlmMWVjOTBmMmM4YzMzNzVkMmQwYWIxOGM1MWFkOTlmY2I3dOOAPg==: 00:27:20.075 15:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjAwMWM3ZWY0NDU5MjZiMmI2NTU0NDQxMWE2NWQ0NTU5YmNlZjI4MjFhZjZmYWQwblMWtQ==: ]] 00:27:20.075 15:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjAwMWM3ZWY0NDU5MjZiMmI2NTU0NDQxMWE2NWQ0NTU5YmNlZjI4MjFhZjZmYWQwblMWtQ==: 00:27:20.075 15:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:27:20.075 15:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:20.075 15:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:20.075 15:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:20.075 15:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:20.075 15:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:20.075 15:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:20.075 15:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.075 15:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.075 
15:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.075 15:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:20.075 15:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:20.075 15:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:20.075 15:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:20.075 15:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:20.075 15:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:20.075 15:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:20.075 15:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:20.075 15:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:20.075 15:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:20.075 15:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:20.075 15:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:20.075 15:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.075 15:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.399 nvme0n1 00:27:20.399 15:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.399 15:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:20.399 15:23:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:20.399 15:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.399 15:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.399 15:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.399 15:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:20.399 15:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:20.399 15:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.399 15:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.399 15:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.399 15:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:20.399 15:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:27:20.399 15:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:20.399 15:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:20.399 15:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:20.399 15:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:20.399 15:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjBkOWRlMTRiOTlhNzgyY2ZmZWMxMTZiOTJmZGUyMTABRUtg: 00:27:20.399 15:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGYxZTRhYzUwM2Y2NTUwYWM0YjQ0NjhiNGU2MjQ3ZGS9W9fL: 00:27:20.399 15:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:20.399 15:23:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:20.399 15:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjBkOWRlMTRiOTlhNzgyY2ZmZWMxMTZiOTJmZGUyMTABRUtg: 00:27:20.399 15:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGYxZTRhYzUwM2Y2NTUwYWM0YjQ0NjhiNGU2MjQ3ZGS9W9fL: ]] 00:27:20.399 15:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGYxZTRhYzUwM2Y2NTUwYWM0YjQ0NjhiNGU2MjQ3ZGS9W9fL: 00:27:20.399 15:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:27:20.399 15:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:20.399 15:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:20.399 15:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:20.399 15:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:20.399 15:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:20.399 15:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:20.399 15:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.399 15:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.399 15:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.399 15:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:20.399 15:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:20.399 15:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:20.399 15:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:27:20.399 15:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:20.399 15:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:20.399 15:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:20.399 15:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:20.399 15:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:20.399 15:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:20.399 15:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:20.399 15:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:20.399 15:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.399 15:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.971 nvme0n1 00:27:20.971 15:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.971 15:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:20.971 15:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:20.971 15:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.971 15:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.971 15:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.971 15:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:20.971 15:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:20.971 15:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.971 15:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.971 15:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.971 15:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:20.971 15:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:27:20.971 15:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:20.971 15:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:20.971 15:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:20.971 15:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:20.971 15:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWU0M2YwNzhiZTkzODE4YTRlMWFlYmY5OTUwYjhlODM3YTY0ODAyZTk2Nzc3NjAwc68UWw==: 00:27:20.971 15:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmM3YTVlMTI3NDUwNDc1MmE2NDBiNmUxYTUwYzU1NjPzO+GX: 00:27:20.971 15:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:20.971 15:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:20.971 15:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWU0M2YwNzhiZTkzODE4YTRlMWFlYmY5OTUwYjhlODM3YTY0ODAyZTk2Nzc3NjAwc68UWw==: 00:27:20.971 15:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmM3YTVlMTI3NDUwNDc1MmE2NDBiNmUxYTUwYzU1NjPzO+GX: ]] 00:27:20.971 15:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:00:NmM3YTVlMTI3NDUwNDc1MmE2NDBiNmUxYTUwYzU1NjPzO+GX: 00:27:20.971 15:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:27:20.971 15:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:20.971 15:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:20.971 15:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:20.971 15:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:20.971 15:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:20.971 15:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:20.971 15:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.971 15:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.971 15:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.971 15:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:20.971 15:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:20.971 15:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:20.971 15:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:20.971 15:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:20.971 15:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:20.971 15:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:20.971 15:23:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:20.971 15:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:20.971 15:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:20.971 15:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:20.971 15:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:20.971 15:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.971 15:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.233 nvme0n1 00:27:21.233 15:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.233 15:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:21.233 15:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:21.233 15:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.233 15:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.233 15:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.233 15:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:21.233 15:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:21.233 15:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.233 15:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.233 15:23:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.233 15:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:21.233 15:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:27:21.233 15:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:21.233 15:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:21.233 15:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:21.233 15:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:21.233 15:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTMyMGUzOGI2MGMxODg5ZTk1NWU4NGNhNDA1NmFjY2M2OTVmMDBjNzY1OTlkMzhlY2E4MGEyNjRmZDE0ZTUwYimH2Zc=: 00:27:21.233 15:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:21.233 15:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:21.233 15:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:21.233 15:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTMyMGUzOGI2MGMxODg5ZTk1NWU4NGNhNDA1NmFjY2M2OTVmMDBjNzY1OTlkMzhlY2E4MGEyNjRmZDE0ZTUwYimH2Zc=: 00:27:21.233 15:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:21.233 15:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:27:21.233 15:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:21.233 15:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:21.233 15:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:21.233 15:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:21.233 15:23:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:21.233 15:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:21.233 15:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.233 15:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.233 15:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.233 15:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:21.233 15:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:21.233 15:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:21.233 15:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:21.233 15:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:21.233 15:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:21.233 15:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:21.233 15:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:21.233 15:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:21.233 15:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:21.233 15:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:21.233 15:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:21.233 
15:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.233 15:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.494 nvme0n1 00:27:21.494 15:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.494 15:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:21.494 15:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:21.494 15:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.494 15:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.494 15:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.494 15:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:21.494 15:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:21.494 15:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.494 15:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.494 15:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.494 15:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:21.494 15:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:21.494 15:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:27:21.494 15:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:21.494 15:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:21.494 15:23:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:21.494 15:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:21.494 15:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGY3NThkZWQ2ZThiMWNmNWVkMjUzZTgxMjhhOGQyN2LoxVMu: 00:27:21.494 15:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Mjc0NmRkYmQyNzU0ZjE1ZWIxZDgyYjdlY2FkMDE1Yzk3ZWQ3NTFhNDQ0OGU0MTVjODFmOTc2MTE3ODlkODBjOZWwAVE=: 00:27:21.494 15:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:21.494 15:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:21.494 15:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGY3NThkZWQ2ZThiMWNmNWVkMjUzZTgxMjhhOGQyN2LoxVMu: 00:27:21.494 15:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Mjc0NmRkYmQyNzU0ZjE1ZWIxZDgyYjdlY2FkMDE1Yzk3ZWQ3NTFhNDQ0OGU0MTVjODFmOTc2MTE3ODlkODBjOZWwAVE=: ]] 00:27:21.494 15:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Mjc0NmRkYmQyNzU0ZjE1ZWIxZDgyYjdlY2FkMDE1Yzk3ZWQ3NTFhNDQ0OGU0MTVjODFmOTc2MTE3ODlkODBjOZWwAVE=: 00:27:21.494 15:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:27:21.494 15:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:21.494 15:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:21.494 15:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:21.494 15:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:21.494 15:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:21.494 15:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:27:21.494 15:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.494 15:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.494 15:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.494 15:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:21.494 15:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:21.494 15:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:21.494 15:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:21.494 15:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:21.495 15:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:21.495 15:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:21.495 15:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:21.495 15:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:21.495 15:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:21.495 15:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:21.495 15:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:21.495 15:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.495 15:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.067 nvme0n1 
00:27:22.068 15:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.068 15:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:22.068 15:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:22.068 15:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.068 15:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.068 15:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.068 15:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:22.068 15:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:22.068 15:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.068 15:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.068 15:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.068 15:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:22.068 15:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:27:22.068 15:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:22.068 15:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:22.068 15:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:22.068 15:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:22.068 15:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWI5ODQxMzcxZDVmODlmMWVjOTBmMmM4YzMzNzVkMmQwYWIxOGM1MWFkOTlmY2I3dOOAPg==: 00:27:22.068 15:23:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjAwMWM3ZWY0NDU5MjZiMmI2NTU0NDQxMWE2NWQ0NTU5YmNlZjI4MjFhZjZmYWQwblMWtQ==: 00:27:22.068 15:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:22.068 15:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:22.068 15:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWI5ODQxMzcxZDVmODlmMWVjOTBmMmM4YzMzNzVkMmQwYWIxOGM1MWFkOTlmY2I3dOOAPg==: 00:27:22.068 15:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjAwMWM3ZWY0NDU5MjZiMmI2NTU0NDQxMWE2NWQ0NTU5YmNlZjI4MjFhZjZmYWQwblMWtQ==: ]] 00:27:22.068 15:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjAwMWM3ZWY0NDU5MjZiMmI2NTU0NDQxMWE2NWQ0NTU5YmNlZjI4MjFhZjZmYWQwblMWtQ==: 00:27:22.068 15:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:27:22.068 15:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:22.068 15:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:22.068 15:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:22.068 15:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:22.068 15:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:22.068 15:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:22.068 15:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.068 15:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.068 15:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.068 
15:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:22.068 15:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:22.068 15:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:22.068 15:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:22.068 15:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:22.068 15:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:22.068 15:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:22.068 15:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:22.068 15:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:22.068 15:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:22.068 15:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:22.068 15:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:22.068 15:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.068 15:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.641 nvme0n1 00:27:22.641 15:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.641 15:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:22.641 15:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:22.642 15:23:14 
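The host-side half of each iteration (`connect_authenticate`, `auth.sh@55`-`@65`) issues the same RPC sequence every time; only the digest, DH group, and keyid vary. A dry-run sketch with `rpc_cmd` stubbed to echo, so the argument construction — including the `${ckeys[keyid]:+…}` conditional controller key from `auth.sh@58` — can be exercised without a running SPDK target (key values here are placeholders):

```shell
#!/usr/bin/env bash
# Dry-run sketch of connect_authenticate from host/auth.sh. rpc_cmd is stubbed
# to echo so no SPDK target is needed; ckeys contents are placeholders.
ckeys=("ctrl-key-present" "")   # in this sketch, keyid 1 has no controller key

rpc_cmd() { echo "rpc_cmd $*"; }

connect_authenticate() {
    local digest=$1 dhgroup=$2 keyid=$3
    # auth.sh@58: expands to '--dhchap-ctrlr-key ckeyN' only when ckeys[keyid]
    # is non-empty, otherwise to nothing at all
    local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" ${ckey[@]+"${ckey[@]}"}
    # Verify the controller authenticated, then tear it down (auth.sh@64-65)
    rpc_cmd bdev_nvme_get_controllers
    rpc_cmd bdev_nvme_detach_controller nvme0
}
```

The trace's outer loops (`auth.sh@101`-`@104`) then just call this once per `(dhgroup, keyid)` pair, which is why the same five-line pattern repeats for keyids 0 through 4 under `ffdhe6144` and again under `ffdhe8192`.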
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.642 15:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.642 15:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.642 15:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:22.642 15:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:22.642 15:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.642 15:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.642 15:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.642 15:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:22.642 15:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:27:22.642 15:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:22.642 15:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:22.642 15:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:22.642 15:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:22.642 15:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjBkOWRlMTRiOTlhNzgyY2ZmZWMxMTZiOTJmZGUyMTABRUtg: 00:27:22.642 15:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGYxZTRhYzUwM2Y2NTUwYWM0YjQ0NjhiNGU2MjQ3ZGS9W9fL: 00:27:22.642 15:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:22.642 15:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:22.642 15:23:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjBkOWRlMTRiOTlhNzgyY2ZmZWMxMTZiOTJmZGUyMTABRUtg: 00:27:22.642 15:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGYxZTRhYzUwM2Y2NTUwYWM0YjQ0NjhiNGU2MjQ3ZGS9W9fL: ]] 00:27:22.642 15:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGYxZTRhYzUwM2Y2NTUwYWM0YjQ0NjhiNGU2MjQ3ZGS9W9fL: 00:27:22.642 15:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:27:22.642 15:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:22.642 15:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:22.642 15:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:22.642 15:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:22.642 15:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:22.642 15:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:22.642 15:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.642 15:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.642 15:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.642 15:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:22.642 15:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:22.642 15:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:22.642 15:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:22.642 15:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:22.642 15:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:22.642 15:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:22.642 15:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:22.642 15:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:22.642 15:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:22.642 15:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:22.642 15:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:22.642 15:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.642 15:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.215 nvme0n1 00:27:23.215 15:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.215 15:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:23.215 15:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:23.215 15:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.215 15:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.215 15:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.215 15:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:23.215 15:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:23.215 15:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.215 15:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.215 15:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.215 15:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:23.215 15:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:27:23.215 15:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:23.215 15:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:23.215 15:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:23.215 15:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:23.215 15:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWU0M2YwNzhiZTkzODE4YTRlMWFlYmY5OTUwYjhlODM3YTY0ODAyZTk2Nzc3NjAwc68UWw==: 00:27:23.215 15:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmM3YTVlMTI3NDUwNDc1MmE2NDBiNmUxYTUwYzU1NjPzO+GX: 00:27:23.215 15:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:23.215 15:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:23.215 15:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWU0M2YwNzhiZTkzODE4YTRlMWFlYmY5OTUwYjhlODM3YTY0ODAyZTk2Nzc3NjAwc68UWw==: 00:27:23.215 15:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmM3YTVlMTI3NDUwNDc1MmE2NDBiNmUxYTUwYzU1NjPzO+GX: ]] 00:27:23.215 15:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmM3YTVlMTI3NDUwNDc1MmE2NDBiNmUxYTUwYzU1NjPzO+GX: 00:27:23.215 15:23:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:27:23.215 15:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:23.215 15:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:23.215 15:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:23.215 15:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:23.215 15:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:23.215 15:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:23.215 15:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.215 15:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.215 15:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.215 15:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:23.215 15:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:23.215 15:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:23.215 15:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:23.215 15:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:23.215 15:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:23.215 15:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:23.215 15:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:23.215 15:23:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:23.215 15:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:23.215 15:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:23.215 15:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:23.215 15:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.215 15:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.788 nvme0n1 00:27:23.788 15:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.788 15:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:23.788 15:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:23.788 15:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.788 15:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.788 15:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.788 15:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:23.788 15:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:23.788 15:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.788 15:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.788 15:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.788 15:23:15 
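The `get_main_ns_ip` block that repeats before every attach (`nvmf/common.sh@741`-`@755`) only maps the transport to the right address variable: RDMA runs dial `NVMF_FIRST_TARGET_IP`, TCP runs (as in this log) dial `NVMF_INITIATOR_IP`, which is why `10.0.0.1` is echoed each time. A condensed sketch; the transport and both addresses are hard-coded stand-ins for the test environment:

```shell
#!/usr/bin/env bash
# Condensed sketch of get_main_ns_ip (nvmf/common.sh). TEST_TRANSPORT and the
# two addresses are illustrative stand-ins for the real test environment.
TEST_TRANSPORT=tcp
NVMF_INITIATOR_IP=10.0.0.1
NVMF_FIRST_TARGET_IP=10.0.0.2

get_main_ns_ip() {
    local ip
    local -A ip_candidates=(
        [rdma]=NVMF_FIRST_TARGET_IP  # RDMA runs dial the target-side address
        [tcp]=NVMF_INITIATOR_IP      # TCP runs (this log) dial the initiator side
    )
    [[ -n $TEST_TRANSPORT && -n ${ip_candidates[$TEST_TRANSPORT]} ]] || return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}  # name of the variable to use...
    ip=${!ip}                             # ...then its value via indirection
    [[ -n $ip ]] && echo "$ip"
}
```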
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:23.788 15:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:27:23.788 15:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:23.788 15:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:23.788 15:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:23.788 15:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:23.788 15:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTMyMGUzOGI2MGMxODg5ZTk1NWU4NGNhNDA1NmFjY2M2OTVmMDBjNzY1OTlkMzhlY2E4MGEyNjRmZDE0ZTUwYimH2Zc=: 00:27:23.788 15:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:23.788 15:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:23.788 15:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:23.788 15:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTMyMGUzOGI2MGMxODg5ZTk1NWU4NGNhNDA1NmFjY2M2OTVmMDBjNzY1OTlkMzhlY2E4MGEyNjRmZDE0ZTUwYimH2Zc=: 00:27:23.788 15:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:23.788 15:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:27:23.789 15:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:23.789 15:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:23.789 15:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:23.789 15:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:23.789 15:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:27:23.789 15:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:23.789 15:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.789 15:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.789 15:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.789 15:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:23.789 15:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:23.789 15:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:23.789 15:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:23.789 15:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:23.789 15:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:23.789 15:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:23.789 15:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:23.789 15:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:23.789 15:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:23.789 15:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:23.789 15:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:23.789 15:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 
00:27:23.789 15:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.363 nvme0n1 00:27:24.363 15:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.363 15:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:24.363 15:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.363 15:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:24.363 15:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.363 15:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.363 15:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:24.363 15:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:24.363 15:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.363 15:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.363 15:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.363 15:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:24.363 15:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:24.363 15:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:27:24.363 15:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:24.363 15:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:24.363 15:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:24.363 15:23:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:24.363 15:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGY3NThkZWQ2ZThiMWNmNWVkMjUzZTgxMjhhOGQyN2LoxVMu: 00:27:24.363 15:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Mjc0NmRkYmQyNzU0ZjE1ZWIxZDgyYjdlY2FkMDE1Yzk3ZWQ3NTFhNDQ0OGU0MTVjODFmOTc2MTE3ODlkODBjOZWwAVE=: 00:27:24.363 15:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:24.363 15:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:24.363 15:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGY3NThkZWQ2ZThiMWNmNWVkMjUzZTgxMjhhOGQyN2LoxVMu: 00:27:24.363 15:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Mjc0NmRkYmQyNzU0ZjE1ZWIxZDgyYjdlY2FkMDE1Yzk3ZWQ3NTFhNDQ0OGU0MTVjODFmOTc2MTE3ODlkODBjOZWwAVE=: ]] 00:27:24.363 15:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Mjc0NmRkYmQyNzU0ZjE1ZWIxZDgyYjdlY2FkMDE1Yzk3ZWQ3NTFhNDQ0OGU0MTVjODFmOTc2MTE3ODlkODBjOZWwAVE=: 00:27:24.363 15:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:27:24.363 15:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:24.363 15:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:24.363 15:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:24.363 15:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:24.363 15:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:24.363 15:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:24.363 15:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.363 15:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.363 15:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.363 15:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:24.363 15:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:24.363 15:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:24.363 15:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:24.363 15:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:24.363 15:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:24.363 15:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:24.363 15:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:24.363 15:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:24.363 15:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:24.363 15:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:24.363 15:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:24.363 15:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.363 15:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.936 nvme0n1 00:27:24.936 15:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:27:24.936 15:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:24.936 15:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:24.936 15:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.936 15:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.936 15:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.197 15:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:25.197 15:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:25.197 15:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.197 15:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.197 15:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.197 15:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:25.197 15:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:27:25.197 15:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:25.197 15:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:25.197 15:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:25.197 15:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:25.197 15:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWI5ODQxMzcxZDVmODlmMWVjOTBmMmM4YzMzNzVkMmQwYWIxOGM1MWFkOTlmY2I3dOOAPg==: 00:27:25.197 15:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:YjAwMWM3ZWY0NDU5MjZiMmI2NTU0NDQxMWE2NWQ0NTU5YmNlZjI4MjFhZjZmYWQwblMWtQ==: 00:27:25.197 15:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:25.197 15:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:25.197 15:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWI5ODQxMzcxZDVmODlmMWVjOTBmMmM4YzMzNzVkMmQwYWIxOGM1MWFkOTlmY2I3dOOAPg==: 00:27:25.198 15:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjAwMWM3ZWY0NDU5MjZiMmI2NTU0NDQxMWE2NWQ0NTU5YmNlZjI4MjFhZjZmYWQwblMWtQ==: ]] 00:27:25.198 15:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjAwMWM3ZWY0NDU5MjZiMmI2NTU0NDQxMWE2NWQ0NTU5YmNlZjI4MjFhZjZmYWQwblMWtQ==: 00:27:25.198 15:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:27:25.198 15:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:25.198 15:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:25.198 15:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:25.198 15:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:25.198 15:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:25.198 15:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:25.198 15:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.198 15:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.198 15:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.198 15:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:27:25.198 15:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:25.198 15:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:25.198 15:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:25.198 15:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:25.198 15:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:25.198 15:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:25.198 15:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:25.198 15:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:25.198 15:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:25.198 15:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:25.198 15:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:25.198 15:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.198 15:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.771 nvme0n1 00:27:25.771 15:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.771 15:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:25.771 15:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:25.771 15:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 
00:27:25.771 15:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.771 15:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.032 15:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:26.032 15:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:26.032 15:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.032 15:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.032 15:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.032 15:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:26.032 15:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:27:26.032 15:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:26.032 15:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:26.032 15:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:26.032 15:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:26.032 15:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjBkOWRlMTRiOTlhNzgyY2ZmZWMxMTZiOTJmZGUyMTABRUtg: 00:27:26.032 15:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGYxZTRhYzUwM2Y2NTUwYWM0YjQ0NjhiNGU2MjQ3ZGS9W9fL: 00:27:26.032 15:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:26.032 15:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:26.032 15:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:MjBkOWRlMTRiOTlhNzgyY2ZmZWMxMTZiOTJmZGUyMTABRUtg: 00:27:26.032 15:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGYxZTRhYzUwM2Y2NTUwYWM0YjQ0NjhiNGU2MjQ3ZGS9W9fL: ]] 00:27:26.032 15:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGYxZTRhYzUwM2Y2NTUwYWM0YjQ0NjhiNGU2MjQ3ZGS9W9fL: 00:27:26.032 15:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:27:26.032 15:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:26.033 15:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:26.033 15:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:26.033 15:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:26.033 15:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:26.033 15:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:26.033 15:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.033 15:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.033 15:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.033 15:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:26.033 15:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:26.033 15:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:26.033 15:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:26.033 15:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:26.033 15:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:26.033 15:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:26.033 15:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:26.033 15:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:26.033 15:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:26.033 15:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:26.033 15:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:26.033 15:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.033 15:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.605 nvme0n1 00:27:26.605 15:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.605 15:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:26.605 15:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.605 15:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:26.605 15:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.605 15:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.866 15:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:26.866 15:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:27:26.866 15:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.866 15:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.866 15:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.866 15:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:26.866 15:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:27:26.866 15:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:26.866 15:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:26.866 15:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:26.866 15:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:26.866 15:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWU0M2YwNzhiZTkzODE4YTRlMWFlYmY5OTUwYjhlODM3YTY0ODAyZTk2Nzc3NjAwc68UWw==: 00:27:26.866 15:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmM3YTVlMTI3NDUwNDc1MmE2NDBiNmUxYTUwYzU1NjPzO+GX: 00:27:26.866 15:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:26.866 15:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:26.866 15:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWU0M2YwNzhiZTkzODE4YTRlMWFlYmY5OTUwYjhlODM3YTY0ODAyZTk2Nzc3NjAwc68UWw==: 00:27:26.866 15:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmM3YTVlMTI3NDUwNDc1MmE2NDBiNmUxYTUwYzU1NjPzO+GX: ]] 00:27:26.866 15:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmM3YTVlMTI3NDUwNDc1MmE2NDBiNmUxYTUwYzU1NjPzO+GX: 00:27:26.866 15:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:27:26.867 15:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:26.867 15:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:26.867 15:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:26.867 15:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:26.867 15:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:26.867 15:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:26.867 15:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.867 15:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.867 15:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.867 15:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:26.867 15:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:26.867 15:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:26.867 15:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:26.867 15:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:26.867 15:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:26.867 15:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:26.867 15:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:26.867 15:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 
-- # ip=NVMF_INITIATOR_IP 00:27:26.867 15:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:26.867 15:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:26.867 15:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:26.867 15:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.867 15:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.437 nvme0n1 00:27:27.437 15:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.437 15:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:27.437 15:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:27.437 15:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.437 15:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.438 15:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.699 15:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:27.699 15:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:27.699 15:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.699 15:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.699 15:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.699 15:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:27:27.699 15:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:27:27.699 15:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:27.700 15:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:27.700 15:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:27.700 15:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:27.700 15:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTMyMGUzOGI2MGMxODg5ZTk1NWU4NGNhNDA1NmFjY2M2OTVmMDBjNzY1OTlkMzhlY2E4MGEyNjRmZDE0ZTUwYimH2Zc=: 00:27:27.700 15:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:27.700 15:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:27.700 15:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:27.700 15:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTMyMGUzOGI2MGMxODg5ZTk1NWU4NGNhNDA1NmFjY2M2OTVmMDBjNzY1OTlkMzhlY2E4MGEyNjRmZDE0ZTUwYimH2Zc=: 00:27:27.700 15:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:27.700 15:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:27:27.700 15:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:27.700 15:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:27.700 15:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:27.700 15:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:27.700 15:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:27.700 15:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:27.700 15:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.700 15:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.700 15:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.700 15:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:27.700 15:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:27.700 15:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:27.700 15:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:27.700 15:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:27.700 15:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:27.700 15:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:27.700 15:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:27.700 15:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:27.700 15:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:27.700 15:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:27.700 15:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:27.700 15:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.700 15:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:28.273 nvme0n1 00:27:28.273 15:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.273 15:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:28.273 15:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:28.273 15:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.273 15:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.273 15:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.535 15:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:28.535 15:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:28.535 15:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.535 15:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.535 15:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.535 15:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:28.535 15:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:28.535 15:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:28.535 15:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:27:28.535 15:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:28.535 15:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:28.535 15:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 
00:27:28.535 15:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:28.535 15:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGY3NThkZWQ2ZThiMWNmNWVkMjUzZTgxMjhhOGQyN2LoxVMu: 00:27:28.535 15:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Mjc0NmRkYmQyNzU0ZjE1ZWIxZDgyYjdlY2FkMDE1Yzk3ZWQ3NTFhNDQ0OGU0MTVjODFmOTc2MTE3ODlkODBjOZWwAVE=: 00:27:28.535 15:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:28.535 15:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:28.535 15:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGY3NThkZWQ2ZThiMWNmNWVkMjUzZTgxMjhhOGQyN2LoxVMu: 00:27:28.535 15:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Mjc0NmRkYmQyNzU0ZjE1ZWIxZDgyYjdlY2FkMDE1Yzk3ZWQ3NTFhNDQ0OGU0MTVjODFmOTc2MTE3ODlkODBjOZWwAVE=: ]] 00:27:28.535 15:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Mjc0NmRkYmQyNzU0ZjE1ZWIxZDgyYjdlY2FkMDE1Yzk3ZWQ3NTFhNDQ0OGU0MTVjODFmOTc2MTE3ODlkODBjOZWwAVE=: 00:27:28.535 15:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:27:28.535 15:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:28.535 15:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:28.535 15:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:28.535 15:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:28.535 15:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:28.535 15:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:28.535 15:23:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.535 15:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.535 15:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.535 15:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:28.535 15:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:28.535 15:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:28.535 15:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:28.535 15:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:28.535 15:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:28.535 15:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:28.535 15:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:28.536 15:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:28.536 15:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:28.536 15:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:28.536 15:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:28.536 15:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.536 15:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.536 nvme0n1 00:27:28.536 15:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.536 15:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:28.536 15:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.536 15:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:28.536 15:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.536 15:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.536 15:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:28.536 15:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:28.536 15:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.536 15:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.797 15:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.797 15:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:28.797 15:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:27:28.797 15:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:28.797 15:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:28.797 15:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:28.797 15:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:28.797 15:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWI5ODQxMzcxZDVmODlmMWVjOTBmMmM4YzMzNzVkMmQwYWIxOGM1MWFkOTlmY2I3dOOAPg==: 00:27:28.797 15:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:YjAwMWM3ZWY0NDU5MjZiMmI2NTU0NDQxMWE2NWQ0NTU5YmNlZjI4MjFhZjZmYWQwblMWtQ==: 00:27:28.797 15:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:28.797 15:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:28.797 15:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWI5ODQxMzcxZDVmODlmMWVjOTBmMmM4YzMzNzVkMmQwYWIxOGM1MWFkOTlmY2I3dOOAPg==: 00:27:28.797 15:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjAwMWM3ZWY0NDU5MjZiMmI2NTU0NDQxMWE2NWQ0NTU5YmNlZjI4MjFhZjZmYWQwblMWtQ==: ]] 00:27:28.797 15:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjAwMWM3ZWY0NDU5MjZiMmI2NTU0NDQxMWE2NWQ0NTU5YmNlZjI4MjFhZjZmYWQwblMWtQ==: 00:27:28.797 15:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:27:28.797 15:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:28.797 15:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:28.797 15:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:28.797 15:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:28.797 15:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:28.797 15:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:28.797 15:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.797 15:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.797 15:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.797 15:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:27:28.797 15:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:28.797 15:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:28.797 15:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:28.797 15:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:28.797 15:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:28.797 15:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:28.797 15:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:28.797 15:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:28.797 15:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:28.797 15:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:28.797 15:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:28.797 15:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.797 15:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.797 nvme0n1 00:27:28.797 15:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.797 15:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:28.797 15:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:28.797 15:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 
00:27:28.797 15:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.797 15:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.797 15:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:28.797 15:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:28.797 15:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.797 15:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.797 15:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.797 15:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:28.797 15:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:27:28.797 15:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:28.798 15:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:28.798 15:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:28.798 15:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:28.798 15:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjBkOWRlMTRiOTlhNzgyY2ZmZWMxMTZiOTJmZGUyMTABRUtg: 00:27:28.798 15:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGYxZTRhYzUwM2Y2NTUwYWM0YjQ0NjhiNGU2MjQ3ZGS9W9fL: 00:27:28.798 15:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:28.798 15:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:28.798 15:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:MjBkOWRlMTRiOTlhNzgyY2ZmZWMxMTZiOTJmZGUyMTABRUtg:
00:27:28.798 15:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGYxZTRhYzUwM2Y2NTUwYWM0YjQ0NjhiNGU2MjQ3ZGS9W9fL: ]]
00:27:28.798 15:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGYxZTRhYzUwM2Y2NTUwYWM0YjQ0NjhiNGU2MjQ3ZGS9W9fL:
00:27:28.798 15:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2
00:27:28.798 15:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:28.798 15:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:27:28.798 15:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:27:28.798 15:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:27:28.798 15:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:28.798 15:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:27:28.798 15:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:28.798 15:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:28.798 15:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:28.798 15:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:28.798 15:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:27:28.798 15:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:27:28.798 15:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:27:28.798 15:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:28.798 15:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:28.798 15:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:27:28.798 15:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:28.798 15:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:27:28.798 15:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:27:28.798 15:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:27:28.798 15:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:27:28.798 15:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:28.798 15:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:29.058 nvme0n1
00:27:29.058 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:29.058 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:29.058 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:29.058 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:29.059 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:29.059 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:29.059 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:29.059 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:29.059 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:29.059 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:29.059 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:29.059 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:29.059 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3
00:27:29.059 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:29.059 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:27:29.059 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:27:29.059 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:27:29.059 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWU0M2YwNzhiZTkzODE4YTRlMWFlYmY5OTUwYjhlODM3YTY0ODAyZTk2Nzc3NjAwc68UWw==:
00:27:29.059 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmM3YTVlMTI3NDUwNDc1MmE2NDBiNmUxYTUwYzU1NjPzO+GX:
00:27:29.059 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:27:29.059 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:27:29.059 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWU0M2YwNzhiZTkzODE4YTRlMWFlYmY5OTUwYjhlODM3YTY0ODAyZTk2Nzc3NjAwc68UWw==:
00:27:29.059 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmM3YTVlMTI3NDUwNDc1MmE2NDBiNmUxYTUwYzU1NjPzO+GX: ]]
00:27:29.059 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmM3YTVlMTI3NDUwNDc1MmE2NDBiNmUxYTUwYzU1NjPzO+GX:
00:27:29.059 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3
00:27:29.059 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:29.059 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:27:29.059 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:27:29.059 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:27:29.059 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:29.059 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:27:29.059 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:29.059 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:29.059 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:29.059 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:29.059 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:27:29.059 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:27:29.059 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:27:29.059 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:29.059 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:29.059 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:27:29.059 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:29.059 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:27:29.059 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:27:29.059 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:27:29.059 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:27:29.059 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:29.059 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:29.321 nvme0n1
00:27:29.321 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:29.321 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:29.321 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:29.321 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:29.321 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:29.321 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:29.321 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:29.321 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:29.321 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:29.321 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:29.321 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:29.321 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:29.321 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4
00:27:29.321 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:29.321 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:27:29.321 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:27:29.321 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:27:29.321 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTMyMGUzOGI2MGMxODg5ZTk1NWU4NGNhNDA1NmFjY2M2OTVmMDBjNzY1OTlkMzhlY2E4MGEyNjRmZDE0ZTUwYimH2Zc=:
00:27:29.321 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:27:29.321 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:27:29.321 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:27:29.321 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTMyMGUzOGI2MGMxODg5ZTk1NWU4NGNhNDA1NmFjY2M2OTVmMDBjNzY1OTlkMzhlY2E4MGEyNjRmZDE0ZTUwYimH2Zc=:
00:27:29.321 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:27:29.321 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4
00:27:29.321 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:29.321 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:27:29.321 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:27:29.321 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:27:29.321 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:29.321 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:27:29.321 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:29.321 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:29.321 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:29.321 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:29.321 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:27:29.321 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:27:29.321 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:27:29.321 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:29.321 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:29.321 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:27:29.321 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:29.321 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:27:29.321 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:27:29.321 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:27:29.321 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:27:29.321 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:29.321 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:29.583 nvme0n1
00:27:29.583 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:29.583 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:29.583 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:29.583 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:29.583 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:29.583 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:29.583 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:29.583 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:29.583 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:29.583 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:29.583 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:29.583 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:27:29.583 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:29.583 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0
00:27:29.583 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:29.583 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:27:29.583 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:27:29.583 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:27:29.583 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGY3NThkZWQ2ZThiMWNmNWVkMjUzZTgxMjhhOGQyN2LoxVMu:
00:27:29.583 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Mjc0NmRkYmQyNzU0ZjE1ZWIxZDgyYjdlY2FkMDE1Yzk3ZWQ3NTFhNDQ0OGU0MTVjODFmOTc2MTE3ODlkODBjOZWwAVE=:
00:27:29.583 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:27:29.583 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:27:29.583 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGY3NThkZWQ2ZThiMWNmNWVkMjUzZTgxMjhhOGQyN2LoxVMu:
00:27:29.583 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Mjc0NmRkYmQyNzU0ZjE1ZWIxZDgyYjdlY2FkMDE1Yzk3ZWQ3NTFhNDQ0OGU0MTVjODFmOTc2MTE3ODlkODBjOZWwAVE=: ]]
00:27:29.583 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Mjc0NmRkYmQyNzU0ZjE1ZWIxZDgyYjdlY2FkMDE1Yzk3ZWQ3NTFhNDQ0OGU0MTVjODFmOTc2MTE3ODlkODBjOZWwAVE=:
00:27:29.583 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0
00:27:29.583 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:29.583 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:27:29.583 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:27:29.583 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:27:29.583 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:29.583 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:27:29.583 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:29.583 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:29.583 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:29.583 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:29.583 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:27:29.583 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:27:29.583 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:27:29.583 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:29.583 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:29.583 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:27:29.583 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:29.583 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:27:29.583 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:27:29.583 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:27:29.583 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:27:29.583 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:29.583 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:29.845 nvme0n1
00:27:29.845 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:29.845 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:29.845 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:29.845 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:29.845 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:29.845 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:29.845 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:29.845 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:29.845 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:29.845 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:29.845 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:29.845 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:29.845 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1
00:27:29.845 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:29.845 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:27:29.845 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:27:29.845 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:27:29.845 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWI5ODQxMzcxZDVmODlmMWVjOTBmMmM4YzMzNzVkMmQwYWIxOGM1MWFkOTlmY2I3dOOAPg==:
00:27:29.845 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjAwMWM3ZWY0NDU5MjZiMmI2NTU0NDQxMWE2NWQ0NTU5YmNlZjI4MjFhZjZmYWQwblMWtQ==:
00:27:29.845 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:27:29.845 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:27:29.845 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWI5ODQxMzcxZDVmODlmMWVjOTBmMmM4YzMzNzVkMmQwYWIxOGM1MWFkOTlmY2I3dOOAPg==:
00:27:29.845 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjAwMWM3ZWY0NDU5MjZiMmI2NTU0NDQxMWE2NWQ0NTU5YmNlZjI4MjFhZjZmYWQwblMWtQ==: ]]
00:27:29.845 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjAwMWM3ZWY0NDU5MjZiMmI2NTU0NDQxMWE2NWQ0NTU5YmNlZjI4MjFhZjZmYWQwblMWtQ==:
00:27:29.845 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1
00:27:29.845 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:29.845 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:27:29.845 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:27:29.845 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:27:29.845 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:29.845 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:27:29.845 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:29.845 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:29.845 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:29.845 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:29.845 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:27:29.845 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:27:29.845 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:27:29.845 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:29.845 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:29.845 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:27:29.845 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:29.845 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:27:29.845 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:27:29.845 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:27:29.845 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:27:29.845 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:29.845 15:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:30.107 nvme0n1
00:27:30.107 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:30.107 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:30.107 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:30.107 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:30.107 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:30.107 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:30.107 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:30.107 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:30.107 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:30.107 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:30.107 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:30.107 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:30.107 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2
00:27:30.107 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:30.107 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:27:30.107 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:27:30.107 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:27:30.107 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjBkOWRlMTRiOTlhNzgyY2ZmZWMxMTZiOTJmZGUyMTABRUtg:
00:27:30.107 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGYxZTRhYzUwM2Y2NTUwYWM0YjQ0NjhiNGU2MjQ3ZGS9W9fL:
00:27:30.107 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:27:30.107 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:27:30.107 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjBkOWRlMTRiOTlhNzgyY2ZmZWMxMTZiOTJmZGUyMTABRUtg:
00:27:30.107 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGYxZTRhYzUwM2Y2NTUwYWM0YjQ0NjhiNGU2MjQ3ZGS9W9fL: ]]
00:27:30.107 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGYxZTRhYzUwM2Y2NTUwYWM0YjQ0NjhiNGU2MjQ3ZGS9W9fL:
00:27:30.107 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2
00:27:30.107 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:30.107 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:27:30.107 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:27:30.107 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:27:30.107 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:30.107 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:27:30.107 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:30.107 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:30.107 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:30.107 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:30.107 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:27:30.107 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:27:30.107 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:27:30.107 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:30.107 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:30.107 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:27:30.107 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:30.107 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:27:30.107 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:27:30.107 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:27:30.107 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:27:30.107 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:30.107 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:30.370 nvme0n1
00:27:30.370 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:30.370 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:30.370 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:30.370 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:30.370 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:30.370 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:30.370 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:30.370 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:30.370 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:30.370 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:30.370 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:30.370 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:30.370 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3
00:27:30.370 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:30.370 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:27:30.370 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:27:30.370 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:27:30.370 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWU0M2YwNzhiZTkzODE4YTRlMWFlYmY5OTUwYjhlODM3YTY0ODAyZTk2Nzc3NjAwc68UWw==:
00:27:30.370 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmM3YTVlMTI3NDUwNDc1MmE2NDBiNmUxYTUwYzU1NjPzO+GX:
00:27:30.370 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:27:30.370 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:27:30.370 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWU0M2YwNzhiZTkzODE4YTRlMWFlYmY5OTUwYjhlODM3YTY0ODAyZTk2Nzc3NjAwc68UWw==:
00:27:30.370 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmM3YTVlMTI3NDUwNDc1MmE2NDBiNmUxYTUwYzU1NjPzO+GX: ]]
00:27:30.370 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmM3YTVlMTI3NDUwNDc1MmE2NDBiNmUxYTUwYzU1NjPzO+GX:
00:27:30.370 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3
00:27:30.370 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:30.370 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:27:30.370 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:27:30.370 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:27:30.370 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:30.370 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:27:30.370 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:30.370 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:30.370 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:30.370 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:30.370 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:27:30.370 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:27:30.370 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:27:30.370 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:30.370 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:30.370 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:27:30.370 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:30.370 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:27:30.370 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:27:30.370 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:27:30.370 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:27:30.370 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:30.370 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:30.632 nvme0n1
00:27:30.632 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:30.632 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:30.632 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:30.632 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:30.632 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:30.632 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:30.632 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:30.632 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:30.632 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:30.632 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:30.632 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:30.632 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:30.632 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4
00:27:30.632 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:30.632 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:27:30.632 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:27:30.632 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:27:30.632 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTMyMGUzOGI2MGMxODg5ZTk1NWU4NGNhNDA1NmFjY2M2OTVmMDBjNzY1OTlkMzhlY2E4MGEyNjRmZDE0ZTUwYimH2Zc=:
00:27:30.632 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:27:30.632 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:27:30.632 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:27:30.632 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTMyMGUzOGI2MGMxODg5ZTk1NWU4NGNhNDA1NmFjY2M2OTVmMDBjNzY1OTlkMzhlY2E4MGEyNjRmZDE0ZTUwYimH2Zc=:
00:27:30.632 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:27:30.632 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4
00:27:30.632 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:30.632 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:27:30.632 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:27:30.632 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:27:30.632 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:30.632 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:27:30.632 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:30.632 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:30.632 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:30.632 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:30.632 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:27:30.632 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:27:30.632 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:27:30.632 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:30.632 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:30.632 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:27:30.632 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:30.632 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:27:30.632 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:27:30.632 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:27:30.632 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:27:30.633 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:30.633 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:30.894 nvme0n1
00:27:30.894
15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:30.894 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:30.895 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.895 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.895 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.895 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:30.895 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:30.895 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.895 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.895 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.895 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:30.895 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:30.895 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:27:30.895 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:30.895 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:30.895 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:30.895 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:30.895 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGY3NThkZWQ2ZThiMWNmNWVkMjUzZTgxMjhhOGQyN2LoxVMu: 00:27:30.895 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@46 -- # ckey=DHHC-1:03:Mjc0NmRkYmQyNzU0ZjE1ZWIxZDgyYjdlY2FkMDE1Yzk3ZWQ3NTFhNDQ0OGU0MTVjODFmOTc2MTE3ODlkODBjOZWwAVE=: 00:27:30.895 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:30.895 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:30.895 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGY3NThkZWQ2ZThiMWNmNWVkMjUzZTgxMjhhOGQyN2LoxVMu: 00:27:30.895 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Mjc0NmRkYmQyNzU0ZjE1ZWIxZDgyYjdlY2FkMDE1Yzk3ZWQ3NTFhNDQ0OGU0MTVjODFmOTc2MTE3ODlkODBjOZWwAVE=: ]] 00:27:30.895 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Mjc0NmRkYmQyNzU0ZjE1ZWIxZDgyYjdlY2FkMDE1Yzk3ZWQ3NTFhNDQ0OGU0MTVjODFmOTc2MTE3ODlkODBjOZWwAVE=: 00:27:30.895 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:27:30.895 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:30.895 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:30.895 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:30.895 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:30.895 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:30.895 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:30.895 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.895 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.895 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.895 
15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:30.895 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:30.895 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:30.895 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:30.895 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:30.895 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:30.895 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:30.895 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:30.895 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:30.895 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:30.895 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:30.895 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:30.895 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.895 15:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.157 nvme0n1 00:27:31.157 15:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.157 15:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:31.157 15:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.157 15:23:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:31.157 15:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.157 15:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.157 15:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:31.157 15:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:31.157 15:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.157 15:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.157 15:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.157 15:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:31.157 15:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:27:31.157 15:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:31.157 15:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:31.157 15:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:31.157 15:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:31.157 15:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWI5ODQxMzcxZDVmODlmMWVjOTBmMmM4YzMzNzVkMmQwYWIxOGM1MWFkOTlmY2I3dOOAPg==: 00:27:31.157 15:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjAwMWM3ZWY0NDU5MjZiMmI2NTU0NDQxMWE2NWQ0NTU5YmNlZjI4MjFhZjZmYWQwblMWtQ==: 00:27:31.157 15:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:31.157 15:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:31.157 
15:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWI5ODQxMzcxZDVmODlmMWVjOTBmMmM4YzMzNzVkMmQwYWIxOGM1MWFkOTlmY2I3dOOAPg==: 00:27:31.157 15:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjAwMWM3ZWY0NDU5MjZiMmI2NTU0NDQxMWE2NWQ0NTU5YmNlZjI4MjFhZjZmYWQwblMWtQ==: ]] 00:27:31.157 15:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjAwMWM3ZWY0NDU5MjZiMmI2NTU0NDQxMWE2NWQ0NTU5YmNlZjI4MjFhZjZmYWQwblMWtQ==: 00:27:31.157 15:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:27:31.157 15:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:31.157 15:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:31.157 15:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:31.157 15:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:31.157 15:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:31.157 15:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:31.157 15:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.157 15:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.157 15:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.157 15:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:31.157 15:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:31.157 15:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:31.157 15:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # 
local -A ip_candidates 00:27:31.157 15:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:31.157 15:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:31.157 15:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:31.157 15:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:31.157 15:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:31.157 15:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:31.157 15:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:31.419 15:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:31.419 15:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.419 15:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.681 nvme0n1 00:27:31.681 15:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.681 15:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:31.681 15:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:31.681 15:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.681 15:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.681 15:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.681 15:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == 
\n\v\m\e\0 ]] 00:27:31.681 15:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:31.681 15:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.681 15:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.681 15:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.681 15:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:31.681 15:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:27:31.681 15:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:31.681 15:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:31.681 15:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:31.681 15:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:31.681 15:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjBkOWRlMTRiOTlhNzgyY2ZmZWMxMTZiOTJmZGUyMTABRUtg: 00:27:31.681 15:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGYxZTRhYzUwM2Y2NTUwYWM0YjQ0NjhiNGU2MjQ3ZGS9W9fL: 00:27:31.681 15:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:31.681 15:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:31.681 15:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjBkOWRlMTRiOTlhNzgyY2ZmZWMxMTZiOTJmZGUyMTABRUtg: 00:27:31.681 15:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGYxZTRhYzUwM2Y2NTUwYWM0YjQ0NjhiNGU2MjQ3ZGS9W9fL: ]] 00:27:31.681 15:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGYxZTRhYzUwM2Y2NTUwYWM0YjQ0NjhiNGU2MjQ3ZGS9W9fL: 
00:27:31.681 15:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:27:31.681 15:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:31.681 15:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:31.681 15:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:31.681 15:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:31.681 15:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:31.681 15:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:31.681 15:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.681 15:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.681 15:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.681 15:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:31.681 15:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:31.681 15:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:31.681 15:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:31.681 15:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:31.681 15:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:31.681 15:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:31.681 15:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:31.681 15:23:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:31.681 15:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:31.681 15:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:31.681 15:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:31.681 15:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.681 15:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.943 nvme0n1 00:27:31.943 15:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.943 15:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:31.943 15:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:31.943 15:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.943 15:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.943 15:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.943 15:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:31.943 15:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:31.943 15:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.943 15:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.943 15:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.943 15:23:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:31.943 15:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:27:31.943 15:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:31.943 15:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:31.943 15:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:31.943 15:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:31.943 15:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWU0M2YwNzhiZTkzODE4YTRlMWFlYmY5OTUwYjhlODM3YTY0ODAyZTk2Nzc3NjAwc68UWw==: 00:27:31.943 15:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmM3YTVlMTI3NDUwNDc1MmE2NDBiNmUxYTUwYzU1NjPzO+GX: 00:27:31.943 15:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:31.943 15:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:31.943 15:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWU0M2YwNzhiZTkzODE4YTRlMWFlYmY5OTUwYjhlODM3YTY0ODAyZTk2Nzc3NjAwc68UWw==: 00:27:31.943 15:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmM3YTVlMTI3NDUwNDc1MmE2NDBiNmUxYTUwYzU1NjPzO+GX: ]] 00:27:31.943 15:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmM3YTVlMTI3NDUwNDc1MmE2NDBiNmUxYTUwYzU1NjPzO+GX: 00:27:31.943 15:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:27:31.943 15:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:31.943 15:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:31.943 15:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 
00:27:31.943 15:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:31.943 15:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:31.943 15:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:31.943 15:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.943 15:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.943 15:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.943 15:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:31.943 15:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:31.943 15:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:31.943 15:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:31.943 15:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:31.943 15:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:31.943 15:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:31.943 15:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:31.943 15:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:31.943 15:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:31.943 15:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:31.943 15:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:31.943 15:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.943 15:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.205 nvme0n1 00:27:32.205 15:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.205 15:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:32.205 15:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.205 15:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:32.205 15:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.205 15:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.205 15:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:32.205 15:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:32.205 15:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.205 15:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.205 15:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.205 15:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:32.205 15:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:27:32.205 15:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:32.205 15:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:32.205 15:23:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:32.205 15:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:32.205 15:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTMyMGUzOGI2MGMxODg5ZTk1NWU4NGNhNDA1NmFjY2M2OTVmMDBjNzY1OTlkMzhlY2E4MGEyNjRmZDE0ZTUwYimH2Zc=: 00:27:32.205 15:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:32.205 15:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:32.205 15:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:32.205 15:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTMyMGUzOGI2MGMxODg5ZTk1NWU4NGNhNDA1NmFjY2M2OTVmMDBjNzY1OTlkMzhlY2E4MGEyNjRmZDE0ZTUwYimH2Zc=: 00:27:32.205 15:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:32.205 15:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:27:32.205 15:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:32.205 15:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:32.205 15:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:32.205 15:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:32.205 15:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:32.205 15:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:32.205 15:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.205 15:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.205 15:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.205 15:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:32.205 15:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:32.205 15:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:32.205 15:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:32.205 15:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:32.205 15:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:32.205 15:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:32.205 15:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:32.205 15:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:32.205 15:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:32.205 15:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:32.205 15:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:32.205 15:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.205 15:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.467 nvme0n1 00:27:32.467 15:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.467 15:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:32.467 15:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:32.467 
15:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.467 15:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.467 15:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.729 15:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:32.729 15:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:32.729 15:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.729 15:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.729 15:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.729 15:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:32.729 15:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:32.729 15:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:27:32.729 15:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:32.729 15:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:32.729 15:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:32.729 15:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:32.729 15:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGY3NThkZWQ2ZThiMWNmNWVkMjUzZTgxMjhhOGQyN2LoxVMu: 00:27:32.729 15:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Mjc0NmRkYmQyNzU0ZjE1ZWIxZDgyYjdlY2FkMDE1Yzk3ZWQ3NTFhNDQ0OGU0MTVjODFmOTc2MTE3ODlkODBjOZWwAVE=: 00:27:32.729 15:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # 
echo 'hmac(sha512)' 00:27:32.729 15:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:32.729 15:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGY3NThkZWQ2ZThiMWNmNWVkMjUzZTgxMjhhOGQyN2LoxVMu: 00:27:32.729 15:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Mjc0NmRkYmQyNzU0ZjE1ZWIxZDgyYjdlY2FkMDE1Yzk3ZWQ3NTFhNDQ0OGU0MTVjODFmOTc2MTE3ODlkODBjOZWwAVE=: ]] 00:27:32.729 15:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Mjc0NmRkYmQyNzU0ZjE1ZWIxZDgyYjdlY2FkMDE1Yzk3ZWQ3NTFhNDQ0OGU0MTVjODFmOTc2MTE3ODlkODBjOZWwAVE=: 00:27:32.729 15:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:27:32.729 15:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:32.729 15:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:32.729 15:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:32.729 15:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:32.729 15:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:32.729 15:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:32.729 15:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.729 15:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.729 15:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.729 15:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:32.729 15:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:32.729 15:23:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:32.729 15:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:32.729 15:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:32.729 15:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:32.729 15:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:32.729 15:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:32.729 15:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:32.729 15:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:32.729 15:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:32.729 15:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:32.729 15:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.729 15:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.990 nvme0n1 00:27:32.990 15:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.990 15:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:32.990 15:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:32.990 15:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.990 15:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.251 15:23:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.251 15:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:33.251 15:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:33.251 15:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.251 15:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.251 15:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.251 15:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:33.251 15:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:27:33.251 15:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:33.251 15:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:33.251 15:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:33.251 15:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:33.251 15:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWI5ODQxMzcxZDVmODlmMWVjOTBmMmM4YzMzNzVkMmQwYWIxOGM1MWFkOTlmY2I3dOOAPg==: 00:27:33.251 15:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjAwMWM3ZWY0NDU5MjZiMmI2NTU0NDQxMWE2NWQ0NTU5YmNlZjI4MjFhZjZmYWQwblMWtQ==: 00:27:33.251 15:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:33.251 15:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:33.251 15:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWI5ODQxMzcxZDVmODlmMWVjOTBmMmM4YzMzNzVkMmQwYWIxOGM1MWFkOTlmY2I3dOOAPg==: 00:27:33.251 15:23:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjAwMWM3ZWY0NDU5MjZiMmI2NTU0NDQxMWE2NWQ0NTU5YmNlZjI4MjFhZjZmYWQwblMWtQ==: ]] 00:27:33.251 15:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjAwMWM3ZWY0NDU5MjZiMmI2NTU0NDQxMWE2NWQ0NTU5YmNlZjI4MjFhZjZmYWQwblMWtQ==: 00:27:33.251 15:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:27:33.251 15:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:33.251 15:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:33.251 15:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:33.251 15:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:33.251 15:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:33.251 15:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:33.251 15:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.251 15:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.251 15:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.251 15:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:33.251 15:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:33.251 15:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:33.251 15:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:33.251 15:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:33.251 15:23:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:33.251 15:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:33.251 15:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:33.251 15:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:33.251 15:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:33.252 15:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:33.252 15:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:33.252 15:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.252 15:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.824 nvme0n1 00:27:33.824 15:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.824 15:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:33.824 15:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:33.824 15:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.824 15:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.824 15:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.824 15:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:33.824 15:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:33.824 15:23:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.824 15:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.824 15:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.824 15:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:33.824 15:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:27:33.824 15:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:33.824 15:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:33.824 15:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:33.824 15:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:33.824 15:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjBkOWRlMTRiOTlhNzgyY2ZmZWMxMTZiOTJmZGUyMTABRUtg: 00:27:33.824 15:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGYxZTRhYzUwM2Y2NTUwYWM0YjQ0NjhiNGU2MjQ3ZGS9W9fL: 00:27:33.824 15:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:33.824 15:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:33.824 15:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjBkOWRlMTRiOTlhNzgyY2ZmZWMxMTZiOTJmZGUyMTABRUtg: 00:27:33.824 15:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGYxZTRhYzUwM2Y2NTUwYWM0YjQ0NjhiNGU2MjQ3ZGS9W9fL: ]] 00:27:33.824 15:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGYxZTRhYzUwM2Y2NTUwYWM0YjQ0NjhiNGU2MjQ3ZGS9W9fL: 00:27:33.824 15:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:27:33.824 15:23:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:33.824 15:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:33.824 15:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:33.824 15:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:33.824 15:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:33.824 15:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:33.824 15:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.824 15:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.824 15:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.824 15:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:33.824 15:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:33.824 15:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:33.824 15:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:33.824 15:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:33.824 15:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:33.824 15:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:33.824 15:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:33.824 15:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:33.824 15:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:33.824 15:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:33.825 15:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:33.825 15:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.825 15:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.086 nvme0n1 00:27:34.086 15:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.086 15:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:34.086 15:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.086 15:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:34.086 15:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.086 15:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.351 15:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:34.351 15:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:34.351 15:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.351 15:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.351 15:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.351 15:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:34.351 15:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:27:34.351 15:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:34.351 15:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:34.351 15:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:34.351 15:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:34.351 15:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWU0M2YwNzhiZTkzODE4YTRlMWFlYmY5OTUwYjhlODM3YTY0ODAyZTk2Nzc3NjAwc68UWw==: 00:27:34.351 15:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmM3YTVlMTI3NDUwNDc1MmE2NDBiNmUxYTUwYzU1NjPzO+GX: 00:27:34.351 15:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:34.351 15:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:34.351 15:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWU0M2YwNzhiZTkzODE4YTRlMWFlYmY5OTUwYjhlODM3YTY0ODAyZTk2Nzc3NjAwc68UWw==: 00:27:34.351 15:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmM3YTVlMTI3NDUwNDc1MmE2NDBiNmUxYTUwYzU1NjPzO+GX: ]] 00:27:34.351 15:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmM3YTVlMTI3NDUwNDc1MmE2NDBiNmUxYTUwYzU1NjPzO+GX: 00:27:34.351 15:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:27:34.351 15:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:34.351 15:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:34.351 15:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:34.351 15:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:34.351 15:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:34.351 15:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:34.351 15:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.351 15:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.351 15:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.351 15:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:34.351 15:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:34.351 15:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:34.351 15:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:34.351 15:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:34.351 15:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:34.351 15:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:34.351 15:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:34.351 15:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:34.351 15:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:34.351 15:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:34.351 15:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:34.351 15:23:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.351 15:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.653 nvme0n1 00:27:34.653 15:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.653 15:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:34.653 15:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:34.653 15:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.653 15:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.653 15:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.653 15:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:34.653 15:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:34.653 15:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.653 15:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.914 15:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.914 15:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:34.914 15:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:27:34.914 15:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:34.914 15:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:34.914 15:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:34.914 15:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=4 00:27:34.914 15:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTMyMGUzOGI2MGMxODg5ZTk1NWU4NGNhNDA1NmFjY2M2OTVmMDBjNzY1OTlkMzhlY2E4MGEyNjRmZDE0ZTUwYimH2Zc=: 00:27:34.914 15:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:34.914 15:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:34.914 15:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:34.914 15:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTMyMGUzOGI2MGMxODg5ZTk1NWU4NGNhNDA1NmFjY2M2OTVmMDBjNzY1OTlkMzhlY2E4MGEyNjRmZDE0ZTUwYimH2Zc=: 00:27:34.914 15:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:34.914 15:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:27:34.914 15:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:34.914 15:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:34.914 15:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:34.914 15:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:34.914 15:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:34.914 15:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:34.914 15:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.914 15:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.914 15:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.914 15:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:34.914 
15:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:34.914 15:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:34.914 15:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:34.914 15:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:34.914 15:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:34.915 15:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:34.915 15:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:34.915 15:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:34.915 15:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:34.915 15:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:34.915 15:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:34.915 15:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.915 15:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.175 nvme0n1 00:27:35.175 15:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.175 15:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:35.175 15:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:35.175 15:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.175 15:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:35.176 15:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.436 15:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:35.436 15:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:35.436 15:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.436 15:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.436 15:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.436 15:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:35.436 15:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:35.436 15:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:27:35.436 15:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:35.436 15:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:35.436 15:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:35.436 15:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:35.436 15:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGY3NThkZWQ2ZThiMWNmNWVkMjUzZTgxMjhhOGQyN2LoxVMu: 00:27:35.436 15:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Mjc0NmRkYmQyNzU0ZjE1ZWIxZDgyYjdlY2FkMDE1Yzk3ZWQ3NTFhNDQ0OGU0MTVjODFmOTc2MTE3ODlkODBjOZWwAVE=: 00:27:35.436 15:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:35.436 15:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:35.436 15:23:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGY3NThkZWQ2ZThiMWNmNWVkMjUzZTgxMjhhOGQyN2LoxVMu: 00:27:35.436 15:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Mjc0NmRkYmQyNzU0ZjE1ZWIxZDgyYjdlY2FkMDE1Yzk3ZWQ3NTFhNDQ0OGU0MTVjODFmOTc2MTE3ODlkODBjOZWwAVE=: ]] 00:27:35.436 15:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Mjc0NmRkYmQyNzU0ZjE1ZWIxZDgyYjdlY2FkMDE1Yzk3ZWQ3NTFhNDQ0OGU0MTVjODFmOTc2MTE3ODlkODBjOZWwAVE=: 00:27:35.436 15:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:27:35.436 15:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:35.436 15:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:35.436 15:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:35.436 15:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:35.436 15:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:35.436 15:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:35.436 15:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.436 15:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.436 15:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.436 15:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:35.436 15:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:35.436 15:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:35.436 15:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 
-- # local -A ip_candidates 00:27:35.436 15:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:35.436 15:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:35.436 15:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:35.436 15:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:35.436 15:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:35.436 15:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:35.436 15:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:35.436 15:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:35.436 15:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.436 15:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.008 nvme0n1 00:27:36.008 15:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.008 15:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:36.008 15:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.008 15:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:36.008 15:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.008 15:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.270 15:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:27:36.270 15:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:36.270 15:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.270 15:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.270 15:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.270 15:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:36.270 15:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:27:36.270 15:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:36.270 15:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:36.270 15:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:36.270 15:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:36.270 15:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWI5ODQxMzcxZDVmODlmMWVjOTBmMmM4YzMzNzVkMmQwYWIxOGM1MWFkOTlmY2I3dOOAPg==: 00:27:36.270 15:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjAwMWM3ZWY0NDU5MjZiMmI2NTU0NDQxMWE2NWQ0NTU5YmNlZjI4MjFhZjZmYWQwblMWtQ==: 00:27:36.270 15:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:36.270 15:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:36.270 15:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWI5ODQxMzcxZDVmODlmMWVjOTBmMmM4YzMzNzVkMmQwYWIxOGM1MWFkOTlmY2I3dOOAPg==: 00:27:36.270 15:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjAwMWM3ZWY0NDU5MjZiMmI2NTU0NDQxMWE2NWQ0NTU5YmNlZjI4MjFhZjZmYWQwblMWtQ==: ]] 00:27:36.270 15:23:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjAwMWM3ZWY0NDU5MjZiMmI2NTU0NDQxMWE2NWQ0NTU5YmNlZjI4MjFhZjZmYWQwblMWtQ==: 00:27:36.270 15:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:27:36.270 15:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:36.270 15:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:36.270 15:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:36.270 15:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:36.270 15:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:36.270 15:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:36.270 15:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.270 15:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.270 15:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.270 15:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:36.270 15:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:36.270 15:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:36.270 15:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:36.270 15:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:36.270 15:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:36.270 15:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # 
[[ -z tcp ]] 00:27:36.270 15:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:36.270 15:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:36.270 15:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:36.270 15:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:36.270 15:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:36.270 15:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.270 15:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.843 nvme0n1 00:27:36.843 15:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.843 15:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:36.843 15:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:36.843 15:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.843 15:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.843 15:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.843 15:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:36.843 15:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:36.843 15:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.843 15:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:27:37.104 15:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.104 15:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:37.104 15:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:27:37.104 15:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:37.105 15:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:37.105 15:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:37.105 15:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:37.105 15:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjBkOWRlMTRiOTlhNzgyY2ZmZWMxMTZiOTJmZGUyMTABRUtg: 00:27:37.105 15:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGYxZTRhYzUwM2Y2NTUwYWM0YjQ0NjhiNGU2MjQ3ZGS9W9fL: 00:27:37.105 15:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:37.105 15:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:37.105 15:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjBkOWRlMTRiOTlhNzgyY2ZmZWMxMTZiOTJmZGUyMTABRUtg: 00:27:37.105 15:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGYxZTRhYzUwM2Y2NTUwYWM0YjQ0NjhiNGU2MjQ3ZGS9W9fL: ]] 00:27:37.105 15:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGYxZTRhYzUwM2Y2NTUwYWM0YjQ0NjhiNGU2MjQ3ZGS9W9fL: 00:27:37.105 15:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:27:37.105 15:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:37.105 15:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:37.105 
15:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:37.105 15:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:37.105 15:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:37.105 15:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:37.105 15:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.105 15:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.105 15:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.105 15:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:37.105 15:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:37.105 15:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:37.105 15:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:37.105 15:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:37.105 15:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:37.105 15:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:37.105 15:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:37.105 15:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:37.105 15:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:37.105 15:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:37.105 15:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:37.105 15:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.105 15:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.678 nvme0n1 00:27:37.678 15:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.678 15:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:37.678 15:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:37.678 15:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.678 15:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.678 15:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.678 15:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:37.678 15:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:37.678 15:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.678 15:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.678 15:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.678 15:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:37.678 15:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:27:37.678 15:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:37.678 15:23:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:37.678 15:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:37.678 15:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:37.678 15:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWU0M2YwNzhiZTkzODE4YTRlMWFlYmY5OTUwYjhlODM3YTY0ODAyZTk2Nzc3NjAwc68UWw==: 00:27:37.678 15:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmM3YTVlMTI3NDUwNDc1MmE2NDBiNmUxYTUwYzU1NjPzO+GX: 00:27:37.678 15:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:37.678 15:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:37.678 15:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWU0M2YwNzhiZTkzODE4YTRlMWFlYmY5OTUwYjhlODM3YTY0ODAyZTk2Nzc3NjAwc68UWw==: 00:27:37.678 15:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmM3YTVlMTI3NDUwNDc1MmE2NDBiNmUxYTUwYzU1NjPzO+GX: ]] 00:27:37.678 15:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmM3YTVlMTI3NDUwNDc1MmE2NDBiNmUxYTUwYzU1NjPzO+GX: 00:27:37.678 15:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:27:37.678 15:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:37.678 15:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:37.678 15:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:37.678 15:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:37.678 15:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:37.678 15:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:37.678 15:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.678 15:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.678 15:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.940 15:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:37.940 15:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:37.940 15:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:37.940 15:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:37.940 15:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:37.940 15:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:37.940 15:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:37.940 15:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:37.940 15:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:37.940 15:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:37.940 15:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:37.940 15:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:37.940 15:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.940 15:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:27:38.512 nvme0n1 00:27:38.512 15:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.512 15:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:38.512 15:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:38.512 15:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.512 15:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.512 15:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.512 15:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:38.512 15:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:38.512 15:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.512 15:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.512 15:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.512 15:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:38.513 15:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:27:38.513 15:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:38.513 15:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:38.513 15:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:38.513 15:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:38.513 15:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZTMyMGUzOGI2MGMxODg5ZTk1NWU4NGNhNDA1NmFjY2M2OTVmMDBjNzY1OTlkMzhlY2E4MGEyNjRmZDE0ZTUwYimH2Zc=: 00:27:38.513 15:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:38.513 15:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:38.513 15:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:38.513 15:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTMyMGUzOGI2MGMxODg5ZTk1NWU4NGNhNDA1NmFjY2M2OTVmMDBjNzY1OTlkMzhlY2E4MGEyNjRmZDE0ZTUwYimH2Zc=: 00:27:38.513 15:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:38.513 15:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:27:38.513 15:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:38.513 15:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:38.513 15:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:38.513 15:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:38.513 15:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:38.513 15:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:38.513 15:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.513 15:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.513 15:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.773 15:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:38.773 15:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:38.773 
15:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:38.773 15:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:38.773 15:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:38.773 15:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:38.773 15:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:38.773 15:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:38.773 15:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:38.773 15:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:38.773 15:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:38.773 15:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:38.773 15:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.773 15:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.345 nvme0n1 00:27:39.345 15:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.345 15:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:39.345 15:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:39.345 15:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.345 15:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.345 15:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.345 15:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:39.345 15:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:39.345 15:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.345 15:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.345 15:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.345 15:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:39.345 15:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:39.345 15:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:39.345 15:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:39.345 15:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:39.345 15:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWI5ODQxMzcxZDVmODlmMWVjOTBmMmM4YzMzNzVkMmQwYWIxOGM1MWFkOTlmY2I3dOOAPg==: 00:27:39.345 15:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjAwMWM3ZWY0NDU5MjZiMmI2NTU0NDQxMWE2NWQ0NTU5YmNlZjI4MjFhZjZmYWQwblMWtQ==: 00:27:39.345 15:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:39.345 15:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:39.345 15:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWI5ODQxMzcxZDVmODlmMWVjOTBmMmM4YzMzNzVkMmQwYWIxOGM1MWFkOTlmY2I3dOOAPg==: 00:27:39.345 15:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjAwMWM3ZWY0NDU5MjZiMmI2NTU0NDQxMWE2NWQ0NTU5YmNlZjI4MjFhZjZmYWQwblMWtQ==: ]] 00:27:39.345 
15:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjAwMWM3ZWY0NDU5MjZiMmI2NTU0NDQxMWE2NWQ0NTU5YmNlZjI4MjFhZjZmYWQwblMWtQ==: 00:27:39.345 15:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:39.345 15:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.345 15:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.345 15:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.345 15:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:27:39.345 15:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:39.345 15:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:39.345 15:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:39.345 15:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:39.345 15:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:39.345 15:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:39.345 15:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:39.345 15:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:39.345 15:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:39.345 15:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:39.606 15:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 00:27:39.607 15:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:27:39.607 15:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:39.607 15:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:39.607 15:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:39.607 15:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:39.607 15:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:39.607 15:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:39.607 15:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.607 15:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.607 request: 00:27:39.607 { 00:27:39.607 "name": "nvme0", 00:27:39.607 "trtype": "tcp", 00:27:39.607 "traddr": "10.0.0.1", 00:27:39.607 "adrfam": "ipv4", 00:27:39.607 "trsvcid": "4420", 00:27:39.607 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:39.607 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:39.607 "prchk_reftag": false, 00:27:39.607 "prchk_guard": false, 00:27:39.607 "hdgst": false, 00:27:39.607 "ddgst": false, 00:27:39.607 "method": "bdev_nvme_attach_controller", 00:27:39.607 "req_id": 1 00:27:39.607 } 00:27:39.607 Got JSON-RPC error response 00:27:39.607 response: 00:27:39.607 { 00:27:39.607 "code": -5, 00:27:39.607 "message": "Input/output error" 00:27:39.607 } 00:27:39.607 15:23:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:39.607 15:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:27:39.607 15:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:39.607 15:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:39.607 15:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:39.607 15:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:27:39.607 15:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:27:39.607 15:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.607 15:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.607 15:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.607 15:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:27:39.607 15:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:27:39.607 15:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:39.607 15:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:39.607 15:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:39.607 15:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:39.607 15:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:39.607 15:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:39.607 15:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:39.607 15:23:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:39.607 15:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:39.607 15:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:39.607 15:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:39.607 15:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:27:39.607 15:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:39.607 15:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:39.607 15:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:39.607 15:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:39.607 15:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:39.607 15:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:39.607 15:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.607 15:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.607 request: 00:27:39.607 { 00:27:39.607 "name": "nvme0", 00:27:39.607 "trtype": "tcp", 00:27:39.607 "traddr": "10.0.0.1", 00:27:39.607 "adrfam": "ipv4", 00:27:39.607 
"trsvcid": "4420", 00:27:39.607 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:39.607 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:39.607 "prchk_reftag": false, 00:27:39.607 "prchk_guard": false, 00:27:39.607 "hdgst": false, 00:27:39.607 "ddgst": false, 00:27:39.607 "dhchap_key": "key2", 00:27:39.607 "method": "bdev_nvme_attach_controller", 00:27:39.607 "req_id": 1 00:27:39.607 } 00:27:39.607 Got JSON-RPC error response 00:27:39.607 response: 00:27:39.607 { 00:27:39.607 "code": -5, 00:27:39.607 "message": "Input/output error" 00:27:39.607 } 00:27:39.607 15:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:39.607 15:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:27:39.607 15:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:39.607 15:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:39.607 15:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:39.607 15:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:27:39.607 15:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:27:39.607 15:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.607 15:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.607 15:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.607 15:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:27:39.607 15:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:27:39.607 15:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:39.607 15:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:39.607 
15:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:39.607 15:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:39.607 15:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:39.607 15:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:39.607 15:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:39.607 15:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:39.607 15:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:39.607 15:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:39.607 15:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:39.607 15:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:27:39.607 15:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:39.607 15:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:39.607 15:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:39.607 15:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:39.607 15:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:39.607 15:23:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:39.607 15:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.607 15:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.868 request: 00:27:39.868 { 00:27:39.868 "name": "nvme0", 00:27:39.868 "trtype": "tcp", 00:27:39.868 "traddr": "10.0.0.1", 00:27:39.868 "adrfam": "ipv4", 00:27:39.868 "trsvcid": "4420", 00:27:39.868 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:39.868 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:39.868 "prchk_reftag": false, 00:27:39.868 "prchk_guard": false, 00:27:39.868 "hdgst": false, 00:27:39.868 "ddgst": false, 00:27:39.868 "dhchap_key": "key1", 00:27:39.869 "dhchap_ctrlr_key": "ckey2", 00:27:39.869 "method": "bdev_nvme_attach_controller", 00:27:39.869 "req_id": 1 00:27:39.869 } 00:27:39.869 Got JSON-RPC error response 00:27:39.869 response: 00:27:39.869 { 00:27:39.869 "code": -5, 00:27:39.869 "message": "Input/output error" 00:27:39.869 } 00:27:39.869 15:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:39.869 15:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:27:39.869 15:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:39.869 15:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:39.869 15:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:39.869 15:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:27:39.869 15:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:27:39.869 15:23:31 
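The failing attach above is driven through the harness's `NOT` wrapper: the `bdev_nvme_attach_controller` call with a mismatched `--dhchap-ctrlr-key` is *expected* to fail, and the wrapper turns that failure into a pass (`es=1`, then `(( !es == 0 ))`). A minimal, self-contained sketch of that pattern, simplified from what the trace shows (the real helper in `autotest_common.sh` also validates the command type via `valid_exec_arg` and clamps signal exit codes):

```shell
# Simplified sketch of the NOT helper seen in the trace: run a command
# that is expected to fail, and report success only if it did fail.
NOT() {
    local es=0
    "$@" || es=$?        # capture the wrapped command's exit status
    (( es != 0 ))        # succeed only when the command failed
}

NOT false && echo "false failed, as expected"
NOT true  || echo "true succeeded, so NOT itself fails"
```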
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:27:39.869 15:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:39.869 15:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:27:39.869 15:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:39.869 15:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:27:39.869 15:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:39.869 15:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:39.869 rmmod nvme_tcp 00:27:39.869 rmmod nvme_fabrics 00:27:39.869 15:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:39.869 15:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:27:39.869 15:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:27:39.869 15:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 400618 ']' 00:27:39.869 15:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 400618 00:27:39.869 15:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@950 -- # '[' -z 400618 ']' 00:27:39.869 15:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # kill -0 400618 00:27:39.869 15:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # uname 00:27:39.869 15:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:39.869 15:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 400618 00:27:39.869 15:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:39.869 15:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = 
sudo ']' 00:27:39.869 15:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 400618' 00:27:39.869 killing process with pid 400618 00:27:39.869 15:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@969 -- # kill 400618 00:27:39.869 15:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@974 -- # wait 400618 00:27:40.130 15:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:40.130 15:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:40.130 15:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:40.130 15:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:40.130 15:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:40.130 15:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:40.130 15:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:40.130 15:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:42.044 15:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:42.044 15:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:27:42.044 15:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:42.044 15:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:27:42.045 15:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:27:42.045 
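The `killprocess 400618` sequence above follows a small reusable shape: confirm the PID is alive with `kill -0`, inspect its command name with `ps --no-headers -o comm=`, then signal and reap it. A runnable sketch of the same sequence against a throwaway `sleep` instead of the nvmf target (`pid_is_alive` is a hypothetical helper name, not from the trace):

```shell
# Sketch of the killprocess sequence: liveness check, name check,
# SIGTERM, then reap. A disposable sleep stands in for nvmf_tgt.
pid_is_alive() { kill -0 "$1" 2>/dev/null; }

sleep 30 &
pid=$!
pid_is_alive "$pid" && echo "process $pid is running"
echo "command name: $(ps --no-headers -o comm= "$pid")"
kill "$pid"
wait "$pid" 2>/dev/null || true   # reap; exit 143 (SIGTERM) is expected
pid_is_alive "$pid" || echo "process $pid is gone"
```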
15:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:27:42.045 15:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:42.045 15:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:42.045 15:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:42.045 15:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:42.045 15:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:27:42.045 15:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:27:42.045 15:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:45.351 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:27:45.351 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:27:45.351 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:27:45.351 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:27:45.351 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:27:45.351 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:27:45.351 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:27:45.351 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:27:45.351 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:27:45.351 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:27:45.351 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:27:45.351 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:27:45.351 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:27:45.611 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:27:45.611 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:27:45.611 0000:00:01.1 (8086 0b00): ioatdma -> 
vfio-pci 00:27:45.611 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:27:45.873 15:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.3WQ /tmp/spdk.key-null.UaU /tmp/spdk.key-sha256.h2X /tmp/spdk.key-sha384.7q8 /tmp/spdk.key-sha512.jK6 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:27:45.873 15:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:49.176 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:27:49.176 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:27:49.176 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:27:49.176 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:27:49.176 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:27:49.176 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:27:49.176 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:27:49.176 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:27:49.176 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:27:49.176 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:27:49.176 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:27:49.176 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:27:49.176 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:27:49.176 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:27:49.176 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:27:49.176 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:27:49.176 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:27:49.176 00:27:49.176 real 0m57.839s 00:27:49.176 user 0m52.123s 00:27:49.176 sys 0m14.632s 00:27:49.176 15:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:49.176 15:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:49.176 ************************************ 00:27:49.176 END TEST nvmf_auth_host 00:27:49.176 ************************************ 00:27:49.176 15:23:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:27:49.176 15:23:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:27:49.176 15:23:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:49.176 15:23:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:49.176 15:23:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.177 ************************************ 00:27:49.177 START TEST nvmf_digest 00:27:49.177 ************************************ 00:27:49.177 15:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:27:49.439 * Looking for test storage... 
00:27:49.439 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:49.439 15:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:49.439 15:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:27:49.439 15:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:49.439 15:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:49.439 15:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:49.439 15:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:49.439 15:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:49.439 15:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:49.439 15:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:49.439 15:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:49.439 15:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:49.439 15:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:49.439 15:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:49.439 15:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:49.439 15:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:49.439 15:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:49.439 15:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:27:49.439 15:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:49.439 15:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:49.439 15:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:49.439 15:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:49.439 15:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:49.439 15:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:49.439 15:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:49.439 15:23:41 
nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:49.439 15:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:27:49.439 15:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:49.439 15:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:27:49.439 15:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:49.439 15:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:49.439 15:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:49.439 15:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:49.439 15:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:49.439 15:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:49.439 15:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:49.439 15:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:49.439 15:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:27:49.439 15:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:27:49.439 15:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:27:49.440 15:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:27:49.440 15:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:27:49.440 15:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:49.440 15:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:49.440 15:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:49.440 15:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:49.440 15:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:49.440 15:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:49.440 15:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:49.440 15:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:49.440 15:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:49.440 15:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:49.440 15:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 
00:27:49.440 15:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:56.033 15:23:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:56.033 15:23:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:27:56.033 15:23:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:56.033 15:23:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:56.033 15:23:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:56.033 15:23:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:56.033 15:23:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:56.033 15:23:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:27:56.033 15:23:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:56.033 15:23:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:27:56.033 15:23:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:27:56.033 15:23:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:27:56.033 15:23:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:27:56.033 15:23:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:27:56.033 15:23:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:27:56.033 15:23:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:56.033 15:23:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:56.033 15:23:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:56.033 15:23:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:56.033 15:23:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:56.033 15:23:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:56.033 15:23:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:56.033 15:23:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:56.033 15:23:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:56.033 15:23:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:56.033 15:23:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:56.033 15:23:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:56.033 15:23:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:56.033 15:23:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:56.033 15:23:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:56.033 15:23:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:56.033 15:23:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:56.033 15:23:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:56.033 15:23:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:56.033 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:56.033 15:23:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:56.033 15:23:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 
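The `e810+=`/`x722+=`/`mlx+=` lines above bucket cached PCI vendor:device pairs into NIC families before driver matching. The same mapping can be sketched as a plain `case` statement; the IDs are copied from the trace, and the list is illustrative rather than exhaustive:

```shell
# Illustrative sketch: classify a PCI vendor:device pair into the NIC
# families built above (IDs taken from the trace, not a complete list).
classify_nic() {
    case "$1" in
        0x8086:0x1592|0x8086:0x159b) echo e810 ;;    # Intel E810 (ice)
        0x8086:0x37d2)               echo x722 ;;    # Intel X722
        0x15b3:*)                    echo mlx ;;     # Mellanox family
        *)                           echo unknown ;;
    esac
}

classify_nic 0x8086:0x159b   # → e810, the ports found later in the log
```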
00:27:56.033 15:23:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:56.033 15:23:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:56.033 15:23:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:56.033 15:23:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:56.033 15:23:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:56.033 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:56.033 15:23:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:56.033 15:23:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:56.033 15:23:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:56.033 15:23:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:56.033 15:23:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:56.033 15:23:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:56.033 15:23:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:56.033 15:23:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:56.033 15:23:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:56.033 15:23:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:56.033 15:23:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:56.033 15:23:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:56.033 15:23:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:56.033 15:23:48 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:56.033 15:23:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:56.033 15:23:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:56.033 Found net devices under 0000:4b:00.0: cvl_0_0 00:27:56.033 15:23:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:56.033 15:23:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:56.033 15:23:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:56.033 15:23:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:56.033 15:23:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:56.033 15:23:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:56.033 15:23:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:56.033 15:23:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:56.033 15:23:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:56.033 Found net devices under 0000:4b:00.1: cvl_0_1 00:27:56.033 15:23:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:56.033 15:23:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:56.033 15:23:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:27:56.033 15:23:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:56.033 15:23:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:56.033 15:23:48 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:56.033 15:23:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:56.033 15:23:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:56.033 15:23:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:56.033 15:23:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:56.033 15:23:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:56.033 15:23:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:56.033 15:23:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:56.033 15:23:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:56.033 15:23:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:56.033 15:23:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:56.033 15:23:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:56.033 15:23:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:56.295 15:23:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:56.295 15:23:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:56.295 15:23:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:56.295 15:23:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:56.295 15:23:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 
00:27:56.295 15:23:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:56.589 15:23:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:56.589 15:23:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:56.589 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:56.589 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.641 ms 00:27:56.589 00:27:56.589 --- 10.0.0.2 ping statistics --- 00:27:56.589 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:56.589 rtt min/avg/max/mdev = 0.641/0.641/0.641/0.000 ms 00:27:56.589 15:23:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:56.589 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:56.589 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.390 ms 00:27:56.589 00:27:56.589 --- 10.0.0.1 ping statistics --- 00:27:56.589 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:56.589 rtt min/avg/max/mdev = 0.390/0.390/0.390/0.000 ms 00:27:56.589 15:23:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:56.589 15:23:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:27:56.589 15:23:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:56.589 15:23:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:56.589 15:23:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:56.589 15:23:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:56.589 15:23:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:56.589 15:23:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 
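The `nvmf_tcp_init` sequence above builds a two-namespace loopback topology: the target-side port moves into a netns, both ends get 10.0.0.x/24 addresses, the NVMe/TCP port 4420 is opened in iptables, and both directions are ping-verified. Collected as a standalone configuration sketch (requires root and the same `cvl_0_0`/`cvl_0_1` port pair from the log; not runnable as a test here):

```shell
# Configuration sketch (root required): the nvmf_tcp_init plumbing from
# the trace. Interface names and addresses are copied from the log.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the netns
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP port
ping -c 1 10.0.0.2                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator
```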
00:27:56.589 15:23:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:56.589 15:23:48 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:27:56.589 15:23:48 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:27:56.589 15:23:48 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:27:56.589 15:23:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:27:56.589 15:23:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:56.589 15:23:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:56.589 ************************************ 00:27:56.589 START TEST nvmf_digest_clean 00:27:56.589 ************************************ 00:27:56.589 15:23:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1125 -- # run_digest 00:27:56.589 15:23:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:27:56.589 15:23:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:27:56.589 15:23:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:27:56.589 15:23:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:27:56.589 15:23:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:27:56.589 15:23:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:56.589 15:23:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:56.589 15:23:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 
00:27:56.589 15:23:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=417249 00:27:56.589 15:23:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 417249 00:27:56.589 15:23:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 417249 ']' 00:27:56.589 15:23:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:27:56.589 15:23:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:56.589 15:23:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:56.589 15:23:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:56.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:56.589 15:23:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:56.589 15:23:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:56.589 [2024-07-25 15:23:48.659334] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:27:56.589 [2024-07-25 15:23:48.659383] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:56.589 EAL: No free 2048 kB hugepages reported on node 1 00:27:56.589 [2024-07-25 15:23:48.723044] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:56.850 [2024-07-25 15:23:48.785904] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:56.850 [2024-07-25 15:23:48.785938] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:56.850 [2024-07-25 15:23:48.785946] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:56.850 [2024-07-25 15:23:48.785952] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:56.850 [2024-07-25 15:23:48.785958] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:56.850 [2024-07-25 15:23:48.785975] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:57.422 15:23:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:57.422 15:23:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:27:57.422 15:23:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:57.422 15:23:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:57.422 15:23:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:57.422 15:23:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:57.422 15:23:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:27:57.422 15:23:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:27:57.422 15:23:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:27:57.422 15:23:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.422 15:23:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:57.422 null0 00:27:57.422 [2024-07-25 15:23:49.540620] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:57.422 [2024-07-25 15:23:49.564807] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:57.422 15:23:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.422 15:23:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 
00:27:57.422 15:23:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:57.422 15:23:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:57.422 15:23:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:27:57.422 15:23:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:27:57.422 15:23:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:27:57.422 15:23:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:27:57.422 15:23:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=417363 00:27:57.422 15:23:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 417363 /var/tmp/bperf.sock 00:27:57.422 15:23:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 417363 ']' 00:27:57.422 15:23:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:57.422 15:23:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:57.422 15:23:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:57.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:27:57.422 15:23:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:57.422 15:23:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:57.422 15:23:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:27:57.683 [2024-07-25 15:23:49.616996] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:27:57.683 [2024-07-25 15:23:49.617048] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid417363 ] 00:27:57.683 EAL: No free 2048 kB hugepages reported on node 1 00:27:57.683 [2024-07-25 15:23:49.691897] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:57.683 [2024-07-25 15:23:49.755905] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:58.256 15:23:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:58.256 15:23:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:27:58.256 15:23:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:27:58.256 15:23:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:58.256 15:23:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:58.517 15:23:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t 
tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:58.517 15:23:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:58.778 nvme0n1 00:27:58.778 15:23:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:58.778 15:23:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:58.778 Running I/O for 2 seconds... 00:28:01.329 00:28:01.329 Latency(us) 00:28:01.329 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:01.329 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:28:01.329 nvme0n1 : 2.00 20881.60 81.57 0.00 0.00 6121.30 2990.08 18896.21 00:28:01.329 =================================================================================================================== 00:28:01.329 Total : 20881.60 81.57 0.00 0.00 6121.30 2990.08 18896.21 00:28:01.329 0 00:28:01.329 15:23:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:01.329 15:23:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:01.329 15:23:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:01.329 15:23:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:01.329 | select(.opcode=="crc32c") 00:28:01.329 | "\(.module_name) \(.executed)"' 00:28:01.329 15:23:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:01.329 15:23:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:01.329 15:23:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:01.329 15:23:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:01.329 15:23:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:01.329 15:23:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 417363 00:28:01.329 15:23:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 417363 ']' 00:28:01.329 15:23:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 417363 00:28:01.329 15:23:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:28:01.329 15:23:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:01.330 15:23:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 417363 00:28:01.330 15:23:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:01.330 15:23:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:28:01.330 15:23:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 417363' 00:28:01.330 killing process with pid 417363 00:28:01.330 15:23:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 417363 00:28:01.330 Received shutdown signal, test time was about 2.000000 seconds 00:28:01.330 
00:28:01.330 Latency(us) 00:28:01.330 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:01.330 =================================================================================================================== 00:28:01.330 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:01.330 15:23:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 417363 00:28:01.330 15:23:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:28:01.330 15:23:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:01.330 15:23:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:01.330 15:23:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:28:01.330 15:23:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:28:01.330 15:23:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:28:01.330 15:23:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:01.330 15:23:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=418133 00:28:01.330 15:23:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 418133 /var/tmp/bperf.sock 00:28:01.330 15:23:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 418133 ']' 00:28:01.330 15:23:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:28:01.330 15:23:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:28:01.330 15:23:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:01.330 15:23:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:01.330 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:01.330 15:23:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:01.330 15:23:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:01.330 [2024-07-25 15:23:53.368121] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:28:01.330 [2024-07-25 15:23:53.368179] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid418133 ] 00:28:01.330 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:01.330 Zero copy mechanism will not be used. 
00:28:01.330 EAL: No free 2048 kB hugepages reported on node 1 00:28:01.330 [2024-07-25 15:23:53.442593] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:01.330 [2024-07-25 15:23:53.506248] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:02.276 15:23:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:02.276 15:23:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:28:02.276 15:23:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:02.276 15:23:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:02.276 15:23:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:02.276 15:23:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:02.276 15:23:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:02.537 nvme0n1 00:28:02.538 15:23:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:02.538 15:23:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:02.538 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:02.538 Zero copy mechanism will not be used. 00:28:02.538 Running I/O for 2 seconds... 
00:28:05.086 00:28:05.086 Latency(us) 00:28:05.086 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:05.086 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:28:05.086 nvme0n1 : 2.01 1930.13 241.27 0.00 0.00 8284.18 1952.43 14417.92 00:28:05.086 =================================================================================================================== 00:28:05.086 Total : 1930.13 241.27 0.00 0.00 8284.18 1952.43 14417.92 00:28:05.086 0 00:28:05.086 15:23:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:05.086 15:23:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:05.086 15:23:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:05.086 15:23:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:05.086 | select(.opcode=="crc32c") 00:28:05.087 | "\(.module_name) \(.executed)"' 00:28:05.087 15:23:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:05.087 15:23:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:05.087 15:23:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:05.087 15:23:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:05.087 15:23:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:05.087 15:23:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 418133 00:28:05.087 15:23:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 418133 ']' 
00:28:05.087 15:23:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 418133 00:28:05.087 15:23:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:28:05.087 15:23:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:05.087 15:23:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 418133 00:28:05.087 15:23:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:05.087 15:23:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:28:05.087 15:23:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 418133' 00:28:05.087 killing process with pid 418133 00:28:05.087 15:23:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 418133 00:28:05.087 Received shutdown signal, test time was about 2.000000 seconds 00:28:05.087 00:28:05.087 Latency(us) 00:28:05.087 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:05.087 =================================================================================================================== 00:28:05.087 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:05.087 15:23:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 418133 00:28:05.087 15:23:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:28:05.087 15:23:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:05.087 15:23:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:05.087 
15:23:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:28:05.087 15:23:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:28:05.087 15:23:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:28:05.087 15:23:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:05.087 15:23:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=418898 00:28:05.087 15:23:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 418898 /var/tmp/bperf.sock 00:28:05.087 15:23:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 418898 ']' 00:28:05.087 15:23:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:05.087 15:23:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:28:05.087 15:23:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:05.087 15:23:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:05.087 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:05.087 15:23:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:05.087 15:23:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:05.087 [2024-07-25 15:23:57.101934] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:28:05.087 [2024-07-25 15:23:57.101991] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid418898 ] 00:28:05.087 EAL: No free 2048 kB hugepages reported on node 1 00:28:05.087 [2024-07-25 15:23:57.176087] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:05.087 [2024-07-25 15:23:57.229557] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:06.030 15:23:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:06.030 15:23:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:28:06.030 15:23:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:06.030 15:23:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:06.030 15:23:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:06.030 15:23:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:06.030 15:23:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:06.291 nvme0n1 00:28:06.291 15:23:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:06.291 15:23:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:06.291 Running I/O for 2 seconds... 00:28:08.840 00:28:08.840 Latency(us) 00:28:08.840 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:08.840 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:08.840 nvme0n1 : 2.00 21870.48 85.43 0.00 0.00 5844.52 3932.16 15291.73 00:28:08.840 =================================================================================================================== 00:28:08.840 Total : 21870.48 85.43 0.00 0.00 5844.52 3932.16 15291.73 00:28:08.840 0 00:28:08.840 15:24:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:08.840 15:24:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:08.840 15:24:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:08.840 15:24:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:08.840 | select(.opcode=="crc32c") 00:28:08.840 | "\(.module_name) \(.executed)"' 00:28:08.840 15:24:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:08.840 15:24:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:08.840 15:24:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:08.840 15:24:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:08.840 15:24:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:08.840 15:24:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@98 -- # killprocess 418898 00:28:08.840 15:24:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 418898 ']' 00:28:08.840 15:24:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 418898 00:28:08.840 15:24:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:28:08.840 15:24:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:08.840 15:24:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 418898 00:28:08.840 15:24:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:08.840 15:24:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:28:08.840 15:24:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 418898' 00:28:08.840 killing process with pid 418898 00:28:08.840 15:24:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 418898 00:28:08.840 Received shutdown signal, test time was about 2.000000 seconds 00:28:08.840 00:28:08.840 Latency(us) 00:28:08.840 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:08.840 =================================================================================================================== 00:28:08.840 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:08.840 15:24:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 418898 00:28:08.841 15:24:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:28:08.841 15:24:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # 
local rw bs qd scan_dsa 00:28:08.841 15:24:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:08.841 15:24:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:28:08.841 15:24:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:28:08.841 15:24:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:28:08.841 15:24:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:08.841 15:24:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=419686 00:28:08.841 15:24:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 419686 /var/tmp/bperf.sock 00:28:08.841 15:24:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 419686 ']' 00:28:08.841 15:24:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:28:08.841 15:24:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:08.841 15:24:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:08.841 15:24:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:08.841 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:28:08.841 15:24:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:08.841 15:24:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:08.841 [2024-07-25 15:24:00.798570] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:28:08.841 [2024-07-25 15:24:00.798626] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid419686 ] 00:28:08.841 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:08.841 Zero copy mechanism will not be used. 00:28:08.841 EAL: No free 2048 kB hugepages reported on node 1 00:28:08.841 [2024-07-25 15:24:00.875615] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:08.841 [2024-07-25 15:24:00.928358] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:09.411 15:24:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:09.411 15:24:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:28:09.411 15:24:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:09.411 15:24:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:09.411 15:24:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:09.672 15:24:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:09.672 15:24:01 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:09.934 nvme0n1 00:28:09.934 15:24:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:09.934 15:24:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:09.934 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:09.934 Zero copy mechanism will not be used. 00:28:09.934 Running I/O for 2 seconds... 00:28:12.481 00:28:12.481 Latency(us) 00:28:12.481 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:12.481 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:28:12.481 nvme0n1 : 2.01 2274.17 284.27 0.00 0.00 7022.02 5597.87 29709.65 00:28:12.481 =================================================================================================================== 00:28:12.481 Total : 2274.17 284.27 0.00 0.00 7022.02 5597.87 29709.65 00:28:12.481 0 00:28:12.481 15:24:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:12.481 15:24:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:12.481 15:24:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:12.481 15:24:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:12.481 | select(.opcode=="crc32c") 00:28:12.481 | "\(.module_name) \(.executed)"' 00:28:12.481 15:24:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:12.481 15:24:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:12.481 15:24:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:12.481 15:24:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:12.481 15:24:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:12.481 15:24:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 419686 00:28:12.481 15:24:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 419686 ']' 00:28:12.481 15:24:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 419686 00:28:12.481 15:24:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:28:12.481 15:24:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:12.481 15:24:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 419686 00:28:12.481 15:24:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:12.481 15:24:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:28:12.481 15:24:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 419686' 00:28:12.481 killing process with pid 419686 00:28:12.481 15:24:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 419686 00:28:12.481 Received shutdown signal, test time was about 2.000000 seconds 00:28:12.481 
00:28:12.481 Latency(us) 00:28:12.481 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:12.481 =================================================================================================================== 00:28:12.481 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:12.481 15:24:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 419686 00:28:12.481 15:24:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 417249 00:28:12.481 15:24:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 417249 ']' 00:28:12.481 15:24:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 417249 00:28:12.481 15:24:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:28:12.481 15:24:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:12.481 15:24:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 417249 00:28:12.481 15:24:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:12.481 15:24:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:12.481 15:24:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 417249' 00:28:12.481 killing process with pid 417249 00:28:12.481 15:24:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 417249 00:28:12.481 15:24:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 417249 00:28:12.481 00:28:12.481 real 0m15.986s 00:28:12.481 user 0m31.587s 00:28:12.481 sys 0m3.049s 00:28:12.481 15:24:04 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:12.481 15:24:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:12.481 ************************************ 00:28:12.481 END TEST nvmf_digest_clean 00:28:12.481 ************************************ 00:28:12.481 15:24:04 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:28:12.481 15:24:04 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:28:12.481 15:24:04 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:12.481 15:24:04 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:12.481 ************************************ 00:28:12.481 START TEST nvmf_digest_error 00:28:12.481 ************************************ 00:28:12.481 15:24:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1125 -- # run_digest_error 00:28:12.481 15:24:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:28:12.481 15:24:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:12.481 15:24:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:12.481 15:24:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:12.481 15:24:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=420466 00:28:12.481 15:24:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 420466 00:28:12.481 15:24:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 
--wait-for-rpc 00:28:12.481 15:24:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 420466 ']' 00:28:12.481 15:24:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:12.481 15:24:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:12.481 15:24:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:12.481 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:12.482 15:24:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:12.482 15:24:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:12.743 [2024-07-25 15:24:04.720038] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:28:12.743 [2024-07-25 15:24:04.720083] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:12.743 EAL: No free 2048 kB hugepages reported on node 1 00:28:12.743 [2024-07-25 15:24:04.788216] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:12.743 [2024-07-25 15:24:04.851034] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:12.743 [2024-07-25 15:24:04.851072] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:28:12.743 [2024-07-25 15:24:04.851079] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:12.743 [2024-07-25 15:24:04.851086] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:12.743 [2024-07-25 15:24:04.851091] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:12.743 [2024-07-25 15:24:04.851117] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:12.743 15:24:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:12.743 15:24:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:28:12.743 15:24:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:12.743 15:24:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:12.743 15:24:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:13.004 15:24:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:13.004 15:24:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:28:13.004 15:24:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:13.004 15:24:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:13.004 [2024-07-25 15:24:04.943616] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:28:13.004 15:24:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:13.004 15:24:04 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:28:13.004 15:24:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:28:13.004 15:24:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:13.004 15:24:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:13.004 null0 00:28:13.004 [2024-07-25 15:24:05.020528] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:13.004 [2024-07-25 15:24:05.044732] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:13.004 15:24:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:13.004 15:24:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:28:13.004 15:24:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:28:13.004 15:24:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:28:13.004 15:24:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:28:13.004 15:24:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:28:13.004 15:24:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=420492 00:28:13.004 15:24:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 420492 /var/tmp/bperf.sock 00:28:13.004 15:24:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 420492 ']' 00:28:13.004 15:24:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:13.004 15:24:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:28:13.004 15:24:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:13.004 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:13.004 15:24:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:13.004 15:24:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:13.005 15:24:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:28:13.005 [2024-07-25 15:24:05.097512] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:28:13.005 [2024-07-25 15:24:05.097557] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid420492 ] 00:28:13.005 EAL: No free 2048 kB hugepages reported on node 1 00:28:13.005 [2024-07-25 15:24:05.171364] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:13.266 [2024-07-25 15:24:05.225194] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:13.839 15:24:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:13.839 15:24:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:28:13.839 15:24:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:13.839 15:24:05 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:13.839 15:24:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:13.839 15:24:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:13.839 15:24:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:13.839 15:24:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:13.839 15:24:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:13.839 15:24:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:14.100 nvme0n1 00:28:14.100 15:24:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:28:14.100 15:24:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.100 15:24:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:14.361 15:24:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.361 15:24:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:14.361 15:24:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:14.361 Running I/O for 2 seconds... 00:28:14.361 [2024-07-25 15:24:06.395319] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd30cd0) 00:28:14.362 [2024-07-25 15:24:06.395353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3472 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.362 [2024-07-25 15:24:06.395363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.362 [2024-07-25 15:24:06.408775] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd30cd0) 00:28:14.362 [2024-07-25 15:24:06.408797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5857 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.362 [2024-07-25 15:24:06.408805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.362 [2024-07-25 15:24:06.420775] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd30cd0) 00:28:14.362 [2024-07-25 15:24:06.420793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7932 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.362 [2024-07-25 15:24:06.420801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.362 [2024-07-25 15:24:06.432795] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd30cd0) 00:28:14.362 [2024-07-25 15:24:06.432815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 
lba:6830 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.362 [2024-07-25 15:24:06.432822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.362 [2024-07-25 15:24:06.445106] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd30cd0) 00:28:14.362 [2024-07-25 15:24:06.445126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:16842 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.362 [2024-07-25 15:24:06.445132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.362 [2024-07-25 15:24:06.457927] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd30cd0) 00:28:14.362 [2024-07-25 15:24:06.457946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10889 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.362 [2024-07-25 15:24:06.457952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.362 [2024-07-25 15:24:06.471052] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd30cd0) 00:28:14.362 [2024-07-25 15:24:06.471070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:8624 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.362 [2024-07-25 15:24:06.471076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.362 [2024-07-25 15:24:06.483393] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd30cd0) 00:28:14.362 [2024-07-25 15:24:06.483410] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21409 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.362 [2024-07-25 15:24:06.483416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.362 [2024-07-25 15:24:06.494704] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd30cd0) 00:28:14.362 [2024-07-25 15:24:06.494722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:25074 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.362 [2024-07-25 15:24:06.494729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.362 [2024-07-25 15:24:06.506780] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd30cd0) 00:28:14.362 [2024-07-25 15:24:06.506798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:24519 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.362 [2024-07-25 15:24:06.506805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.362 [2024-07-25 15:24:06.518610] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd30cd0) 00:28:14.362 [2024-07-25 15:24:06.518628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:20047 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.362 [2024-07-25 15:24:06.518634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.362 [2024-07-25 15:24:06.529835] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd30cd0) 
00:28:14.362 [2024-07-25 15:24:06.529853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:20960 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.362 [2024-07-25 15:24:06.529863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.362 [2024-07-25 15:24:06.543854] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd30cd0) 00:28:14.362 [2024-07-25 15:24:06.543871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:14062 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.362 [2024-07-25 15:24:06.543878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.627 [2024-07-25 15:24:06.555680] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd30cd0) 00:28:14.627 [2024-07-25 15:24:06.555698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:14677 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.627 [2024-07-25 15:24:06.555704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.627 [2024-07-25 15:24:06.568551] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd30cd0) 00:28:14.627 [2024-07-25 15:24:06.568568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:12113 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.627 [2024-07-25 15:24:06.568574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.627 [2024-07-25 15:24:06.579487] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd30cd0) 00:28:14.627 [2024-07-25 15:24:06.579504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15487 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.627 [2024-07-25 15:24:06.579510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.627 [2024-07-25 15:24:06.592085] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd30cd0) 00:28:14.627 [2024-07-25 15:24:06.592103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:24648 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.627 [2024-07-25 15:24:06.592109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.627 [2024-07-25 15:24:06.604683] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd30cd0) 00:28:14.627 [2024-07-25 15:24:06.604699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:10781 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.627 [2024-07-25 15:24:06.604706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.627 [2024-07-25 15:24:06.616406] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd30cd0) 00:28:14.627 [2024-07-25 15:24:06.616424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:24796 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.627 [2024-07-25 15:24:06.616430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:28:14.627 [2024-07-25 15:24:06.628230] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd30cd0) 00:28:14.627 [2024-07-25 15:24:06.628247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:6593 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.627 [2024-07-25 15:24:06.628254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.627 [2024-07-25 15:24:06.640581] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd30cd0) 00:28:14.627 [2024-07-25 15:24:06.640598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:8769 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.627 [2024-07-25 15:24:06.640605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.627 [2024-07-25 15:24:06.653429] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd30cd0) 00:28:14.627 [2024-07-25 15:24:06.653446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11069 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.627 [2024-07-25 15:24:06.653452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.627 [2024-07-25 15:24:06.665643] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd30cd0) 00:28:14.627 [2024-07-25 15:24:06.665660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:1736 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.627 [2024-07-25 15:24:06.665666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.627 [2024-07-25 15:24:06.678135] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd30cd0) 00:28:14.627 [2024-07-25 15:24:06.678152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:11545 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.627 [2024-07-25 15:24:06.678159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.627 [2024-07-25 15:24:06.691098] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd30cd0) 00:28:14.628 [2024-07-25 15:24:06.691115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:22475 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.628 [2024-07-25 15:24:06.691121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.628 [2024-07-25 15:24:06.702196] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd30cd0) 00:28:14.628 [2024-07-25 15:24:06.702216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:13823 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.628 [2024-07-25 15:24:06.702222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.628 [2024-07-25 15:24:06.714334] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd30cd0) 00:28:14.628 [2024-07-25 15:24:06.714352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:12958 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.628 [2024-07-25 15:24:06.714358] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.628 [2024-07-25 15:24:06.726752] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd30cd0) 00:28:14.628 [2024-07-25 15:24:06.726769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:24876 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.628 [2024-07-25 15:24:06.726776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.628 [2024-07-25 15:24:06.738950] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd30cd0) 00:28:14.628 [2024-07-25 15:24:06.738968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:9346 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.628 [2024-07-25 15:24:06.738977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.628 [2024-07-25 15:24:06.751592] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd30cd0) 00:28:14.628 [2024-07-25 15:24:06.751609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:19935 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.628 [2024-07-25 15:24:06.751615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.628 [2024-07-25 15:24:06.762573] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd30cd0) 00:28:14.628 [2024-07-25 15:24:06.762591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:4500 len:1 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:28:14.628 [2024-07-25 15:24:06.762598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.628 [2024-07-25 15:24:06.775380] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd30cd0) 00:28:14.628 [2024-07-25 15:24:06.775398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:2508 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.628 [2024-07-25 15:24:06.775404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.628 [2024-07-25 15:24:06.788706] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd30cd0) 00:28:14.628 [2024-07-25 15:24:06.788723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:2744 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.628 [2024-07-25 15:24:06.788729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.628 [2024-07-25 15:24:06.801161] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd30cd0) 00:28:14.628 [2024-07-25 15:24:06.801178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:12937 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.628 [2024-07-25 15:24:06.801186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.628 [2024-07-25 15:24:06.812292] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd30cd0) 00:28:14.628 [2024-07-25 15:24:06.812310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:13 nsid:1 lba:24209 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.628 [2024-07-25 15:24:06.812316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.952 [2024-07-25 15:24:06.824271] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd30cd0) 00:28:14.952 [2024-07-25 15:24:06.824289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14846 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.952 [2024-07-25 15:24:06.824295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.952 [2024-07-25 15:24:06.836510] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd30cd0) 00:28:14.952 [2024-07-25 15:24:06.836528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:17771 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.952 [2024-07-25 15:24:06.836535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.952 [2024-07-25 15:24:06.849773] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd30cd0) 00:28:14.952 [2024-07-25 15:24:06.849793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:471 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.952 [2024-07-25 15:24:06.849800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.952 [2024-07-25 15:24:06.861920] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd30cd0) 00:28:14.952 [2024-07-25 15:24:06.861937] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:3118 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.952 [2024-07-25 15:24:06.861943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.952 [2024-07-25 15:24:06.873642] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd30cd0) 00:28:14.952 [2024-07-25 15:24:06.873659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:9716 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.952 [2024-07-25 15:24:06.873666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.952 [2024-07-25 15:24:06.886302] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd30cd0) 00:28:14.952 [2024-07-25 15:24:06.886319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:19019 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.952 [2024-07-25 15:24:06.886325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.952 [2024-07-25 15:24:06.898656] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd30cd0) 00:28:14.952 [2024-07-25 15:24:06.898673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:14806 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.952 [2024-07-25 15:24:06.898679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.952 [2024-07-25 15:24:06.909996] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0xd30cd0) 00:28:14.952 [2024-07-25 15:24:06.910013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8882 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.952 [2024-07-25 15:24:06.910020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.952 [2024-07-25 15:24:06.922041] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd30cd0) 00:28:14.952 [2024-07-25 15:24:06.922058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:17682 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.952 [2024-07-25 15:24:06.922064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.952 [2024-07-25 15:24:06.934240] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd30cd0) 00:28:14.952 [2024-07-25 15:24:06.934258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:17337 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.952 [2024-07-25 15:24:06.934264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.952 [2024-07-25 15:24:06.945886] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd30cd0) 00:28:14.952 [2024-07-25 15:24:06.945903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:20161 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.952 [2024-07-25 15:24:06.945910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.952 [2024-07-25 15:24:06.958652] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd30cd0) 00:28:14.952 [2024-07-25 15:24:06.958670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:22036 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.952 [2024-07-25 15:24:06.958676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.952 [2024-07-25 15:24:06.970827] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd30cd0) 00:28:14.952 [2024-07-25 15:24:06.970844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:19436 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.952 [2024-07-25 15:24:06.970851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.952 [2024-07-25 15:24:06.983668] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd30cd0) 00:28:14.952 [2024-07-25 15:24:06.983685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5825 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.952 [2024-07-25 15:24:06.983691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.952 [2024-07-25 15:24:06.996910] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd30cd0) 00:28:14.952 [2024-07-25 15:24:06.996927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:10498 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.952 [2024-07-25 15:24:06.996934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:28:14.952 [2024-07-25 15:24:07.007379] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd30cd0) 00:28:14.952 [2024-07-25 15:24:07.007396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:4091 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.952 [2024-07-25 15:24:07.007402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.952 [2024-07-25 15:24:07.020140] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd30cd0) 00:28:14.952 [2024-07-25 15:24:07.020157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:14917 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.952 [2024-07-25 15:24:07.020163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.952 [2024-07-25 15:24:07.032664] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd30cd0) 00:28:14.952 [2024-07-25 15:24:07.032682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:293 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.952 [2024-07-25 15:24:07.032689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.952 [2024-07-25 15:24:07.044718] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd30cd0) 00:28:14.952 [2024-07-25 15:24:07.044735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.952 [2024-07-25 15:24:07.044742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.952 [2024-07-25 15:24:07.056691] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd30cd0) 00:28:14.952 [2024-07-25 15:24:07.056708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:25024 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.952 [2024-07-25 15:24:07.056717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.952 [2024-07-25 15:24:07.068692] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd30cd0) 00:28:14.952 [2024-07-25 15:24:07.068710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:4369 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.952 [2024-07-25 15:24:07.068717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.952 [2024-07-25 15:24:07.081646] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd30cd0) 00:28:14.952 [2024-07-25 15:24:07.081663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:9045 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.952 [2024-07-25 15:24:07.081670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.952 [2024-07-25 15:24:07.094349] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd30cd0) 00:28:14.953 [2024-07-25 15:24:07.094367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:8235 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.953 [2024-07-25 15:24:07.094375] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.953 [2024-07-25 15:24:07.106530] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd30cd0) 00:28:14.953 [2024-07-25 15:24:07.106547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23146 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.953 [2024-07-25 15:24:07.106554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.953 [2024-07-25 15:24:07.117962] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd30cd0) 00:28:14.953 [2024-07-25 15:24:07.117980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19571 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.953 [2024-07-25 15:24:07.117986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.213 [2024-07-25 15:24:07.130385] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd30cd0) 00:28:15.213 [2024-07-25 15:24:07.130404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:14959 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.213 [2024-07-25 15:24:07.130410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.213 [2024-07-25 15:24:07.141708] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd30cd0) 00:28:15.214 [2024-07-25 15:24:07.141725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:15577 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:15.214 [2024-07-25 15:24:07.141732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.214 [2024-07-25 15:24:07.154053] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd30cd0) 00:28:15.214 [2024-07-25 15:24:07.154070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:176 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.214 [2024-07-25 15:24:07.154076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.214 [2024-07-25 15:24:07.167894] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd30cd0) 00:28:15.214 [2024-07-25 15:24:07.167914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:9663 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.214 [2024-07-25 15:24:07.167921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.214 [2024-07-25 15:24:07.178133] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd30cd0) 00:28:15.214 [2024-07-25 15:24:07.178149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:19011 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.214 [2024-07-25 15:24:07.178156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.214 [2024-07-25 15:24:07.190545] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd30cd0) 00:28:15.214 [2024-07-25 15:24:07.190561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:4 nsid:1 lba:24171 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.214 [2024-07-25 15:24:07.190568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.214 [2024-07-25 15:24:07.202610] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd30cd0) 00:28:15.214 [2024-07-25 15:24:07.202627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:18683 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.214 [2024-07-25 15:24:07.202633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.214 [2024-07-25 15:24:07.216448] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd30cd0) 00:28:15.214 [2024-07-25 15:24:07.216465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4472 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.214 [2024-07-25 15:24:07.216471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.214 [2024-07-25 15:24:07.228179] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd30cd0) 00:28:15.214 [2024-07-25 15:24:07.228195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:18257 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.214 [2024-07-25 15:24:07.228205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.214 [2024-07-25 15:24:07.239324] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd30cd0) 00:28:15.214 [2024-07-25 15:24:07.239341] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:20543 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.214 [2024-07-25 15:24:07.239348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.214 [2024-07-25 15:24:07.252271] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd30cd0) 00:28:15.214 [2024-07-25 15:24:07.252288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:19295 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.214 [2024-07-25 15:24:07.252294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.214 [2024-07-25 15:24:07.264927] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd30cd0) 00:28:15.214 [2024-07-25 15:24:07.264945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14674 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.214 [2024-07-25 15:24:07.264951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.214 [2024-07-25 15:24:07.276967] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd30cd0) 00:28:15.214 [2024-07-25 15:24:07.276984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:24776 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.214 [2024-07-25 15:24:07.276991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.214 [2024-07-25 15:24:07.288879] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0xd30cd0) 00:28:15.214 [2024-07-25 15:24:07.288896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:20960 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.214 [2024-07-25 15:24:07.288903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.214 [2024-07-25 15:24:07.301269] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd30cd0) 00:28:15.214 [2024-07-25 15:24:07.301286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:2471 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.214 [2024-07-25 15:24:07.301292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.214 [2024-07-25 15:24:07.314039] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd30cd0) 00:28:15.214 [2024-07-25 15:24:07.314056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:6703 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.214 [2024-07-25 15:24:07.314063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.214 [2024-07-25 15:24:07.326357] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd30cd0) 00:28:15.214 [2024-07-25 15:24:07.326375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:9863 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.214 [2024-07-25 15:24:07.326381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.214 [2024-07-25 15:24:07.338049] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd30cd0) 00:28:15.214 [2024-07-25 15:24:07.338066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:6408 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.214 [2024-07-25 15:24:07.338073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.214 [2024-07-25 15:24:07.349610] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd30cd0) 00:28:15.214 [2024-07-25 15:24:07.349627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:1354 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.214 [2024-07-25 15:24:07.349634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.214 [2024-07-25 15:24:07.361704] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd30cd0) 00:28:15.214 [2024-07-25 15:24:07.361721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:10788 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.214 [2024-07-25 15:24:07.361727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.214 [2024-07-25 15:24:07.375312] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd30cd0) 00:28:15.214 [2024-07-25 15:24:07.375331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:5918 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.214 [2024-07-25 15:24:07.375342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:28:15.214 [2024-07-25 15:24:07.387034] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd30cd0) 00:28:15.214 [2024-07-25 15:24:07.387051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:14243 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.214 [2024-07-25 15:24:07.387058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.214 [2024-07-25 15:24:07.398323] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd30cd0) 00:28:15.214 [2024-07-25 15:24:07.398341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12654 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.214 [2024-07-25 15:24:07.398347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.476 [2024-07-25 15:24:07.410480] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd30cd0) 00:28:15.476 [2024-07-25 15:24:07.410497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:20084 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.476 [2024-07-25 15:24:07.410504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.476 [2024-07-25 15:24:07.423484] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd30cd0) 00:28:15.476 [2024-07-25 15:24:07.423501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12749 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.476 [2024-07-25 15:24:07.423508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.476 [2024-07-25 15:24:07.435348] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd30cd0) 00:28:15.476 [2024-07-25 15:24:07.435366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:19918 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.476 [2024-07-25 15:24:07.435372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.476 [2024-07-25 15:24:07.447074] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd30cd0) 00:28:15.476 [2024-07-25 15:24:07.447092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17642 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.476 [2024-07-25 15:24:07.447099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.476 [2024-07-25 15:24:07.459674] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd30cd0) 00:28:15.476 [2024-07-25 15:24:07.459691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:7572 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.476 [2024-07-25 15:24:07.459698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.476 [2024-07-25 15:24:07.471884] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd30cd0) 00:28:15.476 [2024-07-25 15:24:07.471901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:24706 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.476 [2024-07-25 15:24:07.471907] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:15.476 [2024-07-25 15:24:07.484597] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd30cd0)
00:28:15.476 [2024-07-25 15:24:07.484614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:13266 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:15.476 [2024-07-25 15:24:07.484621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... repeats elided: the same three-line pattern (nvme_tcp.c:1459 "data digest error on tqpair=(0xd30cd0)", nvme_qpair.c:243 READ command print, nvme_qpair.c:474 COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion on qid:1) recurs for dozens more commands with varying cid/lba, from 15:24:07.496 through 15:24:08.375 ...]
00:28:16.263
00:28:16.263 Latency(us)
00:28:16.263 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:16.263 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:28:16.263 nvme0n1 : 2.00 20802.28 81.26 0.00 0.00 6146.35 3263.15 18677.76
00:28:16.263 ===================================================================================================================
00:28:16.263 Total : 20802.28 81.26 0.00 0.00 6146.35 3263.15 18677.76
00:28:16.263 0
00:28:16.263 15:24:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:28:16.263 15:24:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:28:16.263 15:24:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:28:16.263 15:24:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:28:16.263 | .driver_specific
00:28:16.263 | .nvme_error
00:28:16.263 | .status_code
00:28:16.263 | .command_transient_transport_error'
00:28:16.524 15:24:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 163 > 0 ))
00:28:16.524 15:24:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 420492
00:28:16.524 15:24:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 420492 ']'
00:28:16.524 15:24:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 420492
00:28:16.524 15:24:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:28:16.524 15:24:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:28:16.524 15:24:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 420492
00:28:16.524 15:24:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:28:16.524 15:24:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:28:16.524 15:24:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 420492'
killing process with pid 420492
00:28:16.524 15:24:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 420492
Received shutdown signal, test time was about 2.000000 seconds
00:28:16.524
00:28:16.524 Latency(us)
00:28:16.524 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:16.524 ===================================================================================================================
00:28:16.524 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:28:16.524 15:24:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 420492
00:28:16.785 15:24:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
15:24:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
15:24:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
15:24:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
15:24:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:28:16.785 15:24:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=421174 00:28:16.785 15:24:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 421174 /var/tmp/bperf.sock 00:28:16.785 15:24:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 421174 ']' 00:28:16.785 15:24:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:28:16.785 15:24:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:16.785 15:24:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:16.785 15:24:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:16.785 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:16.785 15:24:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:16.785 15:24:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:16.785 [2024-07-25 15:24:08.782029] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:28:16.785 [2024-07-25 15:24:08.782086] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid421174 ] 00:28:16.785 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:16.785 Zero copy mechanism will not be used. 00:28:16.785 EAL: No free 2048 kB hugepages reported on node 1 00:28:16.785 [2024-07-25 15:24:08.855458] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:16.785 [2024-07-25 15:24:08.907731] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:17.357 15:24:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:17.357 15:24:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:28:17.357 15:24:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:17.357 15:24:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:17.618 15:24:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:17.618 15:24:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:17.618 15:24:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:17.618 15:24:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:17.618 15:24:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller 
--ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:17.618 15:24:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:17.879 nvme0n1 00:28:17.879 15:24:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:28:17.879 15:24:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:17.879 15:24:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:17.879 15:24:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:17.879 15:24:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:17.879 15:24:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:18.139 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:18.139 Zero copy mechanism will not be used. 00:28:18.139 Running I/O for 2 seconds... 
00:28:18.139 [2024-07-25 15:24:10.111980] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ad9f0) 00:28:18.139 [2024-07-25 15:24:10.112018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.139 [2024-07-25 15:24:10.112028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:18.139 [2024-07-25 15:24:10.130030] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ad9f0) 00:28:18.139 [2024-07-25 15:24:10.130052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.139 [2024-07-25 15:24:10.130060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:18.139 [2024-07-25 15:24:10.147769] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ad9f0) 00:28:18.139 [2024-07-25 15:24:10.147788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.139 [2024-07-25 15:24:10.147795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:18.139 [2024-07-25 15:24:10.167174] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ad9f0) 00:28:18.139 [2024-07-25 15:24:10.167192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.139 [2024-07-25 15:24:10.167210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.139 [2024-07-25 15:24:10.183905] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ad9f0) 00:28:18.139 [2024-07-25 15:24:10.183923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.139 [2024-07-25 15:24:10.183930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:18.139 [2024-07-25 15:24:10.200224] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ad9f0) 00:28:18.139 [2024-07-25 15:24:10.200243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.139 [2024-07-25 15:24:10.200250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:18.139 [2024-07-25 15:24:10.216790] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ad9f0) 00:28:18.139 [2024-07-25 15:24:10.216808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.139 [2024-07-25 15:24:10.216814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:18.139 [2024-07-25 15:24:10.234567] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ad9f0) 00:28:18.139 [2024-07-25 15:24:10.234584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.139 [2024-07-25 15:24:10.234590] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.139 [2024-07-25 15:24:10.252724] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ad9f0) 00:28:18.139 [2024-07-25 15:24:10.252742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.139 [2024-07-25 15:24:10.252749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:18.139 [2024-07-25 15:24:10.271820] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ad9f0) 00:28:18.139 [2024-07-25 15:24:10.271838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.139 [2024-07-25 15:24:10.271845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:18.139 [2024-07-25 15:24:10.290321] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ad9f0) 00:28:18.139 [2024-07-25 15:24:10.290339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.139 [2024-07-25 15:24:10.290345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:18.139 [2024-07-25 15:24:10.307149] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ad9f0) 00:28:18.139 [2024-07-25 15:24:10.307167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:28:18.139 [2024-07-25 15:24:10.307174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.139 [2024-07-25 15:24:10.322535] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ad9f0) 00:28:18.139 [2024-07-25 15:24:10.322559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.139 [2024-07-25 15:24:10.322565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:18.400 [2024-07-25 15:24:10.338475] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ad9f0) 00:28:18.400 [2024-07-25 15:24:10.338493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.400 [2024-07-25 15:24:10.338500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:18.400 [2024-07-25 15:24:10.355488] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ad9f0) 00:28:18.400 [2024-07-25 15:24:10.355506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.400 [2024-07-25 15:24:10.355512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:18.400 [2024-07-25 15:24:10.370087] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ad9f0) 00:28:18.400 [2024-07-25 15:24:10.370105] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.400 [2024-07-25 15:24:10.370111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.400 [2024-07-25 15:24:10.388340] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ad9f0) 00:28:18.400 [2024-07-25 15:24:10.388358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.400 [2024-07-25 15:24:10.388365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:18.400 [2024-07-25 15:24:10.404413] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ad9f0) 00:28:18.400 [2024-07-25 15:24:10.404430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.400 [2024-07-25 15:24:10.404437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:18.400 [2024-07-25 15:24:10.421538] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ad9f0) 00:28:18.400 [2024-07-25 15:24:10.421556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.400 [2024-07-25 15:24:10.421563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:18.400 [2024-07-25 15:24:10.438939] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ad9f0) 00:28:18.400 [2024-07-25 
15:24:10.438958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.400 [2024-07-25 15:24:10.438964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.400 [2024-07-25 15:24:10.454995] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ad9f0) 00:28:18.400 [2024-07-25 15:24:10.455014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.400 [2024-07-25 15:24:10.455020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:18.400 [2024-07-25 15:24:10.469850] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ad9f0) 00:28:18.400 [2024-07-25 15:24:10.469867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.400 [2024-07-25 15:24:10.469874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:18.400 [2024-07-25 15:24:10.486044] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ad9f0) 00:28:18.400 [2024-07-25 15:24:10.486062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.400 [2024-07-25 15:24:10.486068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:18.400 [2024-07-25 15:24:10.503505] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x15ad9f0) 00:28:18.400 [2024-07-25 15:24:10.503523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.400 [2024-07-25 15:24:10.503529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.400 [2024-07-25 15:24:10.518995] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ad9f0) 00:28:18.400 [2024-07-25 15:24:10.519014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.400 [2024-07-25 15:24:10.519021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:18.400 [2024-07-25 15:24:10.537012] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ad9f0) 00:28:18.400 [2024-07-25 15:24:10.537030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.400 [2024-07-25 15:24:10.537036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:18.400 [2024-07-25 15:24:10.553975] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ad9f0) 00:28:18.400 [2024-07-25 15:24:10.553992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.400 [2024-07-25 15:24:10.553999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:18.400 [2024-07-25 15:24:10.571430] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ad9f0) 00:28:18.400 [2024-07-25 15:24:10.571448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.400 [2024-07-25 15:24:10.571454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.400 [2024-07-25 15:24:10.587699] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ad9f0) 00:28:18.400 [2024-07-25 15:24:10.587717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.400 [2024-07-25 15:24:10.587723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:18.662 [2024-07-25 15:24:10.603007] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ad9f0) 00:28:18.662 [2024-07-25 15:24:10.603029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.662 [2024-07-25 15:24:10.603035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:18.662 [2024-07-25 15:24:10.620074] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ad9f0) 00:28:18.662 [2024-07-25 15:24:10.620092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.662 [2024-07-25 15:24:10.620099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0061 p:0 m:0 dnr:0 00:28:18.662 [2024-07-25 15:24:10.637194] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ad9f0) 00:28:18.662 [2024-07-25 15:24:10.637216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.662 [2024-07-25 15:24:10.637223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.662 [2024-07-25 15:24:10.654738] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ad9f0) 00:28:18.662 [2024-07-25 15:24:10.654756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.662 [2024-07-25 15:24:10.654763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:18.662 [2024-07-25 15:24:10.672788] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ad9f0) 00:28:18.662 [2024-07-25 15:24:10.672806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.662 [2024-07-25 15:24:10.672813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:18.662 [2024-07-25 15:24:10.691334] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ad9f0) 00:28:18.662 [2024-07-25 15:24:10.691352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.662 [2024-07-25 15:24:10.691358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:18.662 [2024-07-25 15:24:10.709813] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ad9f0) 00:28:18.662 [2024-07-25 15:24:10.709831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.662 [2024-07-25 15:24:10.709837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.662 [2024-07-25 15:24:10.727059] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ad9f0) 00:28:18.662 [2024-07-25 15:24:10.727078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.662 [2024-07-25 15:24:10.727085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:18.662 [2024-07-25 15:24:10.744885] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ad9f0) 00:28:18.662 [2024-07-25 15:24:10.744903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.662 [2024-07-25 15:24:10.744910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:18.662 [2024-07-25 15:24:10.759631] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ad9f0) 00:28:18.662 [2024-07-25 15:24:10.759649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.662 [2024-07-25 
15:24:10.759656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:18.662 [2024-07-25 15:24:10.776644] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ad9f0)
00:28:18.662 [2024-07-25 15:24:10.776662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.662 [2024-07-25 15:24:10.776669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:18.662 [2024-07-25 15:24:10.795110] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ad9f0)
00:28:18.662 [2024-07-25 15:24:10.795128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.662 [2024-07-25 15:24:10.795135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:18.662 [2024-07-25 15:24:10.811324] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ad9f0)
00:28:18.662 [2024-07-25 15:24:10.811342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.662 [2024-07-25 15:24:10.811349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:18.662 [2024-07-25 15:24:10.829949] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ad9f0)
00:28:18.662 [2024-07-25 15:24:10.829967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.662 [2024-07-25 15:24:10.829974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:18.662 [2024-07-25 15:24:10.842342] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ad9f0)
00:28:18.662 [2024-07-25 15:24:10.842360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.662 [2024-07-25 15:24:10.842367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:18.924 [2024-07-25 15:24:10.858268] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ad9f0)
00:28:18.924 [2024-07-25 15:24:10.858286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.924 [2024-07-25 15:24:10.858293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:18.924 [2024-07-25 15:24:10.874368] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ad9f0)
00:28:18.924 [2024-07-25 15:24:10.874385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.924 [2024-07-25 15:24:10.874392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:18.924 [2024-07-25 15:24:10.889520] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ad9f0)
00:28:18.924 [2024-07-25 15:24:10.889538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.924 [2024-07-25 15:24:10.889548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:18.924 [2024-07-25 15:24:10.905662] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ad9f0)
00:28:18.924 [2024-07-25 15:24:10.905681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.924 [2024-07-25 15:24:10.905687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:18.924 [2024-07-25 15:24:10.922640] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ad9f0)
00:28:18.924 [2024-07-25 15:24:10.922659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.924 [2024-07-25 15:24:10.922665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:18.924 [2024-07-25 15:24:10.938774] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ad9f0)
00:28:18.924 [2024-07-25 15:24:10.938792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.924 [2024-07-25 15:24:10.938799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:18.924 [2024-07-25 15:24:10.955309] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ad9f0)
00:28:18.924 [2024-07-25 15:24:10.955327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.924 [2024-07-25 15:24:10.955333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:18.924 [2024-07-25 15:24:10.971057] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ad9f0)
00:28:18.924 [2024-07-25 15:24:10.971076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.924 [2024-07-25 15:24:10.971083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:18.924 [2024-07-25 15:24:10.989237] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ad9f0)
00:28:18.924 [2024-07-25 15:24:10.989256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.924 [2024-07-25 15:24:10.989262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:18.924 [2024-07-25 15:24:11.005261] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ad9f0)
00:28:18.924 [2024-07-25 15:24:11.005278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.924 [2024-07-25 15:24:11.005285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:18.924 [2024-07-25 15:24:11.021521] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ad9f0)
00:28:18.924 [2024-07-25 15:24:11.021539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.924 [2024-07-25 15:24:11.021546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:18.924 [2024-07-25 15:24:11.040064] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ad9f0)
00:28:18.924 [2024-07-25 15:24:11.040087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.924 [2024-07-25 15:24:11.040093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:18.924 [2024-07-25 15:24:11.055529] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ad9f0)
00:28:18.924 [2024-07-25 15:24:11.055547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.924 [2024-07-25 15:24:11.055555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:18.924 [2024-07-25 15:24:11.069822] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ad9f0)
00:28:18.924 [2024-07-25 15:24:11.069840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.924 [2024-07-25 15:24:11.069847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:18.924 [2024-07-25 15:24:11.082793] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ad9f0)
00:28:18.924 [2024-07-25 15:24:11.082812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.924 [2024-07-25 15:24:11.082819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:18.924 [2024-07-25 15:24:11.099956] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ad9f0)
00:28:18.924 [2024-07-25 15:24:11.099975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.925 [2024-07-25 15:24:11.099981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:19.187 [2024-07-25 15:24:11.117194] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ad9f0)
00:28:19.187 [2024-07-25 15:24:11.117216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.187 [2024-07-25 15:24:11.117224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:19.187 [2024-07-25 15:24:11.132529] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ad9f0)
00:28:19.187 [2024-07-25 15:24:11.132547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.187 [2024-07-25 15:24:11.132554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:19.187 [2024-07-25 15:24:11.148650] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ad9f0)
00:28:19.187 [2024-07-25 15:24:11.148669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.187 [2024-07-25 15:24:11.148675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:19.187 [2024-07-25 15:24:11.165322] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ad9f0)
00:28:19.187 [2024-07-25 15:24:11.165340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.187 [2024-07-25 15:24:11.165346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:19.187 [2024-07-25 15:24:11.181599] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ad9f0)
00:28:19.187 [2024-07-25 15:24:11.181617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.187 [2024-07-25 15:24:11.181624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:19.187 [2024-07-25 15:24:11.201130] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ad9f0)
00:28:19.188 [2024-07-25 15:24:11.201148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.188 [2024-07-25 15:24:11.201155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:19.188 [2024-07-25 15:24:11.217884] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ad9f0)
00:28:19.188 [2024-07-25 15:24:11.217902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.188 [2024-07-25 15:24:11.217908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:19.188 [2024-07-25 15:24:11.234096] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ad9f0)
00:28:19.188 [2024-07-25 15:24:11.234115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.188 [2024-07-25 15:24:11.234121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:19.188 [2024-07-25 15:24:11.250546] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ad9f0)
00:28:19.188 [2024-07-25 15:24:11.250565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.188 [2024-07-25 15:24:11.250572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:19.188 [2024-07-25 15:24:11.267115] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ad9f0)
00:28:19.188 [2024-07-25 15:24:11.267134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.188 [2024-07-25 15:24:11.267140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:19.188 [2024-07-25 15:24:11.281811] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ad9f0)
00:28:19.188 [2024-07-25 15:24:11.281830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.188 [2024-07-25 15:24:11.281837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:19.188 [2024-07-25 15:24:11.298096] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ad9f0)
00:28:19.188 [2024-07-25 15:24:11.298114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.188 [2024-07-25 15:24:11.298121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:19.188 [2024-07-25 15:24:11.315675] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ad9f0)
00:28:19.188 [2024-07-25 15:24:11.315696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.188 [2024-07-25 15:24:11.315702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:19.188 [2024-07-25 15:24:11.332728] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ad9f0)
00:28:19.188 [2024-07-25 15:24:11.332747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.188 [2024-07-25 15:24:11.332753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:19.188 [2024-07-25 15:24:11.347637] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ad9f0)
00:28:19.188 [2024-07-25 15:24:11.347655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.188 [2024-07-25 15:24:11.347662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:19.188 [2024-07-25 15:24:11.363978] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ad9f0)
00:28:19.188 [2024-07-25 15:24:11.363996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.188 [2024-07-25 15:24:11.364003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:19.450 [2024-07-25 15:24:11.378749] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ad9f0)
00:28:19.450 [2024-07-25 15:24:11.378768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.450 [2024-07-25 15:24:11.378775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:19.450 [2024-07-25 15:24:11.396025] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ad9f0)
00:28:19.450 [2024-07-25 15:24:11.396042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.450 [2024-07-25 15:24:11.396048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:19.450 [2024-07-25 15:24:11.414797] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ad9f0)
00:28:19.450 [2024-07-25 15:24:11.414814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.450 [2024-07-25 15:24:11.414821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:19.450 [2024-07-25 15:24:11.431146] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ad9f0)
00:28:19.450 [2024-07-25 15:24:11.431164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.450 [2024-07-25 15:24:11.431170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:19.450 [2024-07-25 15:24:11.446660] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ad9f0)
00:28:19.450 [2024-07-25 15:24:11.446678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.450 [2024-07-25 15:24:11.446684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:19.450 [2024-07-25 15:24:11.463945] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ad9f0)
00:28:19.450 [2024-07-25 15:24:11.463963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.450 [2024-07-25 15:24:11.463970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:19.450 [2024-07-25 15:24:11.479958] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ad9f0)
00:28:19.450 [2024-07-25 15:24:11.479975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.450 [2024-07-25 15:24:11.479982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:19.450 [2024-07-25 15:24:11.496368] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ad9f0)
00:28:19.450 [2024-07-25 15:24:11.496386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.450 [2024-07-25 15:24:11.496392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:19.450 [2024-07-25 15:24:11.511310] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ad9f0)
00:28:19.450 [2024-07-25 15:24:11.511328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.450 [2024-07-25 15:24:11.511334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:19.451 [2024-07-25 15:24:11.529058] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ad9f0)
00:28:19.451 [2024-07-25 15:24:11.529075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.451 [2024-07-25 15:24:11.529081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:19.451 [2024-07-25 15:24:11.546476] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ad9f0)
00:28:19.451 [2024-07-25 15:24:11.546494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.451 [2024-07-25 15:24:11.546500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:19.451 [2024-07-25 15:24:11.562422] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ad9f0)
00:28:19.451 [2024-07-25 15:24:11.562439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.451 [2024-07-25 15:24:11.562446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:19.451 [2024-07-25 15:24:11.578485] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ad9f0)
00:28:19.451 [2024-07-25 15:24:11.578502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.451 [2024-07-25 15:24:11.578508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:19.451 [2024-07-25 15:24:11.595013] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ad9f0)
00:28:19.451 [2024-07-25 15:24:11.595031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.451 [2024-07-25 15:24:11.595040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:19.451 [2024-07-25 15:24:11.610606] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ad9f0)
00:28:19.451 [2024-07-25 15:24:11.610624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.451 [2024-07-25 15:24:11.610630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:19.451 [2024-07-25 15:24:11.628444] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ad9f0)
00:28:19.451 [2024-07-25 15:24:11.628462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.451 [2024-07-25 15:24:11.628468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:19.712 [2024-07-25 15:24:11.646430] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ad9f0)
00:28:19.712 [2024-07-25 15:24:11.646448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.712 [2024-07-25 15:24:11.646454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:19.712 [2024-07-25 15:24:11.664618] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ad9f0)
00:28:19.712 [2024-07-25 15:24:11.664635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.712 [2024-07-25 15:24:11.664642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:19.712 [2024-07-25 15:24:11.680818] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ad9f0)
00:28:19.712 [2024-07-25 15:24:11.680836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.712 [2024-07-25 15:24:11.680842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:19.712 [2024-07-25 15:24:11.697584] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ad9f0)
00:28:19.712 [2024-07-25 15:24:11.697601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.712 [2024-07-25 15:24:11.697608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:19.712 [2024-07-25 15:24:11.714404] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ad9f0)
00:28:19.712 [2024-07-25 15:24:11.714421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.712 [2024-07-25 15:24:11.714427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:19.712 [2024-07-25 15:24:11.732209] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ad9f0)
00:28:19.712 [2024-07-25 15:24:11.732226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.712 [2024-07-25 15:24:11.732232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:19.712 [2024-07-25 15:24:11.749382] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ad9f0)
00:28:19.712 [2024-07-25 15:24:11.749402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.712 [2024-07-25 15:24:11.749408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:19.712 [2024-07-25 15:24:11.764792] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ad9f0)
00:28:19.712 [2024-07-25 15:24:11.764810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.712 [2024-07-25 15:24:11.764816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:19.712 [2024-07-25 15:24:11.782093] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ad9f0)
00:28:19.712 [2024-07-25 15:24:11.782111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.712 [2024-07-25 15:24:11.782117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:19.712 [2024-07-25 15:24:11.798854] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ad9f0)
00:28:19.712 [2024-07-25 15:24:11.798871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.712 [2024-07-25 15:24:11.798878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:19.712 [2024-07-25 15:24:11.815657] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ad9f0)
00:28:19.713 [2024-07-25 15:24:11.815674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.713 [2024-07-25 15:24:11.815681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:19.713 [2024-07-25 15:24:11.830814] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ad9f0)
00:28:19.713 [2024-07-25 15:24:11.830831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.713 [2024-07-25 15:24:11.830837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:19.713 [2024-07-25 15:24:11.848349] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ad9f0)
00:28:19.713 [2024-07-25 15:24:11.848366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.713 [2024-07-25 15:24:11.848373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:19.713 [2024-07-25 15:24:11.866607] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ad9f0)
00:28:19.713 [2024-07-25 15:24:11.866625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.713 [2024-07-25 15:24:11.866631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:19.713 [2024-07-25 15:24:11.883394] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ad9f0)
00:28:19.713 [2024-07-25 15:24:11.883411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.713 [2024-07-25 15:24:11.883421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:19.713 [2024-07-25 15:24:11.898084] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ad9f0)
00:28:19.713 [2024-07-25 15:24:11.898102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.713 [2024-07-25 15:24:11.898108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:19.974 [2024-07-25 15:24:11.913721] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ad9f0)
00:28:19.974 [2024-07-25 15:24:11.913739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.974 [2024-07-25 15:24:11.913746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:19.974 [2024-07-25 15:24:11.929531] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ad9f0)
00:28:19.974 [2024-07-25 15:24:11.929549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.974 [2024-07-25 15:24:11.929556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:19.974 [2024-07-25 15:24:11.946638] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ad9f0)
00:28:19.974 [2024-07-25 15:24:11.946656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.974 [2024-07-25 15:24:11.946663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:19.974 [2024-07-25 15:24:11.961988] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ad9f0)
00:28:19.974 [2024-07-25 15:24:11.962005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.974 [2024-07-25 15:24:11.962012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:19.974 [2024-07-25 15:24:11.978258] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ad9f0)
00:28:19.974 [2024-07-25 15:24:11.978276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.974 [2024-07-25 15:24:11.978282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:19.974 [2024-07-25 15:24:11.993847] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ad9f0)
00:28:19.974 [2024-07-25 15:24:11.993865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.974 [2024-07-25 15:24:11.993871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:19.974 [2024-07-25 15:24:12.010361] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ad9f0)
00:28:19.974 [2024-07-25 15:24:12.010379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.974 [2024-07-25 15:24:12.010385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:19.974 [2024-07-25 15:24:12.025947] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ad9f0)
00:28:19.974 [2024-07-25 15:24:12.025968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.974 [2024-07-25 15:24:12.025975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*:
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.974 [2024-07-25 15:24:12.042515] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ad9f0) 00:28:19.974 [2024-07-25 15:24:12.042533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.974 [2024-07-25 15:24:12.042539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:19.974 [2024-07-25 15:24:12.057658] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ad9f0) 00:28:19.974 [2024-07-25 15:24:12.057675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.974 [2024-07-25 15:24:12.057681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:19.974 [2024-07-25 15:24:12.074403] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ad9f0) 00:28:19.974 [2024-07-25 15:24:12.074420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.974 [2024-07-25 15:24:12.074427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:19.975 [2024-07-25 15:24:12.089327] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ad9f0) 00:28:19.975 [2024-07-25 15:24:12.089344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.975 [2024-07-25 
15:24:12.089350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.975 00:28:19.975 Latency(us) 00:28:19.975 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:19.975 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:28:19.975 nvme0n1 : 2.01 1855.66 231.96 0.00 0.00 8614.76 6553.60 21736.11 00:28:19.975 =================================================================================================================== 00:28:19.975 Total : 1855.66 231.96 0.00 0.00 8614.76 6553.60 21736.11 00:28:19.975 0 00:28:19.975 15:24:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:28:19.975 15:24:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:28:19.975 15:24:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:28:19.975 | .driver_specific 00:28:19.975 | .nvme_error 00:28:19.975 | .status_code 00:28:19.975 | .command_transient_transport_error' 00:28:19.975 15:24:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:28:20.236 15:24:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 120 > 0 )) 00:28:20.236 15:24:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 421174 00:28:20.236 15:24:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 421174 ']' 00:28:20.236 15:24:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 421174 00:28:20.236 15:24:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:28:20.236 
15:24:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:20.236 15:24:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 421174 00:28:20.236 15:24:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:20.236 15:24:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:28:20.236 15:24:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 421174' 00:28:20.236 killing process with pid 421174 00:28:20.236 15:24:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 421174 00:28:20.236 Received shutdown signal, test time was about 2.000000 seconds 00:28:20.236 00:28:20.236 Latency(us) 00:28:20.236 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:20.236 =================================================================================================================== 00:28:20.236 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:20.236 15:24:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 421174 00:28:20.497 15:24:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:28:20.497 15:24:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:28:20.497 15:24:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:28:20.497 15:24:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:28:20.497 15:24:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:28:20.497 15:24:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- 
# bperfpid=422338 00:28:20.497 15:24:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 422338 /var/tmp/bperf.sock 00:28:20.497 15:24:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 422338 ']' 00:28:20.497 15:24:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:28:20.497 15:24:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:20.497 15:24:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:20.497 15:24:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:20.497 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:20.498 15:24:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:20.498 15:24:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:20.498 [2024-07-25 15:24:12.490804] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:28:20.498 [2024-07-25 15:24:12.490862] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid422338 ] 00:28:20.498 EAL: No free 2048 kB hugepages reported on node 1 00:28:20.498 [2024-07-25 15:24:12.564137] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:20.498 [2024-07-25 15:24:12.617361] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:21.071 15:24:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:21.071 15:24:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:28:21.071 15:24:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:21.071 15:24:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:21.332 15:24:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:21.332 15:24:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:21.332 15:24:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:21.332 15:24:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:21.332 15:24:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:21.332 15:24:13 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:21.596 nvme0n1 00:28:21.596 15:24:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:28:21.596 15:24:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:21.596 15:24:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:21.596 15:24:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:21.596 15:24:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:21.596 15:24:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:21.596 Running I/O for 2 seconds... 
00:28:21.858 [2024-07-25 15:24:13.813676] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f00c0) with pdu=0x2000190feb58
00:28:21.858 [2024-07-25 15:24:13.814528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:497 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:21.858 [2024-07-25 15:24:13.814556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:28:21.858 [2024-07-25 15:24:13.825913] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f00c0) with pdu=0x2000190feb58
00:28:21.858 [2024-07-25 15:24:13.826332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:243 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:21.858 [2024-07-25 15:24:13.826351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:28:21.858 [2024-07-25 15:24:13.838113] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f00c0) with pdu=0x2000190feb58
00:28:21.858 [2024-07-25 15:24:13.838484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18645 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:21.858 [2024-07-25 15:24:13.838501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:28:21.858 [2024-07-25 15:24:13.850338] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f00c0) with pdu=0x2000190feb58
00:28:21.858 [2024-07-25 15:24:13.850767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25492 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:21.858 [2024-07-25 15:24:13.850784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:28:21.858 [2024-07-25 15:24:13.862478] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f00c0) with pdu=0x2000190feb58
00:28:21.858 [2024-07-25 15:24:13.862903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8992 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:21.858 [2024-07-25 15:24:13.862920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:28:21.858 [2024-07-25 15:24:13.874623] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f00c0) with pdu=0x2000190feb58
00:28:21.858 [2024-07-25 15:24:13.875176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16743 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:21.858 [2024-07-25 15:24:13.875192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:28:21.858 [2024-07-25 15:24:13.886788] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f00c0) with pdu=0x2000190feb58
00:28:21.858 [2024-07-25 15:24:13.887087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11354 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:21.858 [2024-07-25 15:24:13.887103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:28:21.858 [2024-07-25 15:24:13.898949] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f00c0) with pdu=0x2000190feb58
00:28:21.858 [2024-07-25 15:24:13.899255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:17459 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:21.858 [2024-07-25 15:24:13.899273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:28:21.858 [2024-07-25 15:24:13.911059] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f00c0) with pdu=0x2000190feb58
00:28:21.858 [2024-07-25 15:24:13.911483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1384 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:21.858 [2024-07-25 15:24:13.911499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:28:21.858 [2024-07-25 15:24:13.923208] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f00c0) with pdu=0x2000190feb58
00:28:21.858 [2024-07-25 15:24:13.923683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3762 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:21.858 [2024-07-25 15:24:13.923700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:28:21.858 [2024-07-25 15:24:13.935336] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f00c0) with pdu=0x2000190feb58
00:28:21.858 [2024-07-25 15:24:13.935644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:7799 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:21.858 [2024-07-25 15:24:13.935660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:28:21.858 [2024-07-25 15:24:13.947464] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f00c0) with pdu=0x2000190feb58
00:28:21.858 [2024-07-25 15:24:13.947804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23044 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:21.858 [2024-07-25 15:24:13.947820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:28:21.858 [2024-07-25 15:24:13.959554] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f00c0) with pdu=0x2000190feb58
00:28:21.858 [2024-07-25 15:24:13.959831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14322 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:21.858 [2024-07-25 15:24:13.959847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:28:21.858 [2024-07-25 15:24:13.971649] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f00c0) with pdu=0x2000190feb58
00:28:21.858 [2024-07-25 15:24:13.971930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:9753 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:21.858 [2024-07-25 15:24:13.971947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:28:21.858 [2024-07-25 15:24:13.983958] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f00c0) with pdu=0x2000190feb58
00:28:21.858 [2024-07-25 15:24:13.984368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14956 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:21.858 [2024-07-25 15:24:13.984384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:28:21.858 [2024-07-25 15:24:13.996085] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f00c0) with pdu=0x2000190feb58
00:28:21.858 [2024-07-25 15:24:13.996497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18725 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:21.858 [2024-07-25 15:24:13.996513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:28:21.858 [2024-07-25 15:24:14.008185] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f00c0) with pdu=0x2000190feb58
00:28:21.858 [2024-07-25 15:24:14.008502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:405 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:21.858 [2024-07-25 15:24:14.008518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:28:21.858 [2024-07-25 15:24:14.020375] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f00c0) with pdu=0x2000190feb58
00:28:21.858 [2024-07-25 15:24:14.020806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9742 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:21.858 [2024-07-25 15:24:14.020821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:28:21.858 [2024-07-25 15:24:14.032510] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f00c0) with pdu=0x2000190feb58
00:28:21.858 [2024-07-25 15:24:14.032795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18984 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:21.858 [2024-07-25 15:24:14.032811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:28:21.858 [2024-07-25 15:24:14.044640] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f00c0) with pdu=0x2000190feb58
00:28:21.858 [2024-07-25 15:24:14.045060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19936 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:21.858 [2024-07-25 15:24:14.045076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:28:22.121 [2024-07-25 15:24:14.056749] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f00c0) with pdu=0x2000190feb58
00:28:22.121 [2024-07-25 15:24:14.057167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2777 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:22.121 [2024-07-25 15:24:14.057183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:28:22.121 [2024-07-25 15:24:14.068850] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f00c0) with pdu=0x2000190feb58
00:28:22.121 [2024-07-25 15:24:14.069298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5173 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:22.121 [2024-07-25 15:24:14.069317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:28:22.121 [2024-07-25 15:24:14.080936] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f00c0) with pdu=0x2000190feb58
00:28:22.121 [2024-07-25 15:24:14.081388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16458 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:22.121 [2024-07-25 15:24:14.081404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:28:22.121 [2024-07-25 15:24:14.093060] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f00c0) with pdu=0x2000190feb58
00:28:22.121 [2024-07-25 15:24:14.093361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1712 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:22.121 [2024-07-25 15:24:14.093376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:28:22.121 [2024-07-25 15:24:14.105168] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f00c0) with pdu=0x2000190feb58
00:28:22.121 [2024-07-25 15:24:14.105581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4086 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:22.121 [2024-07-25 15:24:14.105597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:28:22.121 [2024-07-25 15:24:14.117320] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f00c0) with pdu=0x2000190feb58
00:28:22.121 [2024-07-25 15:24:14.117744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:17250 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:22.121 [2024-07-25 15:24:14.117759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:28:22.121 [2024-07-25 15:24:14.129494] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f00c0) with pdu=0x2000190feb58
00:28:22.121 [2024-07-25 15:24:14.129904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11810 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:22.121 [2024-07-25 15:24:14.129920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:28:22.121 [2024-07-25 15:24:14.141615] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f00c0) with pdu=0x2000190feb58
00:28:22.121 [2024-07-25 15:24:14.142030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25154 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:22.121 [2024-07-25 15:24:14.142046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:28:22.121 [2024-07-25 15:24:14.153751] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f00c0) with pdu=0x2000190feb58
00:28:22.121 [2024-07-25 15:24:14.154177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8914 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:22.121 [2024-07-25 15:24:14.154192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:28:22.121 [2024-07-25 15:24:14.165891] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f00c0) with pdu=0x2000190feb58
00:28:22.121 [2024-07-25 15:24:14.166181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22936 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:22.121 [2024-07-25 15:24:14.166196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:28:22.121 [2024-07-25 15:24:14.177996] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f00c0) with pdu=0x2000190feb58
00:28:22.121 [2024-07-25 15:24:14.178390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18865 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:22.121 [2024-07-25 15:24:14.178408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:28:22.121 [2024-07-25 15:24:14.190167] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f00c0) with pdu=0x2000190feb58
00:28:22.121 [2024-07-25 15:24:14.190577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1394 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:22.121 [2024-07-25 15:24:14.190593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:28:22.121 [2024-07-25 15:24:14.202365] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f00c0) with pdu=0x2000190feb58
00:28:22.121 [2024-07-25 15:24:14.202787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11526 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:22.121 [2024-07-25 15:24:14.202802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:28:22.121 [2024-07-25 15:24:14.214484] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f00c0) with pdu=0x2000190feb58
00:28:22.121 [2024-07-25 15:24:14.214898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13836 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:22.121 [2024-07-25 15:24:14.214913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:28:22.121 [2024-07-25 15:24:14.226517] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f00c0) with pdu=0x2000190feb58
00:28:22.121 [2024-07-25 15:24:14.226937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8100 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:22.121 [2024-07-25 15:24:14.226953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:28:22.121 [2024-07-25 15:24:14.238662] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f00c0) with pdu=0x2000190feb58
00:28:22.121 [2024-07-25 15:24:14.239088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:330 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:22.121 [2024-07-25 15:24:14.239103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:28:22.121 [2024-07-25 15:24:14.250794] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f00c0) with pdu=0x2000190feb58
00:28:22.121 [2024-07-25 15:24:14.251082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10712 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:22.121 [2024-07-25 15:24:14.251098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:28:22.121 [2024-07-25 15:24:14.262916] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f00c0) with pdu=0x2000190feb58
00:28:22.121 [2024-07-25 15:24:14.263211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4895 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:22.121 [2024-07-25 15:24:14.263226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:28:22.121 [2024-07-25 15:24:14.275033] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f00c0) with pdu=0x2000190feb58
00:28:22.121 [2024-07-25 15:24:14.275321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19804 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:22.121 [2024-07-25 15:24:14.275337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:28:22.121 [2024-07-25 15:24:14.287149] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f00c0) with pdu=0x2000190feb58
00:28:22.121 [2024-07-25 15:24:14.287455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18692 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:22.121 [2024-07-25 15:24:14.287471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:28:22.122 [2024-07-25 15:24:14.299290] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f00c0) with pdu=0x2000190feb58
00:28:22.122 [2024-07-25 15:24:14.299578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6209 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:22.122 [2024-07-25 15:24:14.299593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:28:22.383 [2024-07-25 15:24:14.311425] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f00c0) with pdu=0x2000190feb58
00:28:22.383 [2024-07-25 15:24:14.311740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2150 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:22.383 [2024-07-25 15:24:14.311756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:28:22.383 [2024-07-25 15:24:14.323574] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f00c0) with pdu=0x2000190feb58
00:28:22.383 [2024-07-25 15:24:14.323875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9959 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:22.383 [2024-07-25 15:24:14.323891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:28:22.383 [2024-07-25 15:24:14.335727] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f00c0) with pdu=0x2000190feb58
00:28:22.383 [2024-07-25 15:24:14.336031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:17424 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:22.383 [2024-07-25 15:24:14.336047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:28:22.383 [2024-07-25 15:24:14.347852] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f00c0) with pdu=0x2000190feb58
00:28:22.383 [2024-07-25 15:24:14.348265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5351 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:22.383 [2024-07-25 15:24:14.348280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:28:22.383 [2024-07-25 15:24:14.359989] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f00c0) with pdu=0x2000190feb58
00:28:22.383 [2024-07-25 15:24:14.360283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22691 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:22.383 [2024-07-25 15:24:14.360298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:28:22.383 [2024-07-25 15:24:14.372073] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f00c0) with pdu=0x2000190feb58
00:28:22.383 [2024-07-25 15:24:14.372509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:23814 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:22.383 [2024-07-25 15:24:14.372525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:28:22.383 [2024-07-25 15:24:14.384229] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f00c0) with pdu=0x2000190feb58 00:28:22.383 [2024-07-25 15:24:14.384647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24582 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.383 [2024-07-25 15:24:14.384662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:22.383 [2024-07-25 15:24:14.396396] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f00c0) with pdu=0x2000190feb58 00:28:22.383 [2024-07-25 15:24:14.396858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7203 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.383 [2024-07-25 15:24:14.396874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:22.383 [2024-07-25 15:24:14.408509] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f00c0) with pdu=0x2000190feb58 00:28:22.383 [2024-07-25 15:24:14.408819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:972 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.383 [2024-07-25 15:24:14.408834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:22.384 [2024-07-25 15:24:14.420647] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f00c0) with pdu=0x2000190feb58 00:28:22.384 [2024-07-25 15:24:14.420955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2857 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.384 [2024-07-25 15:24:14.420970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:22.384 [2024-07-25 15:24:14.432789] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f00c0) with pdu=0x2000190feb58 00:28:22.384 [2024-07-25 15:24:14.433079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11954 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.384 [2024-07-25 15:24:14.433094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:22.384 [2024-07-25 15:24:14.444863] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f00c0) with pdu=0x2000190feb58 00:28:22.384 [2024-07-25 15:24:14.445281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16379 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.384 [2024-07-25 15:24:14.445297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:22.384 [2024-07-25 15:24:14.457052] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f00c0) with pdu=0x2000190feb58 00:28:22.384 [2024-07-25 15:24:14.457501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20739 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.384 [2024-07-25 15:24:14.457517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:22.384 [2024-07-25 15:24:14.469109] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f00c0) with pdu=0x2000190feb58 00:28:22.384 [2024-07-25 15:24:14.469532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12653 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.384 [2024-07-25 15:24:14.469548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:22.384 [2024-07-25 15:24:14.481257] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f00c0) with pdu=0x2000190feb58 00:28:22.384 [2024-07-25 15:24:14.481639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:9232 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.384 [2024-07-25 15:24:14.481654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:22.384 [2024-07-25 15:24:14.493320] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f00c0) with pdu=0x2000190feb58 00:28:22.384 [2024-07-25 15:24:14.493748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5672 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.384 [2024-07-25 15:24:14.493767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:22.384 [2024-07-25 15:24:14.505469] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f00c0) with pdu=0x2000190feb58 00:28:22.384 [2024-07-25 15:24:14.505904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7323 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.384 [2024-07-25 15:24:14.505920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:22.384 [2024-07-25 15:24:14.517579] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f00c0) with pdu=0x2000190feb58 00:28:22.384 [2024-07-25 15:24:14.517989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:15012 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.384 [2024-07-25 15:24:14.518005] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:22.384 [2024-07-25 15:24:14.529683] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f00c0) with pdu=0x2000190feb58 00:28:22.384 [2024-07-25 15:24:14.530107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10315 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.384 [2024-07-25 15:24:14.530122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:22.384 [2024-07-25 15:24:14.541800] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f00c0) with pdu=0x2000190feb58 00:28:22.384 [2024-07-25 15:24:14.542247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1553 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.384 [2024-07-25 15:24:14.542263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:22.384 [2024-07-25 15:24:14.553933] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f00c0) with pdu=0x2000190feb58 00:28:22.384 [2024-07-25 15:24:14.554221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:9753 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.384 [2024-07-25 15:24:14.554237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:22.384 [2024-07-25 15:24:14.566053] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f00c0) with pdu=0x2000190feb58 00:28:22.384 [2024-07-25 15:24:14.566358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3660 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.384 
[2024-07-25 15:24:14.566373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:22.646 [2024-07-25 15:24:14.578149] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f00c0) with pdu=0x2000190feb58 00:28:22.646 [2024-07-25 15:24:14.578579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25024 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.646 [2024-07-25 15:24:14.578594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:22.646 [2024-07-25 15:24:14.590255] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f00c0) with pdu=0x2000190feb58 00:28:22.646 [2024-07-25 15:24:14.590680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1741 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.646 [2024-07-25 15:24:14.590696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:22.646 [2024-07-25 15:24:14.602378] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f00c0) with pdu=0x2000190feb58 00:28:22.646 [2024-07-25 15:24:14.602788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21039 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.646 [2024-07-25 15:24:14.602804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:22.646 [2024-07-25 15:24:14.614539] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f00c0) with pdu=0x2000190feb58 00:28:22.646 [2024-07-25 15:24:14.614979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17045 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:28:22.646 [2024-07-25 15:24:14.614994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:22.646 [2024-07-25 15:24:14.626677] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f00c0) with pdu=0x2000190feb58 00:28:22.646 [2024-07-25 15:24:14.626979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:18169 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.646 [2024-07-25 15:24:14.626995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:22.646 [2024-07-25 15:24:14.638786] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f00c0) with pdu=0x2000190feb58 00:28:22.646 [2024-07-25 15:24:14.639248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8484 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.646 [2024-07-25 15:24:14.639264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:22.646 [2024-07-25 15:24:14.650891] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f00c0) with pdu=0x2000190feb58 00:28:22.646 [2024-07-25 15:24:14.651350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3266 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.646 [2024-07-25 15:24:14.651366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:22.646 [2024-07-25 15:24:14.663048] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f00c0) with pdu=0x2000190feb58 00:28:22.646 [2024-07-25 15:24:14.663341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 
nsid:1 lba:22111 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.646 [2024-07-25 15:24:14.663357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:22.646 [2024-07-25 15:24:14.675165] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f00c0) with pdu=0x2000190feb58 00:28:22.646 [2024-07-25 15:24:14.675579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17586 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.646 [2024-07-25 15:24:14.675595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:22.646 [2024-07-25 15:24:14.687326] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f00c0) with pdu=0x2000190feb58 00:28:22.646 [2024-07-25 15:24:14.687728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12684 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.646 [2024-07-25 15:24:14.687744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:22.646 [2024-07-25 15:24:14.699451] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f00c0) with pdu=0x2000190feb58 00:28:22.646 [2024-07-25 15:24:14.699870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22244 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.647 [2024-07-25 15:24:14.699886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:22.647 [2024-07-25 15:24:14.711557] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f00c0) with pdu=0x2000190feb58 00:28:22.647 [2024-07-25 15:24:14.711978] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:825 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.647 [2024-07-25 15:24:14.711993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:22.647 [2024-07-25 15:24:14.723680] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f00c0) with pdu=0x2000190feb58 00:28:22.647 [2024-07-25 15:24:14.724131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19414 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.647 [2024-07-25 15:24:14.724147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:22.647 [2024-07-25 15:24:14.735873] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f00c0) with pdu=0x2000190feb58 00:28:22.647 [2024-07-25 15:24:14.736303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12848 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.647 [2024-07-25 15:24:14.736319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:22.647 [2024-07-25 15:24:14.747983] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f00c0) with pdu=0x2000190feb58 00:28:22.647 [2024-07-25 15:24:14.748279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4313 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.647 [2024-07-25 15:24:14.748296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:22.647 [2024-07-25 15:24:14.760115] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f00c0) with pdu=0x2000190feb58 00:28:22.647 
[2024-07-25 15:24:14.760548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10083 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.647 [2024-07-25 15:24:14.760564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:22.647 [2024-07-25 15:24:14.772224] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f00c0) with pdu=0x2000190feb58 00:28:22.647 [2024-07-25 15:24:14.772681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:3969 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.647 [2024-07-25 15:24:14.772697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:22.647 [2024-07-25 15:24:14.784361] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f00c0) with pdu=0x2000190feb58 00:28:22.647 [2024-07-25 15:24:14.784668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1252 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.647 [2024-07-25 15:24:14.784684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:22.647 [2024-07-25 15:24:14.796514] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f00c0) with pdu=0x2000190feb58 00:28:22.647 [2024-07-25 15:24:14.796936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24597 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.647 [2024-07-25 15:24:14.796952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:22.647 [2024-07-25 15:24:14.808650] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f00c0) 
with pdu=0x2000190feb58 00:28:22.647 [2024-07-25 15:24:14.809029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:7115 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.647 [2024-07-25 15:24:14.809045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:22.647 [2024-07-25 15:24:14.820777] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f00c0) with pdu=0x2000190feb58 00:28:22.647 [2024-07-25 15:24:14.821074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21247 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.647 [2024-07-25 15:24:14.821090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:22.647 [2024-07-25 15:24:14.832899] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f00c0) with pdu=0x2000190feb58 00:28:22.647 [2024-07-25 15:24:14.833211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24880 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.647 [2024-07-25 15:24:14.833226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:22.910 [2024-07-25 15:24:14.845105] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f00c0) with pdu=0x2000190feb58 00:28:22.910 [2024-07-25 15:24:14.845403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22843 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.910 [2024-07-25 15:24:14.845420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:22.910 [2024-07-25 15:24:14.857231] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x18f00c0) with pdu=0x2000190feb58 00:28:22.910 [2024-07-25 15:24:14.857677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16914 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.910 [2024-07-25 15:24:14.857694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:22.910 [2024-07-25 15:24:14.869346] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f00c0) with pdu=0x2000190feb58 00:28:22.910 [2024-07-25 15:24:14.869629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17355 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.910 [2024-07-25 15:24:14.869645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:22.910 [2024-07-25 15:24:14.881486] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f00c0) with pdu=0x2000190feb58 00:28:22.910 [2024-07-25 15:24:14.881914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:7757 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.910 [2024-07-25 15:24:14.881929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:22.910 [2024-07-25 15:24:14.893612] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f00c0) with pdu=0x2000190feb58 00:28:22.910 [2024-07-25 15:24:14.893894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9759 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.910 [2024-07-25 15:24:14.893910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:22.910 [2024-07-25 15:24:14.905691] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f00c0) with pdu=0x2000190feb58 00:28:22.910 [2024-07-25 15:24:14.906132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20104 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.910 [2024-07-25 15:24:14.906148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:22.910 [2024-07-25 15:24:14.917808] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f00c0) with pdu=0x2000190feb58 00:28:22.910 [2024-07-25 15:24:14.918120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:911 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.910 [2024-07-25 15:24:14.918139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:22.910 [2024-07-25 15:24:14.929928] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f00c0) with pdu=0x2000190feb58 00:28:22.910 [2024-07-25 15:24:14.930395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18723 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.910 [2024-07-25 15:24:14.930411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:22.910 [2024-07-25 15:24:14.942044] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f00c0) with pdu=0x2000190feb58 00:28:22.910 [2024-07-25 15:24:14.942359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22469 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.910 [2024-07-25 15:24:14.942375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 
00:28:22.910 [2024-07-25 15:24:14.954153] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f00c0) with pdu=0x2000190feb58 00:28:22.910 [2024-07-25 15:24:14.954579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:14891 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.910 [2024-07-25 15:24:14.954595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:22.910 [2024-07-25 15:24:14.966310] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f00c0) with pdu=0x2000190feb58 00:28:22.910 [2024-07-25 15:24:14.966743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12842 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.910 [2024-07-25 15:24:14.966758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:22.910 [2024-07-25 15:24:14.978576] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f00c0) with pdu=0x2000190feb58 00:28:22.910 [2024-07-25 15:24:14.978956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16967 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.910 [2024-07-25 15:24:14.978972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:22.910 [2024-07-25 15:24:14.990711] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f00c0) with pdu=0x2000190feb58 00:28:22.910 [2024-07-25 15:24:14.991119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21597 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.910 [2024-07-25 15:24:14.991135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:28:22.910 [2024-07-25 15:24:15.002857] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f00c0) with pdu=0x2000190feb58
00:28:22.910 [2024-07-25 15:24:15.003142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2218 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:22.910 [2024-07-25 15:24:15.003158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0
[... the same three-line pattern (data digest error on tqpair=(0x18f00c0), WRITE on cid 1/3/4 with varying lba, completion with TRANSIENT TRANSPORT ERROR (00/22)) repeats roughly every 12 ms through 15:24:15.790469; repeated entries omitted ...]
00:28:23.698
00:28:23.698 Latency(us)
00:28:23.698 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:23.698 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:28:23.698 nvme0n1 : 2.01 20915.23 81.70 0.00 0.00 6108.17
5461.33 21517.65 00:28:23.698 =================================================================================================================== 00:28:23.698 Total : 20915.23 81.70 0.00 0.00 6108.17 5461.33 21517.65 00:28:23.698 0 00:28:23.698 15:24:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:28:23.698 15:24:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:28:23.698 15:24:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:28:23.698 | .driver_specific 00:28:23.698 | .nvme_error 00:28:23.698 | .status_code 00:28:23.698 | .command_transient_transport_error' 00:28:23.698 15:24:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:28:23.959 15:24:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 164 > 0 )) 00:28:23.959 15:24:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 422338 00:28:23.959 15:24:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 422338 ']' 00:28:23.959 15:24:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 422338 00:28:23.959 15:24:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:28:23.959 15:24:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:23.959 15:24:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 422338 00:28:23.959 15:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:23.959 15:24:16 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:28:23.959 15:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 422338' 00:28:23.959 killing process with pid 422338 00:28:23.959 15:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 422338 00:28:23.959 Received shutdown signal, test time was about 2.000000 seconds 00:28:23.959 00:28:23.959 Latency(us) 00:28:23.959 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:23.959 =================================================================================================================== 00:28:23.960 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:23.960 15:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 422338 00:28:24.220 15:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:28:24.220 15:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:28:24.220 15:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:28:24.220 15:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:28:24.220 15:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:28:24.220 15:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=423091 00:28:24.220 15:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 423091 /var/tmp/bperf.sock 00:28:24.220 15:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 423091 ']' 00:28:24.220 15:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:28:24.220 15:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:24.220 15:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:24.220 15:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:24.220 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:24.220 15:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:24.220 15:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:24.220 [2024-07-25 15:24:16.201608] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:28:24.220 [2024-07-25 15:24:16.201664] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid423091 ] 00:28:24.220 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:24.220 Zero copy mechanism will not be used. 
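The `get_transient_errcount` helper seen above pipes `bdev_get_iostat` through jq (`.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error`) to count transient transport errors. As a hedged sketch, the same extraction in Python against an illustrative sample document (the field names follow the jq filter in the log; the numbers are made up, not captured from this run):

```python
import json

# Illustrative sample of bdev_get_iostat output; only the fields the
# jq filter touches are reproduced here.
sample = json.loads("""
{
  "bdevs": [
    {
      "name": "nvme0n1",
      "driver_specific": {
        "nvme_error": {
          "status_code": {"command_transient_transport_error": 164}
        }
      }
    }
  ]
}
""")

# Equivalent of: jq -r '.bdevs[0] | .driver_specific | .nvme_error
#                       | .status_code | .command_transient_transport_error'
errcount = (sample["bdevs"][0]["driver_specific"]
            ["nvme_error"]["status_code"]
            ["command_transient_transport_error"])
print(errcount)

# The script then checks (( errcount > 0 )) to decide pass/fail.
assert errcount > 0
```

The test passes only when error injection actually produced transient transport errors, so a zero count here fails the run.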
00:28:24.220 EAL: No free 2048 kB hugepages reported on node 1 00:28:24.220 [2024-07-25 15:24:16.274680] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:24.220 [2024-07-25 15:24:16.327834] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:25.162 15:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:25.162 15:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:28:25.162 15:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:25.162 15:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:25.162 15:24:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:25.162 15:24:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:25.162 15:24:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:25.162 15:24:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:25.162 15:24:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:25.162 15:24:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:25.423 nvme0n1 00:28:25.423 15:24:17 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:28:25.423 15:24:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:25.423 15:24:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:25.423 15:24:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:25.423 15:24:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:25.423 15:24:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:25.423 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:25.423 Zero copy mechanism will not be used. 00:28:25.423 Running I/O for 2 seconds... 
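The "Data digest error" entries that follow are the direct result of the `accel_error_inject_error -o crc32c -t corrupt` call above: NVMe/TCP appends a CRC-32C data digest to each data PDU (enabled here via `--ddgst`), the corrupted digests fail verification on receipt, and the host reports each failure as COMMAND TRANSIENT TRANSPORT ERROR (00/22). A minimal bitwise CRC-32C reference, for illustration only (SPDK's accel layer uses table-driven or hardware-accelerated implementations, not this):

```python
def crc32c(data: bytes, crc: int = 0) -> int:
    """Bitwise CRC-32C (Castagnoli), reflected polynomial 0x82F63B78.
    Slow reference version, one bit per iteration."""
    crc ^= 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ (0x82F63B78 if crc & 1 else 0)
    return crc ^ 0xFFFFFFFF

# Standard CRC-32C check value over the ASCII digits "123456789".
assert crc32c(b"123456789") == 0xE3069283

# Flipping a single payload bit changes the digest, which is what the
# injected corruption triggers on the wire for every PDU below.
payload = bytes(64)
corrupted = bytes([payload[0] ^ 0x01]) + payload[1:]
assert crc32c(payload) != crc32c(corrupted)
print("digest mismatch detected")
```

Because the digest covers only the data PDU payload, the command itself is well-formed; the target can only classify the failure as a transport-level (transient) error, which is why the test counts `command_transient_transport_error` rather than media or command-specific status codes.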
00:28:25.423 [2024-07-25 15:24:17.524438] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f0400) with pdu=0x2000190fef90 00:28:25.423 [2024-07-25 15:24:17.524939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.423 [2024-07-25 15:24:17.524966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:25.423 [2024-07-25 15:24:17.540614] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f0400) with pdu=0x2000190fef90 00:28:25.423 [2024-07-25 15:24:17.540902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.423 [2024-07-25 15:24:17.540922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:25.423 [2024-07-25 15:24:17.555451] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f0400) with pdu=0x2000190fef90 00:28:25.423 [2024-07-25 15:24:17.555781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.423 [2024-07-25 15:24:17.555799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:25.423 [2024-07-25 15:24:17.568964] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f0400) with pdu=0x2000190fef90 00:28:25.423 [2024-07-25 15:24:17.569240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.423 [2024-07-25 15:24:17.569258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:25.423 [2024-07-25 15:24:17.583309] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f0400) with pdu=0x2000190fef90 00:28:25.424 [2024-07-25 15:24:17.583695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.424 [2024-07-25 15:24:17.583712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:25.424 [2024-07-25 15:24:17.597591] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f0400) with pdu=0x2000190fef90 00:28:25.424 [2024-07-25 15:24:17.597843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.424 [2024-07-25 15:24:17.597861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:25.424 [2024-07-25 15:24:17.611458] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f0400) with pdu=0x2000190fef90 00:28:25.424 [2024-07-25 15:24:17.611742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.424 [2024-07-25 15:24:17.611760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:25.685 [2024-07-25 15:24:17.625633] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f0400) with pdu=0x2000190fef90 00:28:25.685 [2024-07-25 15:24:17.625869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.685 [2024-07-25 15:24:17.625884] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:25.685 [2024-07-25 15:24:17.639640] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f0400) with pdu=0x2000190fef90 00:28:25.685 [2024-07-25 15:24:17.639904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.685 [2024-07-25 15:24:17.639922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:25.685 [2024-07-25 15:24:17.653012] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f0400) with pdu=0x2000190fef90 00:28:25.685 [2024-07-25 15:24:17.653319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.685 [2024-07-25 15:24:17.653337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:25.685 [2024-07-25 15:24:17.666928] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f0400) with pdu=0x2000190fef90 00:28:25.685 [2024-07-25 15:24:17.667179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.686 [2024-07-25 15:24:17.667196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:25.686 [2024-07-25 15:24:17.679977] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f0400) with pdu=0x2000190fef90 00:28:25.686 [2024-07-25 15:24:17.680277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:25.686 [2024-07-25 15:24:17.680294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:25.686 [2024-07-25 15:24:17.694528] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f0400) with pdu=0x2000190fef90 00:28:25.686 [2024-07-25 15:24:17.694812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.686 [2024-07-25 15:24:17.694829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:25.686 [2024-07-25 15:24:17.708572] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f0400) with pdu=0x2000190fef90 00:28:25.686 [2024-07-25 15:24:17.708917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.686 [2024-07-25 15:24:17.708934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:25.686 [2024-07-25 15:24:17.723052] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f0400) with pdu=0x2000190fef90 00:28:25.686 [2024-07-25 15:24:17.723297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.686 [2024-07-25 15:24:17.723317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:25.686 [2024-07-25 15:24:17.737943] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f0400) with pdu=0x2000190fef90 00:28:25.686 [2024-07-25 15:24:17.738182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.686 [2024-07-25 15:24:17.738198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:25.686 [2024-07-25 15:24:17.751321] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f0400) with pdu=0x2000190fef90 00:28:25.686 [2024-07-25 15:24:17.751574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.686 [2024-07-25 15:24:17.751592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:25.686 [2024-07-25 15:24:17.764772] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f0400) with pdu=0x2000190fef90 00:28:25.686 [2024-07-25 15:24:17.765044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.686 [2024-07-25 15:24:17.765061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:25.686 [2024-07-25 15:24:17.777957] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f0400) with pdu=0x2000190fef90 00:28:25.686 [2024-07-25 15:24:17.778212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.686 [2024-07-25 15:24:17.778228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:25.686 [2024-07-25 15:24:17.792261] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f0400) with pdu=0x2000190fef90 00:28:25.686 [2024-07-25 15:24:17.792512] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.686 [2024-07-25 15:24:17.792530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:25.686 [2024-07-25 15:24:17.806250] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f0400) with pdu=0x2000190fef90 00:28:25.686 [2024-07-25 15:24:17.806502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.686 [2024-07-25 15:24:17.806519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:25.686 [2024-07-25 15:24:17.820328] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f0400) with pdu=0x2000190fef90 00:28:25.686 [2024-07-25 15:24:17.820688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.686 [2024-07-25 15:24:17.820706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:25.686 [2024-07-25 15:24:17.834052] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f0400) with pdu=0x2000190fef90 00:28:25.686 [2024-07-25 15:24:17.834309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.686 [2024-07-25 15:24:17.834326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:25.686 [2024-07-25 15:24:17.848505] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f0400) with pdu=0x2000190fef90 
00:28:25.686 [2024-07-25 15:24:17.848787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.686 [2024-07-25 15:24:17.848804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:25.686 [2024-07-25 15:24:17.862306] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f0400) with pdu=0x2000190fef90 00:28:25.686 [2024-07-25 15:24:17.862588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.686 [2024-07-25 15:24:17.862604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:25.686 [2024-07-25 15:24:17.875617] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f0400) with pdu=0x2000190fef90 00:28:25.948 [2024-07-25 15:24:17.875868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.948 [2024-07-25 15:24:17.875886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:25.948 [2024-07-25 15:24:17.890059] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f0400) with pdu=0x2000190fef90 00:28:25.948 [2024-07-25 15:24:17.890314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.948 [2024-07-25 15:24:17.890329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:25.948 [2024-07-25 15:24:17.903630] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x18f0400) with pdu=0x2000190fef90 00:28:25.948 [2024-07-25 15:24:17.904078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.948 [2024-07-25 15:24:17.904096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:25.948 [2024-07-25 15:24:17.917668] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f0400) with pdu=0x2000190fef90 00:28:25.948 [2024-07-25 15:24:17.917921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.948 [2024-07-25 15:24:17.917938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:25.948 [2024-07-25 15:24:17.930990] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f0400) with pdu=0x2000190fef90 00:28:25.948 [2024-07-25 15:24:17.931435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.948 [2024-07-25 15:24:17.931452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:25.948 [2024-07-25 15:24:17.945625] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f0400) with pdu=0x2000190fef90 00:28:25.948 [2024-07-25 15:24:17.946020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.948 [2024-07-25 15:24:17.946038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:25.948 [2024-07-25 
15:24:17.959696] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f0400) with pdu=0x2000190fef90 00:28:25.948 [2024-07-25 15:24:17.959970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.948 [2024-07-25 15:24:17.959991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:25.948 [2024-07-25 15:24:17.972433] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f0400) with pdu=0x2000190fef90 00:28:25.948 [2024-07-25 15:24:17.972616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.948 [2024-07-25 15:24:17.972632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:25.948 [2024-07-25 15:24:17.986354] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f0400) with pdu=0x2000190fef90 00:28:25.948 [2024-07-25 15:24:17.986583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.948 [2024-07-25 15:24:17.986598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:25.948 [2024-07-25 15:24:18.000747] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f0400) with pdu=0x2000190fef90 00:28:25.948 [2024-07-25 15:24:18.000998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.948 [2024-07-25 15:24:18.001016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:25.948 [2024-07-25 15:24:18.013363] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f0400) with pdu=0x2000190fef90 00:28:25.948 [2024-07-25 15:24:18.013614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.948 [2024-07-25 15:24:18.013631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:25.949 [2024-07-25 15:24:18.027168] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f0400) with pdu=0x2000190fef90 00:28:25.949 [2024-07-25 15:24:18.027430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.949 [2024-07-25 15:24:18.027447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:25.949 [2024-07-25 15:24:18.041451] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f0400) with pdu=0x2000190fef90 00:28:25.949 [2024-07-25 15:24:18.041740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.949 [2024-07-25 15:24:18.041756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:25.949 [2024-07-25 15:24:18.056397] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f0400) with pdu=0x2000190fef90 00:28:25.949 [2024-07-25 15:24:18.056753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.949 [2024-07-25 15:24:18.056770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:25.949 [2024-07-25 15:24:18.070659] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f0400) with pdu=0x2000190fef90 00:28:25.949 [2024-07-25 15:24:18.070911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.949 [2024-07-25 15:24:18.070928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:25.949 [2024-07-25 15:24:18.083131] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f0400) with pdu=0x2000190fef90 00:28:25.949 [2024-07-25 15:24:18.083496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.949 [2024-07-25 15:24:18.083513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:25.949 [2024-07-25 15:24:18.097331] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f0400) with pdu=0x2000190fef90 00:28:25.949 [2024-07-25 15:24:18.097693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.949 [2024-07-25 15:24:18.097710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:25.949 [2024-07-25 15:24:18.110257] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f0400) with pdu=0x2000190fef90 00:28:25.949 [2024-07-25 15:24:18.110598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.949 [2024-07-25 15:24:18.110614] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:25.949 [2024-07-25 15:24:18.125643] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f0400) with pdu=0x2000190fef90 00:28:25.949 [2024-07-25 15:24:18.125926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.949 [2024-07-25 15:24:18.125943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.211 [2024-07-25 15:24:18.138873] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f0400) with pdu=0x2000190fef90 00:28:26.211 [2024-07-25 15:24:18.139223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.211 [2024-07-25 15:24:18.139241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:26.211 [2024-07-25 15:24:18.153252] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f0400) with pdu=0x2000190fef90 00:28:26.211 [2024-07-25 15:24:18.153625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.211 [2024-07-25 15:24:18.153643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:26.211 [2024-07-25 15:24:18.166352] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f0400) with pdu=0x2000190fef90 00:28:26.211 [2024-07-25 15:24:18.166722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:26.211 [2024-07-25 15:24:18.166740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:26.211 [2024-07-25 15:24:18.179220] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f0400) with pdu=0x2000190fef90 00:28:26.211 [2024-07-25 15:24:18.179641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.211 [2024-07-25 15:24:18.179659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.211 [2024-07-25 15:24:18.191985] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f0400) with pdu=0x2000190fef90 00:28:26.211 [2024-07-25 15:24:18.192414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.211 [2024-07-25 15:24:18.192430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:26.211 [2024-07-25 15:24:18.205588] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f0400) with pdu=0x2000190fef90 00:28:26.211 [2024-07-25 15:24:18.206000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.211 [2024-07-25 15:24:18.206018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:26.211 [2024-07-25 15:24:18.219887] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f0400) with pdu=0x2000190fef90 00:28:26.211 [2024-07-25 15:24:18.220212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.211 [2024-07-25 15:24:18.220230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:26.211 [2024-07-25 15:24:18.233939] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f0400) with pdu=0x2000190fef90
00:28:26.211 [2024-07-25 15:24:18.234407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.211 [2024-07-25 15:24:18.234425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:26.211 [2024-07-25 15:24:18.248612] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f0400) with pdu=0x2000190fef90
00:28:26.211 [2024-07-25 15:24:18.248999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.211 [2024-07-25 15:24:18.249016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:26.211 [2024-07-25 15:24:18.261266] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f0400) with pdu=0x2000190fef90
00:28:26.211 [2024-07-25 15:24:18.261643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.211 [2024-07-25 15:24:18.261660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:26.211 [2024-07-25 15:24:18.274940] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f0400) with pdu=0x2000190fef90
00:28:26.211 [2024-07-25 15:24:18.275308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.211 [2024-07-25 15:24:18.275325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:26.211 [2024-07-25 15:24:18.289615] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f0400) with pdu=0x2000190fef90
00:28:26.211 [2024-07-25 15:24:18.289927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.211 [2024-07-25 15:24:18.289944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:26.211 [2024-07-25 15:24:18.302656] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f0400) with pdu=0x2000190fef90
00:28:26.211 [2024-07-25 15:24:18.302946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.211 [2024-07-25 15:24:18.302963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:26.211 [2024-07-25 15:24:18.316155] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f0400) with pdu=0x2000190fef90
00:28:26.211 [2024-07-25 15:24:18.316467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.211 [2024-07-25 15:24:18.316486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:26.211 [2024-07-25 15:24:18.330153] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f0400) with pdu=0x2000190fef90
00:28:26.211 [2024-07-25 15:24:18.330442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.211 [2024-07-25 15:24:18.330459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:26.211 [2024-07-25 15:24:18.343961] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f0400) with pdu=0x2000190fef90
00:28:26.211 [2024-07-25 15:24:18.344361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.211 [2024-07-25 15:24:18.344378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:26.211 [2024-07-25 15:24:18.357195] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f0400) with pdu=0x2000190fef90
00:28:26.211 [2024-07-25 15:24:18.357578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.211 [2024-07-25 15:24:18.357595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:26.211 [2024-07-25 15:24:18.369854] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f0400) with pdu=0x2000190fef90
00:28:26.211 [2024-07-25 15:24:18.370245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.211 [2024-07-25 15:24:18.370263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:26.211 [2024-07-25 15:24:18.383092] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f0400) with pdu=0x2000190fef90
00:28:26.211 [2024-07-25 15:24:18.383440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.211 [2024-07-25 15:24:18.383457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:26.211 [2024-07-25 15:24:18.397554] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f0400) with pdu=0x2000190fef90
00:28:26.212 [2024-07-25 15:24:18.397825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.212 [2024-07-25 15:24:18.397842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:26.473 [2024-07-25 15:24:18.411805] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f0400) with pdu=0x2000190fef90
00:28:26.473 [2024-07-25 15:24:18.412118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.473 [2024-07-25 15:24:18.412134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:26.473 [2024-07-25 15:24:18.427190] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f0400) with pdu=0x2000190fef90
00:28:26.473 [2024-07-25 15:24:18.427668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.473 [2024-07-25 15:24:18.427684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:26.473 [2024-07-25 15:24:18.441650] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f0400) with pdu=0x2000190fef90
00:28:26.473 [2024-07-25 15:24:18.442024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.473 [2024-07-25 15:24:18.442041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:26.473 [2024-07-25 15:24:18.455130] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f0400) with pdu=0x2000190fef90
00:28:26.473 [2024-07-25 15:24:18.455610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.473 [2024-07-25 15:24:18.455627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:26.473 [2024-07-25 15:24:18.469731] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f0400) with pdu=0x2000190fef90
00:28:26.473 [2024-07-25 15:24:18.470107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.473 [2024-07-25 15:24:18.470124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:26.473 [2024-07-25 15:24:18.484296] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f0400) with pdu=0x2000190fef90
00:28:26.473 [2024-07-25 15:24:18.484587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.473 [2024-07-25 15:24:18.484603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:26.473 [2024-07-25 15:24:18.499813] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f0400) with pdu=0x2000190fef90
00:28:26.473 [2024-07-25 15:24:18.500209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.473 [2024-07-25 15:24:18.500226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:26.473 [2024-07-25 15:24:18.513513] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f0400) with pdu=0x2000190fef90
00:28:26.473 [2024-07-25 15:24:18.513942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.473 [2024-07-25 15:24:18.513959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:26.473 [2024-07-25 15:24:18.526913] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f0400) with pdu=0x2000190fef90
00:28:26.473 [2024-07-25 15:24:18.527288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.473 [2024-07-25 15:24:18.527303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:26.473 [2024-07-25 15:24:18.541818] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f0400) with pdu=0x2000190fef90
00:28:26.473 [2024-07-25 15:24:18.542308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.474 [2024-07-25 15:24:18.542325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:26.474 [2024-07-25 15:24:18.556693] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f0400) with pdu=0x2000190fef90
00:28:26.474 [2024-07-25 15:24:18.557096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.474 [2024-07-25 15:24:18.557112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:26.474 [2024-07-25 15:24:18.571464] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f0400) with pdu=0x2000190fef90
00:28:26.474 [2024-07-25 15:24:18.571751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.474 [2024-07-25 15:24:18.571768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:26.474 [2024-07-25 15:24:18.584304] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f0400) with pdu=0x2000190fef90
00:28:26.474 [2024-07-25 15:24:18.584535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.474 [2024-07-25 15:24:18.584551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:26.474 [2024-07-25 15:24:18.596833] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f0400) with pdu=0x2000190fef90
00:28:26.474 [2024-07-25 15:24:18.597125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.474 [2024-07-25 15:24:18.597140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:26.474 [2024-07-25 15:24:18.611330] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f0400) with pdu=0x2000190fef90
00:28:26.474 [2024-07-25 15:24:18.611718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.474 [2024-07-25 15:24:18.611735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:26.474 [2024-07-25 15:24:18.624199] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f0400) with pdu=0x2000190fef90
00:28:26.474 [2024-07-25 15:24:18.624585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.474 [2024-07-25 15:24:18.624602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:26.474 [2024-07-25 15:24:18.637991] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f0400) with pdu=0x2000190fef90
00:28:26.474 [2024-07-25 15:24:18.638362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.474 [2024-07-25 15:24:18.638379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:26.474 [2024-07-25 15:24:18.651271] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f0400) with pdu=0x2000190fef90
00:28:26.474 [2024-07-25 15:24:18.651607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.474 [2024-07-25 15:24:18.651623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:26.736 [2024-07-25 15:24:18.663714] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f0400) with pdu=0x2000190fef90
00:28:26.736 [2024-07-25 15:24:18.664013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.736 [2024-07-25 15:24:18.664030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:26.736 [2024-07-25 15:24:18.676966] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f0400) with pdu=0x2000190fef90
00:28:26.736 [2024-07-25 15:24:18.677373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.736 [2024-07-25 15:24:18.677393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:26.736 [2024-07-25 15:24:18.690943] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f0400) with pdu=0x2000190fef90
00:28:26.736 [2024-07-25 15:24:18.691204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.736 [2024-07-25 15:24:18.691222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:26.736 [2024-07-25 15:24:18.703563] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f0400) with pdu=0x2000190fef90
00:28:26.736 [2024-07-25 15:24:18.703790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.736 [2024-07-25 15:24:18.703804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:26.736 [2024-07-25 15:24:18.716184] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f0400) with pdu=0x2000190fef90
00:28:26.736 [2024-07-25 15:24:18.716487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.736 [2024-07-25 15:24:18.716504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:26.736 [2024-07-25 15:24:18.728977] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f0400) with pdu=0x2000190fef90
00:28:26.736 [2024-07-25 15:24:18.729234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.736 [2024-07-25 15:24:18.729251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:26.736 [2024-07-25 15:24:18.743402] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f0400) with pdu=0x2000190fef90
00:28:26.736 [2024-07-25 15:24:18.743901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.736 [2024-07-25 15:24:18.743918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:26.736 [2024-07-25 15:24:18.757968] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f0400) with pdu=0x2000190fef90
00:28:26.736 [2024-07-25 15:24:18.758342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.736 [2024-07-25 15:24:18.758359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:26.736 [2024-07-25 15:24:18.771468] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f0400) with pdu=0x2000190fef90
00:28:26.736 [2024-07-25 15:24:18.771844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.736 [2024-07-25 15:24:18.771861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:26.736 [2024-07-25 15:24:18.785585] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f0400) with pdu=0x2000190fef90
00:28:26.736 [2024-07-25 15:24:18.785848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.736 [2024-07-25 15:24:18.785866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:26.736 [2024-07-25 15:24:18.800155] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f0400) with pdu=0x2000190fef90
00:28:26.736 [2024-07-25 15:24:18.800444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.736 [2024-07-25 15:24:18.800461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:26.736 [2024-07-25 15:24:18.813655] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f0400) with pdu=0x2000190fef90
00:28:26.736 [2024-07-25 15:24:18.813887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.736 [2024-07-25 15:24:18.813903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:26.737 [2024-07-25 15:24:18.826600] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f0400) with pdu=0x2000190fef90
00:28:26.737 [2024-07-25 15:24:18.826907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.737 [2024-07-25 15:24:18.826922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:26.737 [2024-07-25 15:24:18.839471] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f0400) with pdu=0x2000190fef90
00:28:26.737 [2024-07-25 15:24:18.839839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.737 [2024-07-25 15:24:18.839855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:26.737 [2024-07-25 15:24:18.852570] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f0400) with pdu=0x2000190fef90
00:28:26.737 [2024-07-25 15:24:18.852839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.737 [2024-07-25 15:24:18.852854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:26.737 [2024-07-25 15:24:18.865643] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f0400) with pdu=0x2000190fef90
00:28:26.737 [2024-07-25 15:24:18.865996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.737 [2024-07-25 15:24:18.866012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:26.737 [2024-07-25 15:24:18.879472] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f0400) with pdu=0x2000190fef90
00:28:26.737 [2024-07-25 15:24:18.879865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.737 [2024-07-25 15:24:18.879881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:26.737 [2024-07-25 15:24:18.892848] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f0400) with pdu=0x2000190fef90
00:28:26.737 [2024-07-25 15:24:18.893261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.737 [2024-07-25 15:24:18.893278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:26.737 [2024-07-25 15:24:18.905622] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f0400) with pdu=0x2000190fef90
00:28:26.737 [2024-07-25 15:24:18.905912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.737 [2024-07-25 15:24:18.905929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:26.737 [2024-07-25 15:24:18.919401] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f0400) with pdu=0x2000190fef90
00:28:26.737 [2024-07-25 15:24:18.919847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.737 [2024-07-25 15:24:18.919864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:26.999 [2024-07-25 15:24:18.932462] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f0400) with pdu=0x2000190fef90
00:28:26.999 [2024-07-25 15:24:18.932750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.999 [2024-07-25 15:24:18.932766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:26.999 [2024-07-25 15:24:18.946299] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f0400) with pdu=0x2000190fef90
00:28:26.999 [2024-07-25 15:24:18.946553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.999 [2024-07-25 15:24:18.946569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:26.999 [2024-07-25 15:24:18.958021] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f0400) with pdu=0x2000190fef90
00:28:26.999 [2024-07-25 15:24:18.958358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.999 [2024-07-25 15:24:18.958373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:26.999 [2024-07-25 15:24:18.970883] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f0400) with pdu=0x2000190fef90
00:28:26.999 [2024-07-25 15:24:18.971276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.999 [2024-07-25 15:24:18.971292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:26.999 [2024-07-25 15:24:18.983967] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f0400) with pdu=0x2000190fef90
00:28:26.999 [2024-07-25 15:24:18.984146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.999 [2024-07-25 15:24:18.984161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:26.999 [2024-07-25 15:24:18.997756] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f0400) with pdu=0x2000190fef90
00:28:26.999 [2024-07-25 15:24:18.998088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.999 [2024-07-25 15:24:18.998103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:26.999 [2024-07-25 15:24:19.012008] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f0400) with pdu=0x2000190fef90
00:28:26.999 [2024-07-25 15:24:19.012369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.999 [2024-07-25 15:24:19.012386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:26.999 [2024-07-25 15:24:19.025317] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f0400) with pdu=0x2000190fef90
00:28:26.999 [2024-07-25 15:24:19.025730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.999 [2024-07-25 15:24:19.025750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:26.999 [2024-07-25 15:24:19.038952] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f0400) with pdu=0x2000190fef90
00:28:26.999 [2024-07-25 15:24:19.039306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.999 [2024-07-25 15:24:19.039323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:26.999 [2024-07-25 15:24:19.053125] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f0400) with pdu=0x2000190fef90
00:28:26.999 [2024-07-25 15:24:19.053549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.999 [2024-07-25 15:24:19.053566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:26.999 [2024-07-25 15:24:19.066627] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f0400) with pdu=0x2000190fef90
00:28:26.999 [2024-07-25 15:24:19.066950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.999 [2024-07-25 15:24:19.066966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:26.999 [2024-07-25 15:24:19.079379] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f0400) with pdu=0x2000190fef90
00:28:26.999 [2024-07-25 15:24:19.079555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.999 [2024-07-25 15:24:19.079570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:26.999 [2024-07-25 15:24:19.092583] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f0400) with pdu=0x2000190fef90
00:28:26.999 [2024-07-25 15:24:19.092989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.999 [2024-07-25 15:24:19.093006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:26.999 [2024-07-25 15:24:19.106017] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f0400) with pdu=0x2000190fef90
00:28:26.999 [2024-07-25 15:24:19.106361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.999 [2024-07-25 15:24:19.106378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:26.999 [2024-07-25 15:24:19.118996] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f0400) with pdu=0x2000190fef90
00:28:26.999 [2024-07-25 15:24:19.119182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.999 [2024-07-25 15:24:19.119197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:26.999 [2024-07-25 15:24:19.132224] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f0400) with pdu=0x2000190fef90
00:28:26.999 [2024-07-25 15:24:19.132583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.999 [2024-07-25 15:24:19.132600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:26.999 [2024-07-25 15:24:19.146021] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f0400) with pdu=0x2000190fef90
00:28:26.999 [2024-07-25 15:24:19.146478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.999 [2024-07-25 15:24:19.146495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:26.999 [2024-07-25 15:24:19.159037] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f0400) with pdu=0x2000190fef90
00:28:26.999 [2024-07-25 15:24:19.159436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.999 [2024-07-25 15:24:19.159453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:26.999 [2024-07-25 15:24:19.171821] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f0400) with pdu=0x2000190fef90
00:28:26.999 [2024-07-25 15:24:19.172239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.999 [2024-07-25 15:24:19.172256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:26.999 [2024-07-25 15:24:19.184956] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f0400) with pdu=0x2000190fef90
00:28:26.999 [2024-07-25 15:24:19.185302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.999 [2024-07-25 15:24:19.185320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:27.261 [2024-07-25 15:24:19.198310] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f0400) with pdu=0x2000190fef90
00:28:27.261 [2024-07-25 15:24:19.198716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.261 [2024-07-25 15:24:19.198733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:27.261 [2024-07-25 15:24:19.212464] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f0400) with pdu=0x2000190fef90
00:28:27.262 [2024-07-25 15:24:19.212744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.262 [2024-07-25 15:24:19.212761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:27.262 [2024-07-25 15:24:19.226691] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f0400) with pdu=0x2000190fef90
00:28:27.262 [2024-07-25 15:24:19.226943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.262 [2024-07-25 15:24:19.226960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:27.262 [2024-07-25 15:24:19.239825] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f0400) with pdu=0x2000190fef90
00:28:27.262 [2024-07-25 15:24:19.240076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.262 [2024-07-25 15:24:19.240093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:27.262 [2024-07-25 15:24:19.251769] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f0400) with pdu=0x2000190fef90
00:28:27.262 [2024-07-25 15:24:19.252021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.262 [2024-07-25 15:24:19.252040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:27.262 [2024-07-25 15:24:19.265395] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f0400) with pdu=0x2000190fef90
00:28:27.262 [2024-07-25 15:24:19.265799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.262 [2024-07-25 15:24:19.265816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:27.262 [2024-07-25 15:24:19.278327] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f0400) with pdu=0x2000190fef90
00:28:27.262 [2024-07-25 15:24:19.278466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.262 [2024-07-25 15:24:19.278481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:27.262 [2024-07-25 15:24:19.292006] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f0400) with pdu=0x2000190fef90 00:28:27.262 [2024-07-25 15:24:19.292388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.262 [2024-07-25 15:24:19.292405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:27.262 [2024-07-25 15:24:19.305934] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f0400) with pdu=0x2000190fef90 00:28:27.262 [2024-07-25 15:24:19.306284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.262 [2024-07-25 15:24:19.306301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:27.262 [2024-07-25 15:24:19.317884] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f0400) with pdu=0x2000190fef90 00:28:27.262 [2024-07-25 15:24:19.318360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.262 [2024-07-25 15:24:19.318377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.262 [2024-07-25 15:24:19.331567] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x18f0400) with pdu=0x2000190fef90 00:28:27.262 [2024-07-25 15:24:19.331930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.262 [2024-07-25 15:24:19.331946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:27.262 [2024-07-25 15:24:19.346414] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f0400) with pdu=0x2000190fef90 00:28:27.262 [2024-07-25 15:24:19.346793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.262 [2024-07-25 15:24:19.346811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:27.262 [2024-07-25 15:24:19.360096] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f0400) with pdu=0x2000190fef90 00:28:27.262 [2024-07-25 15:24:19.360337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.262 [2024-07-25 15:24:19.360352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:27.262 [2024-07-25 15:24:19.374095] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f0400) with pdu=0x2000190fef90 00:28:27.262 [2024-07-25 15:24:19.374480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.262 [2024-07-25 15:24:19.374497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.262 [2024-07-25 
15:24:19.389043] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f0400) with pdu=0x2000190fef90 00:28:27.262 [2024-07-25 15:24:19.389309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.262 [2024-07-25 15:24:19.389325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:27.262 [2024-07-25 15:24:19.402741] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f0400) with pdu=0x2000190fef90 00:28:27.262 [2024-07-25 15:24:19.402971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.262 [2024-07-25 15:24:19.402986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:27.262 [2024-07-25 15:24:19.415761] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f0400) with pdu=0x2000190fef90 00:28:27.262 [2024-07-25 15:24:19.416217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.262 [2024-07-25 15:24:19.416235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:27.262 [2024-07-25 15:24:19.430620] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f0400) with pdu=0x2000190fef90 00:28:27.262 [2024-07-25 15:24:19.430855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.262 [2024-07-25 15:24:19.430870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.262 [2024-07-25 15:24:19.444918] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f0400) with pdu=0x2000190fef90 00:28:27.262 [2024-07-25 15:24:19.445145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.262 [2024-07-25 15:24:19.445160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:27.523 [2024-07-25 15:24:19.459018] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f0400) with pdu=0x2000190fef90 00:28:27.523 [2024-07-25 15:24:19.459275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.523 [2024-07-25 15:24:19.459292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:27.523 [2024-07-25 15:24:19.472656] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f0400) with pdu=0x2000190fef90 00:28:27.523 [2024-07-25 15:24:19.473016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.523 [2024-07-25 15:24:19.473034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:27.523 [2024-07-25 15:24:19.485654] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f0400) with pdu=0x2000190fef90 00:28:27.523 [2024-07-25 15:24:19.485934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.523 [2024-07-25 15:24:19.485951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.523 [2024-07-25 15:24:19.498745] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f0400) with pdu=0x2000190fef90 00:28:27.523 [2024-07-25 15:24:19.498977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.523 [2024-07-25 15:24:19.498994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:27.523 00:28:27.523 Latency(us) 00:28:27.523 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:27.524 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:28:27.524 nvme0n1 : 2.01 2242.89 280.36 0.00 0.00 7120.51 5597.87 23156.05 00:28:27.524 =================================================================================================================== 00:28:27.524 Total : 2242.89 280.36 0.00 0.00 7120.51 5597.87 23156.05 00:28:27.524 0 00:28:27.524 15:24:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:28:27.524 15:24:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:28:27.524 15:24:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:28:27.524 | .driver_specific 00:28:27.524 | .nvme_error 00:28:27.524 | .status_code 00:28:27.524 | .command_transient_transport_error' 00:28:27.524 15:24:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:28:27.524 15:24:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 145 > 0 )) 00:28:27.524 15:24:19 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 423091 00:28:27.524 15:24:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 423091 ']' 00:28:27.524 15:24:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 423091 00:28:27.524 15:24:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:28:27.524 15:24:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:27.524 15:24:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 423091 00:28:27.788 15:24:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:27.788 15:24:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:28:27.788 15:24:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 423091' 00:28:27.788 killing process with pid 423091 00:28:27.788 15:24:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 423091 00:28:27.788 Received shutdown signal, test time was about 2.000000 seconds 00:28:27.788 00:28:27.788 Latency(us) 00:28:27.788 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:27.788 =================================================================================================================== 00:28:27.788 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:27.788 15:24:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 423091 00:28:27.788 15:24:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 420466 00:28:27.788 15:24:19 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 420466 ']' 00:28:27.788 15:24:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 420466 00:28:27.788 15:24:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:28:27.788 15:24:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:27.788 15:24:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 420466 00:28:27.789 15:24:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:27.789 15:24:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:27.789 15:24:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 420466' 00:28:27.789 killing process with pid 420466 00:28:27.789 15:24:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 420466 00:28:27.789 15:24:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 420466 00:28:28.130 00:28:28.130 real 0m15.390s 00:28:28.130 user 0m30.911s 00:28:28.130 sys 0m3.083s 00:28:28.130 15:24:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:28.130 15:24:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:28.130 ************************************ 00:28:28.130 END TEST nvmf_digest_error 00:28:28.130 ************************************ 00:28:28.130 15:24:20 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:28:28.130 15:24:20 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 
00:28:28.130 15:24:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:28.130 15:24:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:28:28.130 15:24:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:28.130 15:24:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:28:28.130 15:24:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:28.130 15:24:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:28.130 rmmod nvme_tcp 00:28:28.130 rmmod nvme_fabrics 00:28:28.130 rmmod nvme_keyring 00:28:28.130 15:24:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:28.130 15:24:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:28:28.130 15:24:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:28:28.130 15:24:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 420466 ']' 00:28:28.130 15:24:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 420466 00:28:28.130 15:24:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@950 -- # '[' -z 420466 ']' 00:28:28.130 15:24:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # kill -0 420466 00:28:28.130 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (420466) - No such process 00:28:28.130 15:24:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@977 -- # echo 'Process with pid 420466 is not found' 00:28:28.130 Process with pid 420466 is not found 00:28:28.130 15:24:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:28.130 15:24:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:28.130 15:24:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:28.130 15:24:20 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:28.130 15:24:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:28.131 15:24:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:28.131 15:24:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:28.131 15:24:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:30.679 15:24:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:30.680 00:28:30.680 real 0m40.891s 00:28:30.680 user 1m4.592s 00:28:30.680 sys 0m11.465s 00:28:30.680 15:24:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:30.680 15:24:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:30.680 ************************************ 00:28:30.680 END TEST nvmf_digest 00:28:30.680 ************************************ 00:28:30.680 15:24:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:28:30.680 15:24:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:28:30.680 15:24:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:28:30.680 15:24:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:28:30.680 15:24:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:28:30.680 15:24:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:30.680 15:24:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.680 ************************************ 00:28:30.680 START TEST nvmf_bdevperf 00:28:30.680 ************************************ 00:28:30.680 15:24:22 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:28:30.680 * Looking for test storage... 00:28:30.680 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:30.680 15:24:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:30.680 15:24:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:28:30.680 15:24:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:30.680 15:24:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:30.680 15:24:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:30.680 15:24:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:30.680 15:24:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:30.680 15:24:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:30.680 15:24:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:30.680 15:24:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:30.680 15:24:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:30.680 15:24:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:30.680 15:24:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:30.680 15:24:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:30.680 15:24:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:30.680 15:24:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:30.680 15:24:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:30.680 15:24:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:30.680 15:24:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:30.680 15:24:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:30.680 15:24:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:30.680 15:24:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:30.680 15:24:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:30.680 15:24:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:30.680 15:24:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:30.680 15:24:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:28:30.680 15:24:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:30.680 15:24:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:28:30.680 15:24:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:30.680 15:24:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:30.680 15:24:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:30.680 15:24:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:30.680 15:24:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:30.680 15:24:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:30.680 15:24:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:30.680 15:24:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:30.680 15:24:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:30.680 15:24:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:30.680 15:24:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:28:30.680 15:24:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:30.680 15:24:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap 
nvmftestfini SIGINT SIGTERM EXIT 00:28:30.680 15:24:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:30.680 15:24:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:30.680 15:24:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:30.680 15:24:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:30.680 15:24:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:30.680 15:24:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:30.680 15:24:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:30.680 15:24:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:30.680 15:24:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:28:30.680 15:24:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:37.275 15:24:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:37.275 15:24:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:28:37.275 15:24:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:37.275 15:24:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:37.275 15:24:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:37.275 15:24:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:37.275 15:24:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:37.275 15:24:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:28:37.275 15:24:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@295 -- # 
local -ga net_devs 00:28:37.275 15:24:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:28:37.275 15:24:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:28:37.275 15:24:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:28:37.275 15:24:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:28:37.275 15:24:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:28:37.275 15:24:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:28:37.275 15:24:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:37.275 15:24:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:37.275 15:24:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:37.275 15:24:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:37.275 15:24:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:37.275 15:24:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:37.275 15:24:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:37.275 15:24:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:37.275 15:24:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:37.275 15:24:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:37.275 15:24:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:37.275 15:24:29 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:37.275 15:24:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:37.275 15:24:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:37.275 15:24:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:37.275 15:24:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:37.275 15:24:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:37.275 15:24:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:37.275 15:24:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:28:37.275 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:28:37.275 15:24:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:37.275 15:24:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:37.275 15:24:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:37.275 15:24:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:37.275 15:24:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:37.275 15:24:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:37.275 15:24:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:28:37.275 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:28:37.275 15:24:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:37.275 15:24:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:37.275 15:24:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:28:37.275 15:24:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:37.275 15:24:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:37.275 15:24:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:37.275 15:24:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:37.275 15:24:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:37.275 15:24:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:37.275 15:24:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:37.275 15:24:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:37.275 15:24:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:37.275 15:24:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:37.275 15:24:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:37.275 15:24:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:37.275 15:24:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:28:37.275 Found net devices under 0000:4b:00.0: cvl_0_0 00:28:37.275 15:24:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:37.275 15:24:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:37.275 15:24:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:37.275 15:24:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:37.275 15:24:29 nvmf_tcp.nvmf_host.nvmf_bdevperf 
-- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:37.275 15:24:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:37.275 15:24:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:37.275 15:24:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:37.275 15:24:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:28:37.275 Found net devices under 0000:4b:00.1: cvl_0_1 00:28:37.275 15:24:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:37.275 15:24:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:37.275 15:24:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:28:37.275 15:24:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:37.276 15:24:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:37.276 15:24:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:37.276 15:24:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:37.276 15:24:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:37.276 15:24:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:37.276 15:24:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:37.276 15:24:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:37.276 15:24:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:37.276 15:24:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:37.276 15:24:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:37.276 15:24:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:37.276 15:24:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:37.276 15:24:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:37.276 15:24:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:37.276 15:24:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:37.537 15:24:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:37.537 15:24:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:37.537 15:24:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:37.537 15:24:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:37.537 15:24:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:37.537 15:24:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:37.537 15:24:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:37.537 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:28:37.537 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.712 ms 00:28:37.537 00:28:37.537 --- 10.0.0.2 ping statistics --- 00:28:37.537 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:37.538 rtt min/avg/max/mdev = 0.712/0.712/0.712/0.000 ms 00:28:37.538 15:24:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:37.538 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:37.538 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.284 ms 00:28:37.538 00:28:37.538 --- 10.0.0.1 ping statistics --- 00:28:37.538 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:37.538 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms 00:28:37.538 15:24:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:37.538 15:24:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:28:37.538 15:24:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:37.538 15:24:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:37.538 15:24:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:37.538 15:24:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:37.538 15:24:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:37.538 15:24:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:37.538 15:24:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:37.538 15:24:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:28:37.538 15:24:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:28:37.538 15:24:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:37.538 
15:24:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:37.538 15:24:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:37.538 15:24:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=428014 00:28:37.538 15:24:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 428014 00:28:37.538 15:24:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:37.538 15:24:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 428014 ']' 00:28:37.538 15:24:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:37.538 15:24:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:37.538 15:24:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:37.538 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:37.538 15:24:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:37.538 15:24:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:37.799 [2024-07-25 15:24:29.743861] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:28:37.799 [2024-07-25 15:24:29.743926] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:37.799 EAL: No free 2048 kB hugepages reported on node 1 00:28:37.799 [2024-07-25 15:24:29.833558] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:37.799 [2024-07-25 15:24:29.927583] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:37.799 [2024-07-25 15:24:29.927641] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:37.799 [2024-07-25 15:24:29.927649] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:37.799 [2024-07-25 15:24:29.927656] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:37.799 [2024-07-25 15:24:29.927663] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:37.799 [2024-07-25 15:24:29.927802] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:37.799 [2024-07-25 15:24:29.927968] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:37.799 [2024-07-25 15:24:29.927969] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:38.370 15:24:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:38.370 15:24:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:28:38.370 15:24:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:38.370 15:24:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:38.370 15:24:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:38.631 15:24:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:38.631 15:24:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:38.631 15:24:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:38.631 15:24:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:38.631 [2024-07-25 15:24:30.577461] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:38.631 15:24:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:38.631 15:24:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:38.631 15:24:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:38.631 15:24:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:38.631 Malloc0 00:28:38.631 15:24:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:28:38.631 15:24:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:38.631 15:24:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:38.631 15:24:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:38.631 15:24:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:38.631 15:24:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:38.631 15:24:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:38.631 15:24:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:38.631 15:24:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:38.631 15:24:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:38.631 15:24:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:38.631 15:24:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:38.631 [2024-07-25 15:24:30.625106] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:38.631 15:24:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:38.631 15:24:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:28:38.631 15:24:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:28:38.631 15:24:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:28:38.631 
15:24:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:28:38.631 15:24:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:38.631 15:24:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:38.631 { 00:28:38.631 "params": { 00:28:38.631 "name": "Nvme$subsystem", 00:28:38.631 "trtype": "$TEST_TRANSPORT", 00:28:38.631 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:38.631 "adrfam": "ipv4", 00:28:38.631 "trsvcid": "$NVMF_PORT", 00:28:38.631 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:38.631 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:38.631 "hdgst": ${hdgst:-false}, 00:28:38.631 "ddgst": ${ddgst:-false} 00:28:38.631 }, 00:28:38.631 "method": "bdev_nvme_attach_controller" 00:28:38.631 } 00:28:38.631 EOF 00:28:38.631 )") 00:28:38.631 15:24:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:28:38.631 15:24:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:28:38.631 15:24:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:28:38.631 15:24:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:38.631 "params": { 00:28:38.631 "name": "Nvme1", 00:28:38.631 "trtype": "tcp", 00:28:38.631 "traddr": "10.0.0.2", 00:28:38.631 "adrfam": "ipv4", 00:28:38.631 "trsvcid": "4420", 00:28:38.631 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:38.631 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:38.631 "hdgst": false, 00:28:38.631 "ddgst": false 00:28:38.631 }, 00:28:38.631 "method": "bdev_nvme_attach_controller" 00:28:38.631 }' 00:28:38.631 [2024-07-25 15:24:30.680429] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:28:38.631 [2024-07-25 15:24:30.680478] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid428174 ] 00:28:38.631 EAL: No free 2048 kB hugepages reported on node 1 00:28:38.631 [2024-07-25 15:24:30.738287] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:38.631 [2024-07-25 15:24:30.803006] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:38.892 Running I/O for 1 seconds... 00:28:40.277 00:28:40.277 Latency(us) 00:28:40.277 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:40.277 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:40.277 Verification LBA range: start 0x0 length 0x4000 00:28:40.277 Nvme1n1 : 1.01 9513.42 37.16 0.00 0.00 13394.59 2703.36 19114.67 00:28:40.277 =================================================================================================================== 00:28:40.277 Total : 9513.42 37.16 0.00 0.00 13394.59 2703.36 19114.67 00:28:40.277 15:24:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=428419 00:28:40.277 15:24:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:28:40.277 15:24:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:28:40.278 15:24:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:28:40.278 15:24:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:28:40.278 15:24:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:28:40.278 15:24:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:40.278 15:24:32 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:40.278 { 00:28:40.278 "params": { 00:28:40.278 "name": "Nvme$subsystem", 00:28:40.278 "trtype": "$TEST_TRANSPORT", 00:28:40.278 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:40.278 "adrfam": "ipv4", 00:28:40.278 "trsvcid": "$NVMF_PORT", 00:28:40.278 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:40.278 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:40.278 "hdgst": ${hdgst:-false}, 00:28:40.278 "ddgst": ${ddgst:-false} 00:28:40.278 }, 00:28:40.278 "method": "bdev_nvme_attach_controller" 00:28:40.278 } 00:28:40.278 EOF 00:28:40.278 )") 00:28:40.278 15:24:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:28:40.278 15:24:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:28:40.278 15:24:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:28:40.278 15:24:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:40.278 "params": { 00:28:40.278 "name": "Nvme1", 00:28:40.278 "trtype": "tcp", 00:28:40.278 "traddr": "10.0.0.2", 00:28:40.278 "adrfam": "ipv4", 00:28:40.278 "trsvcid": "4420", 00:28:40.278 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:40.278 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:40.278 "hdgst": false, 00:28:40.278 "ddgst": false 00:28:40.278 }, 00:28:40.278 "method": "bdev_nvme_attach_controller" 00:28:40.278 }' 00:28:40.278 [2024-07-25 15:24:32.264419] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:28:40.278 [2024-07-25 15:24:32.264477] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid428419 ] 00:28:40.278 EAL: No free 2048 kB hugepages reported on node 1 00:28:40.278 [2024-07-25 15:24:32.322067] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:40.278 [2024-07-25 15:24:32.386710] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:40.849 Running I/O for 15 seconds... 00:28:43.398 15:24:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 428014 00:28:43.398 15:24:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:28:43.398 [2024-07-25 15:24:35.230898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:91720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.398 [2024-07-25 15:24:35.230941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.398 [2024-07-25 15:24:35.230963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:91728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.398 [2024-07-25 15:24:35.230973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.398 [2024-07-25 15:24:35.230985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:91736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.398 [2024-07-25 15:24:35.230994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.398 [2024-07-25 15:24:35.231004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:91744 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.398 [2024-07-25 15:24:35.231013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.398 [2024-07-25 15:24:35.231023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:91752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.398 [2024-07-25 15:24:35.231032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.398 [2024-07-25 15:24:35.231041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:91760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.398 [2024-07-25 15:24:35.231050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.398 [2024-07-25 15:24:35.231059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:91768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.398 [2024-07-25 15:24:35.231072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.398 [2024-07-25 15:24:35.231082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:91776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.398 [2024-07-25 15:24:35.231091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.399 [2024-07-25 15:24:35.231102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:92632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.399 [2024-07-25 15:24:35.231111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.399 [2024-07-25 
15:24:35.231122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:92640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.399 [2024-07-25 15:24:35.231131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.399 [2024-07-25 15:24:35.231144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:92648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.399 [2024-07-25 15:24:35.231152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.399 [2024-07-25 15:24:35.231163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:92656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.399 [2024-07-25 15:24:35.231172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.399 [2024-07-25 15:24:35.231182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:91784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.399 [2024-07-25 15:24:35.231195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.399 [2024-07-25 15:24:35.231210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:91792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.399 [2024-07-25 15:24:35.231219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.399 [2024-07-25 15:24:35.231231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:91800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.399 [2024-07-25 15:24:35.231239] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.399 [2024-07-25 15:24:35.231249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:91808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.399 [2024-07-25 15:24:35.231257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.399 [2024-07-25 15:24:35.231268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:91816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.399 [2024-07-25 15:24:35.231275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.399 [2024-07-25 15:24:35.231284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:91824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.399 [2024-07-25 15:24:35.231291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.399 [2024-07-25 15:24:35.231300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:91832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.399 [2024-07-25 15:24:35.231308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.399 [2024-07-25 15:24:35.231318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:91840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.399 [2024-07-25 15:24:35.231328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.399 [2024-07-25 15:24:35.231338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:91848 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:28:43.399 [2024-07-25 15:24:35.231346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.399 [2024-07-25 15:24:35.231356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:91856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.399 [2024-07-25 15:24:35.231363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.399 [2024-07-25 15:24:35.231374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:91864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.399 [2024-07-25 15:24:35.231382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.399 [2024-07-25 15:24:35.231392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:91872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.399 [2024-07-25 15:24:35.231401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.399 [2024-07-25 15:24:35.231410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:91880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.399 [2024-07-25 15:24:35.231419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.399 [2024-07-25 15:24:35.231429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:91888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.399 [2024-07-25 15:24:35.231436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.399 [2024-07-25 15:24:35.231445] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:91896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.399 [2024-07-25 15:24:35.231452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.399 [2024-07-25 15:24:35.231461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:91904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.399 [2024-07-25 15:24:35.231468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.399 [2024-07-25 15:24:35.231478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:91912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.399 [2024-07-25 15:24:35.231486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.399 [2024-07-25 15:24:35.231495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:91920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.399 [2024-07-25 15:24:35.231503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.399 [2024-07-25 15:24:35.231512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:91928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.399 [2024-07-25 15:24:35.231519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.399 [2024-07-25 15:24:35.231528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:91936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.399 [2024-07-25 15:24:35.231535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.399 [2024-07-25 15:24:35.231546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:91944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.399 [2024-07-25 15:24:35.231553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.399 [2024-07-25 15:24:35.231563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:91952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.399 [2024-07-25 15:24:35.231570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.399 [2024-07-25 15:24:35.231579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:91960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.399 [2024-07-25 15:24:35.231586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.399 [2024-07-25 15:24:35.231595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:92664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.399 [2024-07-25 15:24:35.231602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.399 [2024-07-25 15:24:35.231611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:91968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.399 [2024-07-25 15:24:35.231618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.399 [2024-07-25 15:24:35.231627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:91976 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:43.399 [2024-07-25 15:24:35.231634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.399 [2024-07-25 15:24:35.231643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:91984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.399 [2024-07-25 15:24:35.231651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.399 [2024-07-25 15:24:35.231660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:91992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.399 [2024-07-25 15:24:35.231667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.399 [2024-07-25 15:24:35.231677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:92000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.399 [2024-07-25 15:24:35.231684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.399 [2024-07-25 15:24:35.231693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:92008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.399 [2024-07-25 15:24:35.231700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.399 [2024-07-25 15:24:35.231709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:92016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.399 [2024-07-25 15:24:35.231716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.399 [2024-07-25 15:24:35.231727] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:92024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.399 [2024-07-25 15:24:35.231734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.399 [2024-07-25 15:24:35.231743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:92032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.399 [2024-07-25 15:24:35.231752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.400 [2024-07-25 15:24:35.231761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:92040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.400 [2024-07-25 15:24:35.231768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.400 [2024-07-25 15:24:35.231777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:92048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.400 [2024-07-25 15:24:35.231784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.400 [2024-07-25 15:24:35.231793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:92056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.400 [2024-07-25 15:24:35.231800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.400 [2024-07-25 15:24:35.231810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:92064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.400 [2024-07-25 15:24:35.231817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.400 [2024-07-25 15:24:35.231826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:92072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.400 [2024-07-25 15:24:35.231833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.400 [2024-07-25 15:24:35.231842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:92080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.400 [2024-07-25 15:24:35.231849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.400 [2024-07-25 15:24:35.231859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:92088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.400 [2024-07-25 15:24:35.231866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.400 [2024-07-25 15:24:35.231875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:92096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.400 [2024-07-25 15:24:35.231882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.400 [2024-07-25 15:24:35.231891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:92104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.400 [2024-07-25 15:24:35.231898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.400 [2024-07-25 15:24:35.231908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:92112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.400 
[2024-07-25 15:24:35.231915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.400 [2024-07-25 15:24:35.231924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:92120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.400 [2024-07-25 15:24:35.231931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.400 [2024-07-25 15:24:35.231940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:92128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.400 [2024-07-25 15:24:35.231947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.400 [2024-07-25 15:24:35.231959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:92136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.400 [2024-07-25 15:24:35.231966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.400 [2024-07-25 15:24:35.231976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:92144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.400 [2024-07-25 15:24:35.231983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.400 [2024-07-25 15:24:35.231992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:92152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.400 [2024-07-25 15:24:35.231999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.400 [2024-07-25 15:24:35.232009] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:92160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.400 [2024-07-25 15:24:35.232016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.400 [2024-07-25 15:24:35.232026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:92168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.400 [2024-07-25 15:24:35.232032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.400 [2024-07-25 15:24:35.232042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:92176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.400 [2024-07-25 15:24:35.232049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.400 [2024-07-25 15:24:35.232058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:92184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.400 [2024-07-25 15:24:35.232066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.400 [2024-07-25 15:24:35.232075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:92192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.400 [2024-07-25 15:24:35.232082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.400 [2024-07-25 15:24:35.232091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:92200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.400 [2024-07-25 15:24:35.232099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.400 [2024-07-25 15:24:35.232109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:92208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.400 [2024-07-25 15:24:35.232116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.400 [2024-07-25 15:24:35.232126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:92216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.400 [2024-07-25 15:24:35.232133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.400 [2024-07-25 15:24:35.232143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:92224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.400 [2024-07-25 15:24:35.232150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.400 [2024-07-25 15:24:35.232159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:92232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.400 [2024-07-25 15:24:35.232169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.400 [2024-07-25 15:24:35.232178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:92240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.400 [2024-07-25 15:24:35.232185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.400 [2024-07-25 15:24:35.232195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:92248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:43.400 [2024-07-25 15:24:35.232282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.400 [2024-07-25 15:24:35.232293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:92256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.400 [2024-07-25 15:24:35.232300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.400 [2024-07-25 15:24:35.232309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:92264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.400 [2024-07-25 15:24:35.232316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.400 [2024-07-25 15:24:35.232325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:92272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.400 [2024-07-25 15:24:35.232333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.400 [2024-07-25 15:24:35.232343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:92280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.400 [2024-07-25 15:24:35.232351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.400 [2024-07-25 15:24:35.232360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:92288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.400 [2024-07-25 15:24:35.232367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.400 [2024-07-25 15:24:35.232376] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:92296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.400 [2024-07-25 15:24:35.232384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.400 [2024-07-25 15:24:35.232394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:92304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.400 [2024-07-25 15:24:35.232401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.400 [2024-07-25 15:24:35.232410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:92312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.400 [2024-07-25 15:24:35.232417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.400 [2024-07-25 15:24:35.232426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:92320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.400 [2024-07-25 15:24:35.232433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.400 [2024-07-25 15:24:35.232443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:92328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.400 [2024-07-25 15:24:35.232450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.401 [2024-07-25 15:24:35.232459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:92336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.401 [2024-07-25 15:24:35.232468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.401 [2024-07-25 15:24:35.232477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:92344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.401 [2024-07-25 15:24:35.232485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.401 [2024-07-25 15:24:35.232494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:92352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.401 [2024-07-25 15:24:35.232501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.401 [2024-07-25 15:24:35.232510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:92360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.401 [2024-07-25 15:24:35.232517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.401 [2024-07-25 15:24:35.232526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:92368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.401 [2024-07-25 15:24:35.232533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.401 [2024-07-25 15:24:35.232542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:92376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.401 [2024-07-25 15:24:35.232550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.401 [2024-07-25 15:24:35.232560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:92384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:43.401 [2024-07-25 15:24:35.232567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.401 [2024-07-25 15:24:35.232576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:92392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.401 [2024-07-25 15:24:35.232583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.401 [2024-07-25 15:24:35.232592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:92400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.401 [2024-07-25 15:24:35.232599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.401 [2024-07-25 15:24:35.232609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:92672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.401 [2024-07-25 15:24:35.232616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.401 [2024-07-25 15:24:35.232625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:92680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.401 [2024-07-25 15:24:35.232632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.401 [2024-07-25 15:24:35.232641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:92688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.401 [2024-07-25 15:24:35.232649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.401 [2024-07-25 15:24:35.232658] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:92696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.401 [2024-07-25 15:24:35.232665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.401 [2024-07-25 15:24:35.232676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:92704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.401 [2024-07-25 15:24:35.232683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.401 [2024-07-25 15:24:35.232692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:92712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.401 [2024-07-25 15:24:35.232700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.401 [2024-07-25 15:24:35.232709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:92720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.401 [2024-07-25 15:24:35.232717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.401 [2024-07-25 15:24:35.232726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:92728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.401 [2024-07-25 15:24:35.232733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.401 [2024-07-25 15:24:35.232742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:92736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.401 [2024-07-25 15:24:35.232750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.401 [2024-07-25 15:24:35.232759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:92408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.401 [2024-07-25 15:24:35.232766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.401 [2024-07-25 15:24:35.232775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:92416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.401 [2024-07-25 15:24:35.232783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.401 [2024-07-25 15:24:35.232792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:92424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.401 [2024-07-25 15:24:35.232799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.401 [2024-07-25 15:24:35.232809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:92432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.401 [2024-07-25 15:24:35.232816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.401 [2024-07-25 15:24:35.232825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:92440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.401 [2024-07-25 15:24:35.232832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.401 [2024-07-25 15:24:35.232841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:92448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.401 
[2024-07-25 15:24:35.232848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.401 [2024-07-25 15:24:35.232858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.401 [2024-07-25 15:24:35.232865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.401 [2024-07-25 15:24:35.232875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:92464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.401 [2024-07-25 15:24:35.232886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.401 [2024-07-25 15:24:35.232895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:92472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.401 [2024-07-25 15:24:35.232902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.401 [2024-07-25 15:24:35.232912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:92480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.401 [2024-07-25 15:24:35.232919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.401 [2024-07-25 15:24:35.232928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:92488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.401 [2024-07-25 15:24:35.232935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.401 [2024-07-25 15:24:35.232944] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:97 nsid:1 lba:92496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.401 [2024-07-25 15:24:35.232951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.401 [2024-07-25 15:24:35.232961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:92504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.401 [2024-07-25 15:24:35.232968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.401 [2024-07-25 15:24:35.232978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:92512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.401 [2024-07-25 15:24:35.232985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.401 [2024-07-25 15:24:35.232994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:92520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.401 [2024-07-25 15:24:35.233001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.401 [2024-07-25 15:24:35.233010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:92528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.401 [2024-07-25 15:24:35.233018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.401 [2024-07-25 15:24:35.233028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:92536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.401 [2024-07-25 15:24:35.233035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:28:43.401 [2024-07-25 15:24:35.233044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:92544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.402 [2024-07-25 15:24:35.233051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.402 [2024-07-25 15:24:35.233060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:92552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.402 [2024-07-25 15:24:35.233067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.402 [2024-07-25 15:24:35.233077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:92560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.402 [2024-07-25 15:24:35.233084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.402 [2024-07-25 15:24:35.233094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:92568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.402 [2024-07-25 15:24:35.233101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.402 [2024-07-25 15:24:35.233111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:92576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.402 [2024-07-25 15:24:35.233118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.402 [2024-07-25 15:24:35.233127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:92584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.402 [2024-07-25 
15:24:35.233134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.402 [2024-07-25 15:24:35.233144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:92592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.402 [2024-07-25 15:24:35.233150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.402 [2024-07-25 15:24:35.233159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:92600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.402 [2024-07-25 15:24:35.233167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.402 [2024-07-25 15:24:35.233176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:92608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.402 [2024-07-25 15:24:35.233184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.402 [2024-07-25 15:24:35.233193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:92616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.402 [2024-07-25 15:24:35.233203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.402 [2024-07-25 15:24:35.233212] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0e570 is same with the state(5) to be set 00:28:43.402 [2024-07-25 15:24:35.233222] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:43.402 [2024-07-25 15:24:35.233228] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:43.402 [2024-07-25 
15:24:35.233235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92624 len:8 PRP1 0x0 PRP2 0x0 00:28:43.402 [2024-07-25 15:24:35.233243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.402 [2024-07-25 15:24:35.233281] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1e0e570 was disconnected and freed. reset controller. 00:28:43.402 [2024-07-25 15:24:35.233327] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:43.402 [2024-07-25 15:24:35.233338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.402 [2024-07-25 15:24:35.233347] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:43.402 [2024-07-25 15:24:35.233354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.402 [2024-07-25 15:24:35.233362] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:43.402 [2024-07-25 15:24:35.233369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.402 [2024-07-25 15:24:35.233381] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:43.402 [2024-07-25 15:24:35.233388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.402 [2024-07-25 15:24:35.233396] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same 
with the state(5) to be set 00:28:43.402 [2024-07-25 15:24:35.236932] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:43.402 [2024-07-25 15:24:35.236959] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:43.402 [2024-07-25 15:24:35.237909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.402 [2024-07-25 15:24:35.237927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:43.402 [2024-07-25 15:24:35.237936] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:43.402 [2024-07-25 15:24:35.238158] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:43.402 [2024-07-25 15:24:35.238384] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:43.402 [2024-07-25 15:24:35.238393] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:43.402 [2024-07-25 15:24:35.238402] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:43.402 [2024-07-25 15:24:35.241956] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:43.402 [2024-07-25 15:24:35.251162] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:43.402 [2024-07-25 15:24:35.251962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.402 [2024-07-25 15:24:35.252001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:43.402 [2024-07-25 15:24:35.252013] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:43.402 [2024-07-25 15:24:35.252265] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:43.402 [2024-07-25 15:24:35.252489] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:43.402 [2024-07-25 15:24:35.252498] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:43.402 [2024-07-25 15:24:35.252507] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:43.402 [2024-07-25 15:24:35.256059] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:43.402 [2024-07-25 15:24:35.265054] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:43.402 [2024-07-25 15:24:35.265846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.402 [2024-07-25 15:24:35.265884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:43.402 [2024-07-25 15:24:35.265895] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:43.402 [2024-07-25 15:24:35.266135] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:43.402 [2024-07-25 15:24:35.266370] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:43.402 [2024-07-25 15:24:35.266380] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:43.402 [2024-07-25 15:24:35.266388] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:43.402 [2024-07-25 15:24:35.269957] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:43.402 [2024-07-25 15:24:35.278955] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:43.402 [2024-07-25 15:24:35.279687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.402 [2024-07-25 15:24:35.279725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:43.402 [2024-07-25 15:24:35.279736] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:43.402 [2024-07-25 15:24:35.279975] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:43.402 [2024-07-25 15:24:35.280199] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:43.402 [2024-07-25 15:24:35.280226] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:43.402 [2024-07-25 15:24:35.280235] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:43.402 [2024-07-25 15:24:35.283787] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:43.402 [2024-07-25 15:24:35.292779] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:43.402 [2024-07-25 15:24:35.293570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.402 [2024-07-25 15:24:35.293608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:43.402 [2024-07-25 15:24:35.293618] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:43.402 [2024-07-25 15:24:35.293858] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:43.402 [2024-07-25 15:24:35.294081] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:43.402 [2024-07-25 15:24:35.294090] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:43.402 [2024-07-25 15:24:35.294098] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:43.402 [2024-07-25 15:24:35.297659] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:43.402 [2024-07-25 15:24:35.306652] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:43.402 [2024-07-25 15:24:35.307429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.403 [2024-07-25 15:24:35.307467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:43.403 [2024-07-25 15:24:35.307478] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:43.403 [2024-07-25 15:24:35.307717] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:43.403 [2024-07-25 15:24:35.307950] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:43.403 [2024-07-25 15:24:35.307960] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:43.403 [2024-07-25 15:24:35.307967] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:43.403 [2024-07-25 15:24:35.311530] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:43.403 [2024-07-25 15:24:35.320536] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:43.403 [2024-07-25 15:24:35.321342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.403 [2024-07-25 15:24:35.321380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:43.403 [2024-07-25 15:24:35.321398] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:43.403 [2024-07-25 15:24:35.321640] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:43.403 [2024-07-25 15:24:35.321865] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:43.403 [2024-07-25 15:24:35.321874] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:43.403 [2024-07-25 15:24:35.321882] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:43.403 [2024-07-25 15:24:35.325446] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:43.403 [2024-07-25 15:24:35.334453] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:43.403 [2024-07-25 15:24:35.335211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.403 [2024-07-25 15:24:35.335249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:43.403 [2024-07-25 15:24:35.335260] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:43.403 [2024-07-25 15:24:35.335499] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:43.403 [2024-07-25 15:24:35.335722] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:43.403 [2024-07-25 15:24:35.335732] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:43.403 [2024-07-25 15:24:35.335739] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:43.403 [2024-07-25 15:24:35.339294] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:43.403 [2024-07-25 15:24:35.348298] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:43.403 [2024-07-25 15:24:35.348983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.403 [2024-07-25 15:24:35.349002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:43.403 [2024-07-25 15:24:35.349010] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:43.403 [2024-07-25 15:24:35.349237] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:43.403 [2024-07-25 15:24:35.349458] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:43.403 [2024-07-25 15:24:35.349468] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:43.403 [2024-07-25 15:24:35.349475] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:43.403 [2024-07-25 15:24:35.353021] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:43.403 [2024-07-25 15:24:35.362226] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:43.403 [2024-07-25 15:24:35.363013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.403 [2024-07-25 15:24:35.363051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:43.403 [2024-07-25 15:24:35.363062] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:43.403 [2024-07-25 15:24:35.363312] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:43.403 [2024-07-25 15:24:35.363537] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:43.403 [2024-07-25 15:24:35.363551] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:43.403 [2024-07-25 15:24:35.363558] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:43.403 [2024-07-25 15:24:35.367112] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:43.403 [2024-07-25 15:24:35.376118] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:43.403 [2024-07-25 15:24:35.376938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.403 [2024-07-25 15:24:35.376976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:43.403 [2024-07-25 15:24:35.376987] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:43.403 [2024-07-25 15:24:35.377237] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:43.403 [2024-07-25 15:24:35.377461] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:43.403 [2024-07-25 15:24:35.377470] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:43.403 [2024-07-25 15:24:35.377478] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:43.403 [2024-07-25 15:24:35.381027] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:43.403 [2024-07-25 15:24:35.390027] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:43.403 [2024-07-25 15:24:35.390835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.403 [2024-07-25 15:24:35.390873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:43.403 [2024-07-25 15:24:35.390884] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:43.403 [2024-07-25 15:24:35.391123] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:43.403 [2024-07-25 15:24:35.391357] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:43.403 [2024-07-25 15:24:35.391367] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:43.403 [2024-07-25 15:24:35.391374] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:43.403 [2024-07-25 15:24:35.394924] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:43.403 [2024-07-25 15:24:35.403916] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:43.403 [2024-07-25 15:24:35.404607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.403 [2024-07-25 15:24:35.404627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:43.403 [2024-07-25 15:24:35.404635] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:43.403 [2024-07-25 15:24:35.404855] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:43.403 [2024-07-25 15:24:35.405074] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:43.403 [2024-07-25 15:24:35.405084] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:43.403 [2024-07-25 15:24:35.405091] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:43.403 [2024-07-25 15:24:35.408649] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:43.403 [2024-07-25 15:24:35.417854] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:43.404 [2024-07-25 15:24:35.418552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.404 [2024-07-25 15:24:35.418570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:43.404 [2024-07-25 15:24:35.418577] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:43.404 [2024-07-25 15:24:35.418797] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:43.404 [2024-07-25 15:24:35.419015] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:43.404 [2024-07-25 15:24:35.419025] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:43.404 [2024-07-25 15:24:35.419032] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:43.404 [2024-07-25 15:24:35.422581] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:43.404 [2024-07-25 15:24:35.431770] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:43.404 [2024-07-25 15:24:35.432257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.404 [2024-07-25 15:24:35.432279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:43.404 [2024-07-25 15:24:35.432287] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:43.404 [2024-07-25 15:24:35.432509] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:43.404 [2024-07-25 15:24:35.432729] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:43.404 [2024-07-25 15:24:35.432739] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:43.404 [2024-07-25 15:24:35.432746] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:43.404 [2024-07-25 15:24:35.436298] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:43.404 [2024-07-25 15:24:35.445699] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:43.404 [2024-07-25 15:24:35.446442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.404 [2024-07-25 15:24:35.446480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:43.404 [2024-07-25 15:24:35.446491] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:43.404 [2024-07-25 15:24:35.446730] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:43.404 [2024-07-25 15:24:35.446954] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:43.404 [2024-07-25 15:24:35.446963] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:43.404 [2024-07-25 15:24:35.446971] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:43.404 [2024-07-25 15:24:35.450536] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:43.404 [2024-07-25 15:24:35.459536] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:43.404 [2024-07-25 15:24:35.460380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.404 [2024-07-25 15:24:35.460418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:43.404 [2024-07-25 15:24:35.460433] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:43.404 [2024-07-25 15:24:35.460672] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:43.404 [2024-07-25 15:24:35.460896] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:43.404 [2024-07-25 15:24:35.460906] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:43.404 [2024-07-25 15:24:35.460914] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:43.404 [2024-07-25 15:24:35.464479] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:43.404 [2024-07-25 15:24:35.473497] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:43.404 [2024-07-25 15:24:35.474302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.404 [2024-07-25 15:24:35.474340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:43.404 [2024-07-25 15:24:35.474350] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:43.404 [2024-07-25 15:24:35.474590] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:43.404 [2024-07-25 15:24:35.474813] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:43.404 [2024-07-25 15:24:35.474822] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:43.404 [2024-07-25 15:24:35.474830] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:43.404 [2024-07-25 15:24:35.478391] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:43.404 [2024-07-25 15:24:35.487389] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:43.404 [2024-07-25 15:24:35.488153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.404 [2024-07-25 15:24:35.488191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:43.404 [2024-07-25 15:24:35.488213] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:43.404 [2024-07-25 15:24:35.488454] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:43.404 [2024-07-25 15:24:35.488677] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:43.404 [2024-07-25 15:24:35.488686] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:43.404 [2024-07-25 15:24:35.488694] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:43.404 [2024-07-25 15:24:35.492251] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:43.404 [2024-07-25 15:24:35.501261] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:43.404 [2024-07-25 15:24:35.502078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.404 [2024-07-25 15:24:35.502116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:43.404 [2024-07-25 15:24:35.502127] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:43.404 [2024-07-25 15:24:35.502378] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:43.404 [2024-07-25 15:24:35.502602] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:43.404 [2024-07-25 15:24:35.502616] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:43.404 [2024-07-25 15:24:35.502624] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:43.404 [2024-07-25 15:24:35.506173] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:43.404 [2024-07-25 15:24:35.515172] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:43.404 [2024-07-25 15:24:35.515812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.404 [2024-07-25 15:24:35.515850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:43.404 [2024-07-25 15:24:35.515861] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:43.404 [2024-07-25 15:24:35.516100] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:43.404 [2024-07-25 15:24:35.516332] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:43.404 [2024-07-25 15:24:35.516342] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:43.404 [2024-07-25 15:24:35.516350] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:43.404 [2024-07-25 15:24:35.519897] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:43.404 [2024-07-25 15:24:35.529092] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:43.404 [2024-07-25 15:24:35.529838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:43.404 [2024-07-25 15:24:35.529876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420
00:28:43.404 [2024-07-25 15:24:35.529887] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set
00:28:43.404 [2024-07-25 15:24:35.530126] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor
00:28:43.404 [2024-07-25 15:24:35.530359] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:43.404 [2024-07-25 15:24:35.530370] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:43.404 [2024-07-25 15:24:35.530377] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:43.404 [2024-07-25 15:24:35.533927] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:43.404 [2024-07-25 15:24:35.542921] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:43.404 [2024-07-25 15:24:35.543679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:43.404 [2024-07-25 15:24:35.543717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420
00:28:43.404 [2024-07-25 15:24:35.543728] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set
00:28:43.404 [2024-07-25 15:24:35.543967] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor
00:28:43.404 [2024-07-25 15:24:35.544190] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:43.404 [2024-07-25 15:24:35.544210] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:43.405 [2024-07-25 15:24:35.544218] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:43.405 [2024-07-25 15:24:35.547769] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:43.405 [2024-07-25 15:24:35.556763] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:43.405 [2024-07-25 15:24:35.557471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:43.405 [2024-07-25 15:24:35.557509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420
00:28:43.405 [2024-07-25 15:24:35.557520] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set
00:28:43.405 [2024-07-25 15:24:35.557760] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor
00:28:43.405 [2024-07-25 15:24:35.557983] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:43.405 [2024-07-25 15:24:35.557992] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:43.405 [2024-07-25 15:24:35.558000] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:43.405 [2024-07-25 15:24:35.561562] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:43.405 [2024-07-25 15:24:35.570774] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:43.405 [2024-07-25 15:24:35.571566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:43.405 [2024-07-25 15:24:35.571604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420
00:28:43.405 [2024-07-25 15:24:35.571615] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set
00:28:43.405 [2024-07-25 15:24:35.571854] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor
00:28:43.405 [2024-07-25 15:24:35.572078] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:43.405 [2024-07-25 15:24:35.572087] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:43.405 [2024-07-25 15:24:35.572095] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:43.405 [2024-07-25 15:24:35.575654] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:43.405 [2024-07-25 15:24:35.584655] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:43.667 [2024-07-25 15:24:35.585476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:43.667 [2024-07-25 15:24:35.585515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420
00:28:43.667 [2024-07-25 15:24:35.585526] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set
00:28:43.667 [2024-07-25 15:24:35.585766] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor
00:28:43.667 [2024-07-25 15:24:35.585989] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:43.667 [2024-07-25 15:24:35.585999] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:43.667 [2024-07-25 15:24:35.586007] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:43.667 [2024-07-25 15:24:35.589574] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:43.667 [2024-07-25 15:24:35.598572] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:43.667 [2024-07-25 15:24:35.599368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:43.667 [2024-07-25 15:24:35.599406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420
00:28:43.667 [2024-07-25 15:24:35.599417] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set
00:28:43.667 [2024-07-25 15:24:35.599661] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor
00:28:43.667 [2024-07-25 15:24:35.599884] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:43.667 [2024-07-25 15:24:35.599894] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:43.667 [2024-07-25 15:24:35.599902] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:43.667 [2024-07-25 15:24:35.603462] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:43.667 [2024-07-25 15:24:35.612460] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:43.667 [2024-07-25 15:24:35.613269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:43.667 [2024-07-25 15:24:35.613308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420
00:28:43.667 [2024-07-25 15:24:35.613319] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set
00:28:43.667 [2024-07-25 15:24:35.613558] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor
00:28:43.667 [2024-07-25 15:24:35.613781] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:43.667 [2024-07-25 15:24:35.613790] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:43.667 [2024-07-25 15:24:35.613798] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:43.667 [2024-07-25 15:24:35.617363] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:43.667 [2024-07-25 15:24:35.626375] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:43.667 [2024-07-25 15:24:35.627148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:43.667 [2024-07-25 15:24:35.627186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420
00:28:43.667 [2024-07-25 15:24:35.627196] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set
00:28:43.667 [2024-07-25 15:24:35.627446] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor
00:28:43.668 [2024-07-25 15:24:35.627669] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:43.668 [2024-07-25 15:24:35.627679] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:43.668 [2024-07-25 15:24:35.627686] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:43.668 [2024-07-25 15:24:35.631241] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:43.668 [2024-07-25 15:24:35.640249] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:43.668 [2024-07-25 15:24:35.641075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:43.668 [2024-07-25 15:24:35.641113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420
00:28:43.668 [2024-07-25 15:24:35.641124] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set
00:28:43.668 [2024-07-25 15:24:35.641372] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor
00:28:43.668 [2024-07-25 15:24:35.641596] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:43.668 [2024-07-25 15:24:35.641606] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:43.668 [2024-07-25 15:24:35.641618] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:43.668 [2024-07-25 15:24:35.645169] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:43.668 [2024-07-25 15:24:35.654182] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:43.668 [2024-07-25 15:24:35.654911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:43.668 [2024-07-25 15:24:35.654932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420
00:28:43.668 [2024-07-25 15:24:35.654942] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set
00:28:43.668 [2024-07-25 15:24:35.655164] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor
00:28:43.668 [2024-07-25 15:24:35.655394] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:43.668 [2024-07-25 15:24:35.655403] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:43.668 [2024-07-25 15:24:35.655411] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:43.668 [2024-07-25 15:24:35.658960] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:43.668 [2024-07-25 15:24:35.668175] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:43.668 [2024-07-25 15:24:35.668955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:43.668 [2024-07-25 15:24:35.668993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420
00:28:43.668 [2024-07-25 15:24:35.669005] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set
00:28:43.668 [2024-07-25 15:24:35.669255] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor
00:28:43.668 [2024-07-25 15:24:35.669478] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:43.668 [2024-07-25 15:24:35.669488] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:43.668 [2024-07-25 15:24:35.669496] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:43.668 [2024-07-25 15:24:35.673049] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:43.668 [2024-07-25 15:24:35.682058] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:43.668 [2024-07-25 15:24:35.682772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:43.668 [2024-07-25 15:24:35.682791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420
00:28:43.668 [2024-07-25 15:24:35.682800] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set
00:28:43.668 [2024-07-25 15:24:35.683020] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor
00:28:43.668 [2024-07-25 15:24:35.683246] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:43.668 [2024-07-25 15:24:35.683256] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:43.668 [2024-07-25 15:24:35.683263] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:43.668 [2024-07-25 15:24:35.686813] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:43.668 [2024-07-25 15:24:35.696026] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:43.668 [2024-07-25 15:24:35.696836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:43.668 [2024-07-25 15:24:35.696878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420
00:28:43.668 [2024-07-25 15:24:35.696889] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set
00:28:43.668 [2024-07-25 15:24:35.697128] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor
00:28:43.668 [2024-07-25 15:24:35.697359] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:43.668 [2024-07-25 15:24:35.697370] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:43.668 [2024-07-25 15:24:35.697377] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:43.668 [2024-07-25 15:24:35.700933] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:43.668 [2024-07-25 15:24:35.709947] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:43.668 [2024-07-25 15:24:35.710379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:43.668 [2024-07-25 15:24:35.710399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420
00:28:43.668 [2024-07-25 15:24:35.710407] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set
00:28:43.668 [2024-07-25 15:24:35.710628] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor
00:28:43.668 [2024-07-25 15:24:35.710848] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:43.668 [2024-07-25 15:24:35.710857] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:43.668 [2024-07-25 15:24:35.710864] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:43.668 [2024-07-25 15:24:35.714419] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:43.668 [2024-07-25 15:24:35.723869] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:43.668 [2024-07-25 15:24:35.724645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:43.668 [2024-07-25 15:24:35.724683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420
00:28:43.668 [2024-07-25 15:24:35.724694] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set
00:28:43.668 [2024-07-25 15:24:35.724933] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor
00:28:43.668 [2024-07-25 15:24:35.725156] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:43.668 [2024-07-25 15:24:35.725165] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:43.668 [2024-07-25 15:24:35.725173] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:43.668 [2024-07-25 15:24:35.728732] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:43.668 [2024-07-25 15:24:35.737741] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:43.668 [2024-07-25 15:24:35.738537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:43.668 [2024-07-25 15:24:35.738576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420
00:28:43.668 [2024-07-25 15:24:35.738587] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set
00:28:43.668 [2024-07-25 15:24:35.738827] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor
00:28:43.668 [2024-07-25 15:24:35.739059] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:43.668 [2024-07-25 15:24:35.739069] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:43.668 [2024-07-25 15:24:35.739076] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:43.668 [2024-07-25 15:24:35.742636] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:43.668 [2024-07-25 15:24:35.751637] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:43.668 [2024-07-25 15:24:35.752472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:43.668 [2024-07-25 15:24:35.752510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420
00:28:43.668 [2024-07-25 15:24:35.752523] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set
00:28:43.668 [2024-07-25 15:24:35.752764] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor
00:28:43.668 [2024-07-25 15:24:35.752987] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:43.668 [2024-07-25 15:24:35.752997] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:43.668 [2024-07-25 15:24:35.753005] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:43.668 [2024-07-25 15:24:35.756572] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:43.668 [2024-07-25 15:24:35.765585] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:43.668 [2024-07-25 15:24:35.766245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:43.669 [2024-07-25 15:24:35.766271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420
00:28:43.669 [2024-07-25 15:24:35.766279] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set
00:28:43.669 [2024-07-25 15:24:35.766505] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor
00:28:43.669 [2024-07-25 15:24:35.766725] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:43.669 [2024-07-25 15:24:35.766735] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:43.669 [2024-07-25 15:24:35.766742] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:43.669 [2024-07-25 15:24:35.770316] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:43.669 [2024-07-25 15:24:35.779534] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:43.669 [2024-07-25 15:24:35.780250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:43.669 [2024-07-25 15:24:35.780274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420
00:28:43.669 [2024-07-25 15:24:35.780282] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set
00:28:43.669 [2024-07-25 15:24:35.780507] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor
00:28:43.669 [2024-07-25 15:24:35.780727] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:43.669 [2024-07-25 15:24:35.780736] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:43.669 [2024-07-25 15:24:35.780744] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:43.669 [2024-07-25 15:24:35.784309] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:43.669 [2024-07-25 15:24:35.793555] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:43.669 [2024-07-25 15:24:35.794246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:43.669 [2024-07-25 15:24:35.794269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420
00:28:43.669 [2024-07-25 15:24:35.794277] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set
00:28:43.669 [2024-07-25 15:24:35.794501] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor
00:28:43.669 [2024-07-25 15:24:35.794721] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:43.669 [2024-07-25 15:24:35.794731] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:43.669 [2024-07-25 15:24:35.794738] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:43.669 [2024-07-25 15:24:35.798289] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:43.669 [2024-07-25 15:24:35.807505] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:43.669 [2024-07-25 15:24:35.808307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:43.669 [2024-07-25 15:24:35.808345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420
00:28:43.669 [2024-07-25 15:24:35.808357] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set
00:28:43.669 [2024-07-25 15:24:35.808600] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor
00:28:43.669 [2024-07-25 15:24:35.808822] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:43.669 [2024-07-25 15:24:35.808832] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:43.669 [2024-07-25 15:24:35.808839] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:43.669 [2024-07-25 15:24:35.812407] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:43.669 [2024-07-25 15:24:35.821407] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:43.669 [2024-07-25 15:24:35.822136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:43.669 [2024-07-25 15:24:35.822155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420
00:28:43.669 [2024-07-25 15:24:35.822163] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set
00:28:43.669 [2024-07-25 15:24:35.822388] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor
00:28:43.669 [2024-07-25 15:24:35.822608] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:43.669 [2024-07-25 15:24:35.822617] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:43.669 [2024-07-25 15:24:35.822625] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:43.669 [2024-07-25 15:24:35.826168] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:43.669 [2024-07-25 15:24:35.835377] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:43.669 [2024-07-25 15:24:35.836140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:43.669 [2024-07-25 15:24:35.836177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420
00:28:43.669 [2024-07-25 15:24:35.836192] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set
00:28:43.669 [2024-07-25 15:24:35.836439] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor
00:28:43.669 [2024-07-25 15:24:35.836663] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:43.669 [2024-07-25 15:24:35.836673] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:43.669 [2024-07-25 15:24:35.836680] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:43.669 [2024-07-25 15:24:35.840235] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:43.669 [2024-07-25 15:24:35.849239] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:43.669 [2024-07-25 15:24:35.850015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:43.669 [2024-07-25 15:24:35.850053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420
00:28:43.669 [2024-07-25 15:24:35.850064] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set
00:28:43.669 [2024-07-25 15:24:35.850312] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor
00:28:43.669 [2024-07-25 15:24:35.850536] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:43.669 [2024-07-25 15:24:35.850545] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:43.669 [2024-07-25 15:24:35.850553] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:43.669 [2024-07-25 15:24:35.854107] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:43.932 [2024-07-25 15:24:35.863117] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:43.932 [2024-07-25 15:24:35.863848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:43.932 [2024-07-25 15:24:35.863868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420
00:28:43.932 [2024-07-25 15:24:35.863876] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set
00:28:43.932 [2024-07-25 15:24:35.864096] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor
00:28:43.932 [2024-07-25 15:24:35.864322] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:43.932 [2024-07-25 15:24:35.864332] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:43.932 [2024-07-25 15:24:35.864340] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:43.932 [2024-07-25 15:24:35.867887] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:43.932 [2024-07-25 15:24:35.877113] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:43.932 [2024-07-25 15:24:35.877788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:43.932 [2024-07-25 15:24:35.877805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420
00:28:43.932 [2024-07-25 15:24:35.877812] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set
00:28:43.932 [2024-07-25 15:24:35.878032] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor
00:28:43.932 [2024-07-25 15:24:35.878258] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:43.932 [2024-07-25 15:24:35.878273] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:43.932 [2024-07-25 15:24:35.878280] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:43.932 [2024-07-25 15:24:35.881826] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:43.932 [2024-07-25 15:24:35.891040] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:43.932 [2024-07-25 15:24:35.891761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:43.932 [2024-07-25 15:24:35.891778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420
00:28:43.932 [2024-07-25 15:24:35.891785] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set
00:28:43.932 [2024-07-25 15:24:35.892005] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor
00:28:43.932 [2024-07-25 15:24:35.892230] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:43.932 [2024-07-25 15:24:35.892240] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:43.932 [2024-07-25 15:24:35.892248] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:43.932 [2024-07-25 15:24:35.895796] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:43.932 [2024-07-25 15:24:35.905009] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:43.932 [2024-07-25 15:24:35.905783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:43.932 [2024-07-25 15:24:35.905820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420
00:28:43.932 [2024-07-25 15:24:35.905831] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set
00:28:43.932 [2024-07-25 15:24:35.906071] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor
00:28:43.932 [2024-07-25 15:24:35.906302] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:43.932 [2024-07-25 15:24:35.906312] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:43.932 [2024-07-25 15:24:35.906319] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:43.932 [2024-07-25 15:24:35.909877] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:43.932 [2024-07-25 15:24:35.918891] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:43.932 [2024-07-25 15:24:35.919584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.932 [2024-07-25 15:24:35.919604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:43.932 [2024-07-25 15:24:35.919612] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:43.932 [2024-07-25 15:24:35.919833] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:43.932 [2024-07-25 15:24:35.920052] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:43.932 [2024-07-25 15:24:35.920061] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:43.932 [2024-07-25 15:24:35.920068] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:43.932 [2024-07-25 15:24:35.923623] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:43.932 [2024-07-25 15:24:35.932842] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:43.932 [2024-07-25 15:24:35.933539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.932 [2024-07-25 15:24:35.933577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:43.932 [2024-07-25 15:24:35.933589] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:43.932 [2024-07-25 15:24:35.933828] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:43.932 [2024-07-25 15:24:35.934052] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:43.932 [2024-07-25 15:24:35.934061] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:43.932 [2024-07-25 15:24:35.934069] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:43.932 [2024-07-25 15:24:35.937630] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:43.932 [2024-07-25 15:24:35.946835] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:43.932 [2024-07-25 15:24:35.947623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.932 [2024-07-25 15:24:35.947662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:43.932 [2024-07-25 15:24:35.947673] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:43.933 [2024-07-25 15:24:35.947912] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:43.933 [2024-07-25 15:24:35.948135] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:43.933 [2024-07-25 15:24:35.948144] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:43.933 [2024-07-25 15:24:35.948151] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:43.933 [2024-07-25 15:24:35.951716] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:43.933 [2024-07-25 15:24:35.960728] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:43.933 [2024-07-25 15:24:35.961525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.933 [2024-07-25 15:24:35.961564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:43.933 [2024-07-25 15:24:35.961574] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:43.933 [2024-07-25 15:24:35.961813] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:43.933 [2024-07-25 15:24:35.962037] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:43.933 [2024-07-25 15:24:35.962047] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:43.933 [2024-07-25 15:24:35.962054] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:43.933 [2024-07-25 15:24:35.965614] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:43.933 [2024-07-25 15:24:35.974625] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:43.933 [2024-07-25 15:24:35.975466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.933 [2024-07-25 15:24:35.975504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:43.933 [2024-07-25 15:24:35.975515] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:43.933 [2024-07-25 15:24:35.975758] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:43.933 [2024-07-25 15:24:35.975982] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:43.933 [2024-07-25 15:24:35.975992] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:43.933 [2024-07-25 15:24:35.976000] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:43.933 [2024-07-25 15:24:35.979769] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:43.933 [2024-07-25 15:24:35.988565] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:43.933 [2024-07-25 15:24:35.989438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.933 [2024-07-25 15:24:35.989476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:43.933 [2024-07-25 15:24:35.989487] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:43.933 [2024-07-25 15:24:35.989726] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:43.933 [2024-07-25 15:24:35.989949] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:43.933 [2024-07-25 15:24:35.989959] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:43.933 [2024-07-25 15:24:35.989966] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:43.933 [2024-07-25 15:24:35.993526] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:43.933 [2024-07-25 15:24:36.002524] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:43.933 [2024-07-25 15:24:36.003308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.933 [2024-07-25 15:24:36.003347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:43.933 [2024-07-25 15:24:36.003359] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:43.933 [2024-07-25 15:24:36.003600] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:43.933 [2024-07-25 15:24:36.003824] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:43.933 [2024-07-25 15:24:36.003834] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:43.933 [2024-07-25 15:24:36.003841] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:43.933 [2024-07-25 15:24:36.007399] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:43.933 [2024-07-25 15:24:36.016409] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:43.933 [2024-07-25 15:24:36.017225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.933 [2024-07-25 15:24:36.017264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:43.933 [2024-07-25 15:24:36.017276] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:43.933 [2024-07-25 15:24:36.017519] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:43.933 [2024-07-25 15:24:36.017743] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:43.933 [2024-07-25 15:24:36.017752] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:43.933 [2024-07-25 15:24:36.017764] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:43.933 [2024-07-25 15:24:36.021327] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:43.933 [2024-07-25 15:24:36.030321] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:43.933 [2024-07-25 15:24:36.030894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.933 [2024-07-25 15:24:36.030915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:43.933 [2024-07-25 15:24:36.030924] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:43.933 [2024-07-25 15:24:36.031145] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:43.933 [2024-07-25 15:24:36.031372] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:43.933 [2024-07-25 15:24:36.031382] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:43.933 [2024-07-25 15:24:36.031389] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:43.933 [2024-07-25 15:24:36.034936] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:43.933 [2024-07-25 15:24:36.044135] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:43.933 [2024-07-25 15:24:36.044850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.933 [2024-07-25 15:24:36.044867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:43.933 [2024-07-25 15:24:36.044875] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:43.933 [2024-07-25 15:24:36.045095] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:43.933 [2024-07-25 15:24:36.045319] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:43.933 [2024-07-25 15:24:36.045329] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:43.933 [2024-07-25 15:24:36.045336] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:43.933 [2024-07-25 15:24:36.048879] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:43.933 [2024-07-25 15:24:36.058078] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:43.933 [2024-07-25 15:24:36.058774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.933 [2024-07-25 15:24:36.058791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:43.933 [2024-07-25 15:24:36.058798] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:43.933 [2024-07-25 15:24:36.059018] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:43.933 [2024-07-25 15:24:36.059242] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:43.933 [2024-07-25 15:24:36.059252] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:43.933 [2024-07-25 15:24:36.059259] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:43.933 [2024-07-25 15:24:36.062802] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:43.933 [2024-07-25 15:24:36.072009] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:43.933 [2024-07-25 15:24:36.072702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.933 [2024-07-25 15:24:36.072719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:43.933 [2024-07-25 15:24:36.072727] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:43.933 [2024-07-25 15:24:36.072946] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:43.933 [2024-07-25 15:24:36.073165] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:43.933 [2024-07-25 15:24:36.073174] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:43.933 [2024-07-25 15:24:36.073181] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:43.933 [2024-07-25 15:24:36.076729] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:43.934 [2024-07-25 15:24:36.085925] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:43.934 [2024-07-25 15:24:36.086716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.934 [2024-07-25 15:24:36.086754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:43.934 [2024-07-25 15:24:36.086764] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:43.934 [2024-07-25 15:24:36.087004] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:43.934 [2024-07-25 15:24:36.087237] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:43.934 [2024-07-25 15:24:36.087256] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:43.934 [2024-07-25 15:24:36.087264] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:43.934 [2024-07-25 15:24:36.090819] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:43.934 [2024-07-25 15:24:36.099829] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:43.934 [2024-07-25 15:24:36.100631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.934 [2024-07-25 15:24:36.100669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:43.934 [2024-07-25 15:24:36.100680] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:43.934 [2024-07-25 15:24:36.100919] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:43.934 [2024-07-25 15:24:36.101142] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:43.934 [2024-07-25 15:24:36.101151] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:43.934 [2024-07-25 15:24:36.101159] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:43.934 [2024-07-25 15:24:36.104718] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:43.934 [2024-07-25 15:24:36.113724] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:43.934 [2024-07-25 15:24:36.114552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.934 [2024-07-25 15:24:36.114590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:43.934 [2024-07-25 15:24:36.114601] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:43.934 [2024-07-25 15:24:36.114845] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:43.934 [2024-07-25 15:24:36.115069] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:43.934 [2024-07-25 15:24:36.115078] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:43.934 [2024-07-25 15:24:36.115086] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:43.934 [2024-07-25 15:24:36.118645] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:44.196 [2024-07-25 15:24:36.127641] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.196 [2024-07-25 15:24:36.128374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.196 [2024-07-25 15:24:36.128394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:44.196 [2024-07-25 15:24:36.128402] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:44.196 [2024-07-25 15:24:36.128623] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:44.196 [2024-07-25 15:24:36.128843] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.196 [2024-07-25 15:24:36.128852] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.196 [2024-07-25 15:24:36.128859] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.196 [2024-07-25 15:24:36.132406] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:44.196 [2024-07-25 15:24:36.141607] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.196 [2024-07-25 15:24:36.142310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.196 [2024-07-25 15:24:36.142327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:44.196 [2024-07-25 15:24:36.142334] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:44.196 [2024-07-25 15:24:36.142553] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:44.196 [2024-07-25 15:24:36.142773] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.196 [2024-07-25 15:24:36.142781] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.196 [2024-07-25 15:24:36.142789] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.196 [2024-07-25 15:24:36.146336] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:44.196 [2024-07-25 15:24:36.155537] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.196 [2024-07-25 15:24:36.156224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.196 [2024-07-25 15:24:36.156242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:44.196 [2024-07-25 15:24:36.156250] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:44.196 [2024-07-25 15:24:36.156471] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:44.196 [2024-07-25 15:24:36.156690] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.196 [2024-07-25 15:24:36.156700] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.196 [2024-07-25 15:24:36.156711] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.196 [2024-07-25 15:24:36.160260] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:44.196 [2024-07-25 15:24:36.169469] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.196 [2024-07-25 15:24:36.170253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.196 [2024-07-25 15:24:36.170291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:44.196 [2024-07-25 15:24:36.170304] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:44.196 [2024-07-25 15:24:36.170545] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:44.196 [2024-07-25 15:24:36.170768] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.196 [2024-07-25 15:24:36.170777] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.196 [2024-07-25 15:24:36.170785] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.197 [2024-07-25 15:24:36.174348] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:44.197 [2024-07-25 15:24:36.183350] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.197 [2024-07-25 15:24:36.184002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.197 [2024-07-25 15:24:36.184040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:44.197 [2024-07-25 15:24:36.184051] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:44.197 [2024-07-25 15:24:36.184298] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:44.197 [2024-07-25 15:24:36.184522] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.197 [2024-07-25 15:24:36.184531] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.197 [2024-07-25 15:24:36.184539] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.197 [2024-07-25 15:24:36.188088] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:44.197 [2024-07-25 15:24:36.197298] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.197 [2024-07-25 15:24:36.198043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.197 [2024-07-25 15:24:36.198081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:44.197 [2024-07-25 15:24:36.198092] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:44.197 [2024-07-25 15:24:36.198339] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:44.197 [2024-07-25 15:24:36.198563] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.197 [2024-07-25 15:24:36.198573] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.197 [2024-07-25 15:24:36.198580] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.197 [2024-07-25 15:24:36.202132] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:44.197 [2024-07-25 15:24:36.211134] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.197 [2024-07-25 15:24:36.211841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.197 [2024-07-25 15:24:36.211865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:44.197 [2024-07-25 15:24:36.211873] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:44.197 [2024-07-25 15:24:36.212093] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:44.197 [2024-07-25 15:24:36.212318] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.197 [2024-07-25 15:24:36.212328] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.197 [2024-07-25 15:24:36.212335] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.197 [2024-07-25 15:24:36.215881] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:44.197 [2024-07-25 15:24:36.225083] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:44.197 [2024-07-25 15:24:36.225773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:44.197 [2024-07-25 15:24:36.225790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420
00:28:44.197 [2024-07-25 15:24:36.225798] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set
00:28:44.197 [2024-07-25 15:24:36.226017] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor
00:28:44.197 [2024-07-25 15:24:36.226241] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:44.197 [2024-07-25 15:24:36.226251] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:44.197 [2024-07-25 15:24:36.226258] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:44.197 [2024-07-25 15:24:36.229804] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:44.197 [2024-07-25 15:24:36.238999] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:44.197 [2024-07-25 15:24:36.239853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:44.197 [2024-07-25 15:24:36.239892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420
00:28:44.197 [2024-07-25 15:24:36.239903] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set
00:28:44.197 [2024-07-25 15:24:36.240144] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor
00:28:44.197 [2024-07-25 15:24:36.240377] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:44.197 [2024-07-25 15:24:36.240387] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:44.197 [2024-07-25 15:24:36.240395] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:44.197 [2024-07-25 15:24:36.243947] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:44.197 [2024-07-25 15:24:36.252947] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:44.197 [2024-07-25 15:24:36.253775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:44.197 [2024-07-25 15:24:36.253813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420
00:28:44.197 [2024-07-25 15:24:36.253824] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set
00:28:44.197 [2024-07-25 15:24:36.254063] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor
00:28:44.197 [2024-07-25 15:24:36.254299] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:44.197 [2024-07-25 15:24:36.254309] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:44.197 [2024-07-25 15:24:36.254317] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:44.197 [2024-07-25 15:24:36.257868] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:44.197 [2024-07-25 15:24:36.266867] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:44.197 [2024-07-25 15:24:36.267585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:44.197 [2024-07-25 15:24:36.267623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420
00:28:44.197 [2024-07-25 15:24:36.267634] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set
00:28:44.197 [2024-07-25 15:24:36.267873] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor
00:28:44.197 [2024-07-25 15:24:36.268096] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:44.197 [2024-07-25 15:24:36.268105] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:44.197 [2024-07-25 15:24:36.268113] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:44.197 [2024-07-25 15:24:36.271681] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:44.197 [2024-07-25 15:24:36.280772] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:44.197 [2024-07-25 15:24:36.281584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:44.197 [2024-07-25 15:24:36.281622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420
00:28:44.197 [2024-07-25 15:24:36.281632] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set
00:28:44.197 [2024-07-25 15:24:36.281872] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor
00:28:44.197 [2024-07-25 15:24:36.282095] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:44.197 [2024-07-25 15:24:36.282104] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:44.197 [2024-07-25 15:24:36.282112] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:44.197 [2024-07-25 15:24:36.285671] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:44.197 [2024-07-25 15:24:36.294665] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:44.197 [2024-07-25 15:24:36.295463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:44.197 [2024-07-25 15:24:36.295509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420
00:28:44.197 [2024-07-25 15:24:36.295519] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set
00:28:44.197 [2024-07-25 15:24:36.295758] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor
00:28:44.197 [2024-07-25 15:24:36.295981] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:44.197 [2024-07-25 15:24:36.295991] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:44.197 [2024-07-25 15:24:36.295999] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:44.197 [2024-07-25 15:24:36.299562] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:44.197 [2024-07-25 15:24:36.308561] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:44.197 [2024-07-25 15:24:36.309219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:44.197 [2024-07-25 15:24:36.309257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420
00:28:44.198 [2024-07-25 15:24:36.309269] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set
00:28:44.198 [2024-07-25 15:24:36.309512] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor
00:28:44.198 [2024-07-25 15:24:36.309735] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:44.198 [2024-07-25 15:24:36.309745] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:44.198 [2024-07-25 15:24:36.309753] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:44.198 [2024-07-25 15:24:36.313322] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:44.198 [2024-07-25 15:24:36.322529] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:44.198 [2024-07-25 15:24:36.323209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:44.198 [2024-07-25 15:24:36.323228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420
00:28:44.198 [2024-07-25 15:24:36.323236] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set
00:28:44.198 [2024-07-25 15:24:36.323457] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor
00:28:44.198 [2024-07-25 15:24:36.323677] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:44.198 [2024-07-25 15:24:36.323685] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:44.198 [2024-07-25 15:24:36.323692] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:44.198 [2024-07-25 15:24:36.327241] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:44.198 [2024-07-25 15:24:36.336440] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:44.198 [2024-07-25 15:24:36.337138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:44.198 [2024-07-25 15:24:36.337154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420
00:28:44.198 [2024-07-25 15:24:36.337162] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set
00:28:44.198 [2024-07-25 15:24:36.337386] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor
00:28:44.198 [2024-07-25 15:24:36.337606] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:44.198 [2024-07-25 15:24:36.337615] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:44.198 [2024-07-25 15:24:36.337622] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:44.198 [2024-07-25 15:24:36.341164] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:44.198 [2024-07-25 15:24:36.350370] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:44.198 [2024-07-25 15:24:36.351079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:44.198 [2024-07-25 15:24:36.351094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420
00:28:44.198 [2024-07-25 15:24:36.351106] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set
00:28:44.198 [2024-07-25 15:24:36.351330] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor
00:28:44.198 [2024-07-25 15:24:36.351551] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:44.198 [2024-07-25 15:24:36.351559] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:44.198 [2024-07-25 15:24:36.351567] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:44.198 [2024-07-25 15:24:36.355109] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:44.198 [2024-07-25 15:24:36.364314] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:44.198 [2024-07-25 15:24:36.365093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:44.198 [2024-07-25 15:24:36.365131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420
00:28:44.198 [2024-07-25 15:24:36.365142] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set
00:28:44.198 [2024-07-25 15:24:36.365392] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor
00:28:44.198 [2024-07-25 15:24:36.365616] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:44.198 [2024-07-25 15:24:36.365625] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:44.198 [2024-07-25 15:24:36.365633] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:44.198 [2024-07-25 15:24:36.369193] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:44.198 [2024-07-25 15:24:36.378191] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:44.198 [2024-07-25 15:24:36.378651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:44.198 [2024-07-25 15:24:36.378674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420
00:28:44.198 [2024-07-25 15:24:36.378683] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set
00:28:44.198 [2024-07-25 15:24:36.378906] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor
00:28:44.198 [2024-07-25 15:24:36.379127] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:44.198 [2024-07-25 15:24:36.379136] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:44.198 [2024-07-25 15:24:36.379144] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:44.198 [2024-07-25 15:24:36.382700] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:44.460 [2024-07-25 15:24:36.392108] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:44.460 [2024-07-25 15:24:36.392823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:44.460 [2024-07-25 15:24:36.392839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420
00:28:44.460 [2024-07-25 15:24:36.392847] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set
00:28:44.460 [2024-07-25 15:24:36.393066] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor
00:28:44.460 [2024-07-25 15:24:36.393292] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:44.460 [2024-07-25 15:24:36.393310] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:44.460 [2024-07-25 15:24:36.393318] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:44.460 [2024-07-25 15:24:36.396862] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:44.460 [2024-07-25 15:24:36.406058] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:44.460 [2024-07-25 15:24:36.406784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:44.460 [2024-07-25 15:24:36.406801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420
00:28:44.460 [2024-07-25 15:24:36.406809] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set
00:28:44.460 [2024-07-25 15:24:36.407029] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor
00:28:44.460 [2024-07-25 15:24:36.407255] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:44.460 [2024-07-25 15:24:36.407265] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:44.460 [2024-07-25 15:24:36.407272] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:44.460 [2024-07-25 15:24:36.410813] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:44.461 [2024-07-25 15:24:36.420014] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:44.461 [2024-07-25 15:24:36.420587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:44.461 [2024-07-25 15:24:36.420605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420
00:28:44.461 [2024-07-25 15:24:36.420613] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set
00:28:44.461 [2024-07-25 15:24:36.420832] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor
00:28:44.461 [2024-07-25 15:24:36.421052] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:44.461 [2024-07-25 15:24:36.421061] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:44.461 [2024-07-25 15:24:36.421068] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:44.461 [2024-07-25 15:24:36.424618] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:44.461 [2024-07-25 15:24:36.433814] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:44.461 [2024-07-25 15:24:36.434570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:44.461 [2024-07-25 15:24:36.434608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420
00:28:44.461 [2024-07-25 15:24:36.434618] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set
00:28:44.461 [2024-07-25 15:24:36.434858] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor
00:28:44.461 [2024-07-25 15:24:36.435081] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:44.461 [2024-07-25 15:24:36.435090] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:44.461 [2024-07-25 15:24:36.435098] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:44.461 [2024-07-25 15:24:36.438658] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:44.461 [2024-07-25 15:24:36.447662] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:44.461 [2024-07-25 15:24:36.448471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:44.461 [2024-07-25 15:24:36.448509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420
00:28:44.461 [2024-07-25 15:24:36.448520] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set
00:28:44.461 [2024-07-25 15:24:36.448760] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor
00:28:44.461 [2024-07-25 15:24:36.448983] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:44.461 [2024-07-25 15:24:36.448992] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:44.461 [2024-07-25 15:24:36.449000] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:44.461 [2024-07-25 15:24:36.452558] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:44.461 [2024-07-25 15:24:36.461551] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:44.461 [2024-07-25 15:24:36.462303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:44.461 [2024-07-25 15:24:36.462340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420
00:28:44.461 [2024-07-25 15:24:36.462351] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set
00:28:44.461 [2024-07-25 15:24:36.462590] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor
00:28:44.461 [2024-07-25 15:24:36.462813] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:44.461 [2024-07-25 15:24:36.462822] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:44.461 [2024-07-25 15:24:36.462830] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:44.461 [2024-07-25 15:24:36.466389] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:44.461 [2024-07-25 15:24:36.475393] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:44.461 [2024-07-25 15:24:36.476001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:44.461 [2024-07-25 15:24:36.476038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420
00:28:44.461 [2024-07-25 15:24:36.476049] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set
00:28:44.461 [2024-07-25 15:24:36.476298] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor
00:28:44.461 [2024-07-25 15:24:36.476522] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:44.461 [2024-07-25 15:24:36.476532] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:44.461 [2024-07-25 15:24:36.476540] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:44.461 [2024-07-25 15:24:36.480090] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:44.461 [2024-07-25 15:24:36.489298] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:44.461 [2024-07-25 15:24:36.490058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:44.461 [2024-07-25 15:24:36.490096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420
00:28:44.461 [2024-07-25 15:24:36.490108] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set
00:28:44.461 [2024-07-25 15:24:36.490360] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor
00:28:44.461 [2024-07-25 15:24:36.490584] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:44.461 [2024-07-25 15:24:36.490594] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:44.461 [2024-07-25 15:24:36.490601] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:44.461 [2024-07-25 15:24:36.494154] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:44.461 [2024-07-25 15:24:36.503153] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:44.461 [2024-07-25 15:24:36.503981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:44.461 [2024-07-25 15:24:36.504019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420
00:28:44.461 [2024-07-25 15:24:36.504030] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set
00:28:44.461 [2024-07-25 15:24:36.504277] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor
00:28:44.461 [2024-07-25 15:24:36.504500] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:44.461 [2024-07-25 15:24:36.504510] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:44.461 [2024-07-25 15:24:36.504517] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:44.461 [2024-07-25 15:24:36.508065] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:44.461 [2024-07-25 15:24:36.517064] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:44.461 [2024-07-25 15:24:36.517861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:44.461 [2024-07-25 15:24:36.517900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420
00:28:44.461 [2024-07-25 15:24:36.517910] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set
00:28:44.461 [2024-07-25 15:24:36.518150] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor
00:28:44.461 [2024-07-25 15:24:36.518383] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:44.461 [2024-07-25 15:24:36.518393] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:44.461 [2024-07-25 15:24:36.518400] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:44.461 [2024-07-25 15:24:36.521950] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:44.461 [2024-07-25 15:24:36.530944] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:44.461 [2024-07-25 15:24:36.531719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:44.461 [2024-07-25 15:24:36.531757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420
00:28:44.461 [2024-07-25 15:24:36.531767] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set
00:28:44.461 [2024-07-25 15:24:36.532006] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor
00:28:44.461 [2024-07-25 15:24:36.532239] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:44.461 [2024-07-25 15:24:36.532249] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:44.461 [2024-07-25 15:24:36.532261] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:44.461 [2024-07-25 15:24:36.535812] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:44.461 [2024-07-25 15:24:36.544806] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:44.461 [2024-07-25 15:24:36.545371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:44.461 [2024-07-25 15:24:36.545408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420
00:28:44.461 [2024-07-25 15:24:36.545421] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set
00:28:44.461 [2024-07-25 15:24:36.545662] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor
00:28:44.461 [2024-07-25 15:24:36.545886] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:44.462 [2024-07-25 15:24:36.545895] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:44.462 [2024-07-25 15:24:36.545903] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:44.462 [2024-07-25 15:24:36.549464] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:44.462 [2024-07-25 15:24:36.558666] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:44.462 [2024-07-25 15:24:36.559463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:44.462 [2024-07-25 15:24:36.559501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420
00:28:44.462 [2024-07-25 15:24:36.559512] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set
00:28:44.462 [2024-07-25 15:24:36.559752] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor
00:28:44.462 [2024-07-25 15:24:36.559974] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:44.462 [2024-07-25 15:24:36.559984] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:44.462 [2024-07-25 15:24:36.559991] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:44.462 [2024-07-25 15:24:36.563553] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:44.462 [2024-07-25 15:24:36.572563] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:44.462 [2024-07-25 15:24:36.573427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:44.462 [2024-07-25 15:24:36.573466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420
00:28:44.462 [2024-07-25 15:24:36.573476] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set
00:28:44.462 [2024-07-25 15:24:36.573716] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor
00:28:44.462 [2024-07-25 15:24:36.573939] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:44.462 [2024-07-25 15:24:36.573949] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:44.462 [2024-07-25 15:24:36.573957] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:44.462 [2024-07-25 15:24:36.577515] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:44.462 [2024-07-25 15:24:36.586507] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:44.462 [2024-07-25 15:24:36.587294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:44.462 [2024-07-25 15:24:36.587332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420
00:28:44.462 [2024-07-25 15:24:36.587344] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set
00:28:44.462 [2024-07-25 15:24:36.587586] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor
00:28:44.462 [2024-07-25 15:24:36.587809] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:44.462 [2024-07-25 15:24:36.587818] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:44.462 [2024-07-25 15:24:36.587826] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:44.462 [2024-07-25 15:24:36.591384] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:44.462 [2024-07-25 15:24:36.600386] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:44.462 [2024-07-25 15:24:36.601230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:44.462 [2024-07-25 15:24:36.601267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420
00:28:44.462 [2024-07-25 15:24:36.601279] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set
00:28:44.462 [2024-07-25 15:24:36.601520] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor
00:28:44.462 [2024-07-25 15:24:36.601743] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:44.462 [2024-07-25 15:24:36.601752] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:44.462 [2024-07-25 15:24:36.601760] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:44.462 [2024-07-25 15:24:36.605320] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:44.462 [2024-07-25 15:24:36.614325] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:44.462 [2024-07-25 15:24:36.615105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:44.462 [2024-07-25 15:24:36.615142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420
00:28:44.462 [2024-07-25 15:24:36.615155] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set
00:28:44.462 [2024-07-25 15:24:36.615404] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor
00:28:44.462 [2024-07-25 15:24:36.615628] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:44.462 [2024-07-25 15:24:36.615637] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:44.462 [2024-07-25 15:24:36.615645] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:44.462 [2024-07-25 15:24:36.619193] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:44.462 [2024-07-25 15:24:36.628186] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:44.462 [2024-07-25 15:24:36.628890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:44.462 [2024-07-25 15:24:36.628910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420
00:28:44.462 [2024-07-25 15:24:36.628918] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set
00:28:44.462 [2024-07-25 15:24:36.629143] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor
00:28:44.462 [2024-07-25 15:24:36.629369] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:44.462 [2024-07-25 15:24:36.629379] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:44.462 [2024-07-25 15:24:36.629386] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:44.462 [2024-07-25 15:24:36.632930] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:44.462 [2024-07-25 15:24:36.642130] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:44.462 [2024-07-25 15:24:36.642756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:44.462 [2024-07-25 15:24:36.642795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420
00:28:44.462 [2024-07-25 15:24:36.642806] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set
00:28:44.462 [2024-07-25 15:24:36.643045] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor
00:28:44.462 [2024-07-25 15:24:36.643277] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:44.462 [2024-07-25 15:24:36.643288] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:44.462 [2024-07-25 15:24:36.643295] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:44.462 [2024-07-25 15:24:36.646846] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:44.725 [2024-07-25 15:24:36.656047] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:44.725 [2024-07-25 15:24:36.656827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:44.725 [2024-07-25 15:24:36.656864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420
00:28:44.725 [2024-07-25 15:24:36.656875] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set
00:28:44.725 [2024-07-25 15:24:36.657115] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor
00:28:44.725 [2024-07-25 15:24:36.657348] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:44.725 [2024-07-25 15:24:36.657358] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:44.725 [2024-07-25 15:24:36.657366] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:44.725 [2024-07-25 15:24:36.660919] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:44.725 [2024-07-25 15:24:36.669922] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:44.725 [2024-07-25 15:24:36.670695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:44.725 [2024-07-25 15:24:36.670733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420
00:28:44.725 [2024-07-25 15:24:36.670745] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set
00:28:44.725 [2024-07-25 15:24:36.670984] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor
00:28:44.725 [2024-07-25 15:24:36.671216] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:44.725 [2024-07-25 15:24:36.671226] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:44.725 [2024-07-25 15:24:36.671239] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:44.725 [2024-07-25 15:24:36.674789] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:44.725 [2024-07-25 15:24:36.683784] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:44.725 [2024-07-25 15:24:36.684563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:44.725 [2024-07-25 15:24:36.684601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420
00:28:44.725 [2024-07-25 15:24:36.684612] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set
00:28:44.725 [2024-07-25 15:24:36.684851] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor
00:28:44.725 [2024-07-25 15:24:36.685074] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:44.725 [2024-07-25 15:24:36.685084] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:44.725 [2024-07-25 15:24:36.685091] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:44.725 [2024-07-25 15:24:36.688650] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:44.725 [2024-07-25 15:24:36.697649] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:44.725 [2024-07-25 15:24:36.698453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:44.725 [2024-07-25 15:24:36.698491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420
00:28:44.725 [2024-07-25 15:24:36.698502] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set
00:28:44.725 [2024-07-25 15:24:36.698741] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor
00:28:44.725 [2024-07-25 15:24:36.698964] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:44.725 [2024-07-25 15:24:36.698973] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:44.725 [2024-07-25 15:24:36.698981] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:44.725 [2024-07-25 15:24:36.702542] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:44.725 [2024-07-25 15:24:36.711538] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:44.725 [2024-07-25 15:24:36.712299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:44.725 [2024-07-25 15:24:36.712337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420
00:28:44.725 [2024-07-25 15:24:36.712348] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set
00:28:44.725 [2024-07-25 15:24:36.712587] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor
00:28:44.725 [2024-07-25 15:24:36.712819] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:44.725 [2024-07-25 15:24:36.712830] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:44.725 [2024-07-25 15:24:36.712837] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:44.725 [2024-07-25 15:24:36.716395] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:44.725 [2024-07-25 15:24:36.725389] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:44.725 [2024-07-25 15:24:36.726217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:44.725 [2024-07-25 15:24:36.726260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420
00:28:44.725 [2024-07-25 15:24:36.726272] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set
00:28:44.725 [2024-07-25 15:24:36.726512] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor
00:28:44.725 [2024-07-25 15:24:36.726735] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:44.725 [2024-07-25 15:24:36.726744] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:44.725 [2024-07-25 15:24:36.726752] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:44.725 [2024-07-25 15:24:36.730311] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:44.725 [2024-07-25 15:24:36.739303] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:44.725 [2024-07-25 15:24:36.740115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:44.725 [2024-07-25 15:24:36.740153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420
00:28:44.725 [2024-07-25 15:24:36.740164] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set
00:28:44.725 [2024-07-25 15:24:36.740412] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor
00:28:44.725 [2024-07-25 15:24:36.740636] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:44.725 [2024-07-25 15:24:36.740646] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:44.725 [2024-07-25 15:24:36.740654] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:44.725 [2024-07-25 15:24:36.744208] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:44.725 [2024-07-25 15:24:36.753206] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:44.725 [2024-07-25 15:24:36.753905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:44.725 [2024-07-25 15:24:36.753942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420
00:28:44.725 [2024-07-25 15:24:36.753953] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set
00:28:44.725 [2024-07-25 15:24:36.754193] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor
00:28:44.725 [2024-07-25 15:24:36.754424] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:44.725 [2024-07-25 15:24:36.754434] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:44.725 [2024-07-25 15:24:36.754442] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:44.726 [2024-07-25 15:24:36.757992] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:44.726 [2024-07-25 15:24:36.766992] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:44.726 [2024-07-25 15:24:36.767768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:44.726 [2024-07-25 15:24:36.767805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420
00:28:44.726 [2024-07-25 15:24:36.767816] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set
00:28:44.726 [2024-07-25 15:24:36.768056] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor
00:28:44.726 [2024-07-25 15:24:36.768292] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:44.726 [2024-07-25 15:24:36.768303] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:44.726 [2024-07-25 15:24:36.768311] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:44.726 [2024-07-25 15:24:36.771873] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:44.726 [2024-07-25 15:24:36.780866] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:44.726 [2024-07-25 15:24:36.781645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:44.726 [2024-07-25 15:24:36.781683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420
00:28:44.726 [2024-07-25 15:24:36.781694] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set
00:28:44.726 [2024-07-25 15:24:36.781932] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor
00:28:44.726 [2024-07-25 15:24:36.782155] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:44.726 [2024-07-25 15:24:36.782165] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:44.726 [2024-07-25 15:24:36.782173] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:44.726 [2024-07-25 15:24:36.785732] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:44.726 [2024-07-25 15:24:36.794726] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:44.726 [2024-07-25 15:24:36.795535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:44.726 [2024-07-25 15:24:36.795573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420
00:28:44.726 [2024-07-25 15:24:36.795584] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set
00:28:44.726 [2024-07-25 15:24:36.795824] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor
00:28:44.726 [2024-07-25 15:24:36.796047] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:44.726 [2024-07-25 15:24:36.796056] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:44.726 [2024-07-25 15:24:36.796064] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:44.726 [2024-07-25 15:24:36.799623] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:44.726 [2024-07-25 15:24:36.808622] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:44.726 [2024-07-25 15:24:36.809435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:44.726 [2024-07-25 15:24:36.809473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420
00:28:44.726 [2024-07-25 15:24:36.809484] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set
00:28:44.726 [2024-07-25 15:24:36.809723] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor
00:28:44.726 [2024-07-25 15:24:36.809947] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:44.726 [2024-07-25 15:24:36.809956] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:44.726 [2024-07-25 15:24:36.809963] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:44.726 [2024-07-25 15:24:36.813537] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:44.726 [2024-07-25 15:24:36.822421] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:44.726 [2024-07-25 15:24:36.823218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:44.726 [2024-07-25 15:24:36.823257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420
00:28:44.726 [2024-07-25 15:24:36.823269] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set
00:28:44.726 [2024-07-25 15:24:36.823511] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor
00:28:44.726 [2024-07-25 15:24:36.823735] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:44.726 [2024-07-25 15:24:36.823744] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:44.726 [2024-07-25 15:24:36.823752] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:44.726 [2024-07-25 15:24:36.827315] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:44.726 [2024-07-25 15:24:36.836309] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:44.726 [2024-07-25 15:24:36.837112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:44.726 [2024-07-25 15:24:36.837150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420
00:28:44.726 [2024-07-25 15:24:36.837160] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set
00:28:44.726 [2024-07-25 15:24:36.837409] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor
00:28:44.726 [2024-07-25 15:24:36.837632] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:44.726 [2024-07-25 15:24:36.837642] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:44.726 [2024-07-25 15:24:36.837649] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:44.726 [2024-07-25 15:24:36.841202] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:44.726 [2024-07-25 15:24:36.850191] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:44.726 [2024-07-25 15:24:36.851007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:44.726 [2024-07-25 15:24:36.851045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420
00:28:44.726 [2024-07-25 15:24:36.851056] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set
00:28:44.726 [2024-07-25 15:24:36.851305] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor
00:28:44.726 [2024-07-25 15:24:36.851529] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:44.726 [2024-07-25 15:24:36.851538] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:44.726 [2024-07-25 15:24:36.851546] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:44.726 [2024-07-25 15:24:36.855093] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:44.726 [2024-07-25 15:24:36.864092] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:44.726 [2024-07-25 15:24:36.864907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:44.726 [2024-07-25 15:24:36.864945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420
00:28:44.726 [2024-07-25 15:24:36.864961] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set
00:28:44.726 [2024-07-25 15:24:36.865210] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor
00:28:44.726 [2024-07-25 15:24:36.865435] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:44.726 [2024-07-25 15:24:36.865444] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:44.726 [2024-07-25 15:24:36.865452] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:44.726 [2024-07-25 15:24:36.869004] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:44.726 [2024-07-25 15:24:36.878009] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:44.726 [2024-07-25 15:24:36.878789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:44.726 [2024-07-25 15:24:36.878827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420
00:28:44.726 [2024-07-25 15:24:36.878838] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set
00:28:44.726 [2024-07-25 15:24:36.879077] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor
00:28:44.726 [2024-07-25 15:24:36.879310] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:44.726 [2024-07-25 15:24:36.879320] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:44.726 [2024-07-25 15:24:36.879328] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:44.726 [2024-07-25 15:24:36.882879] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:44.726 [2024-07-25 15:24:36.891870] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.726 [2024-07-25 15:24:36.892657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.726 [2024-07-25 15:24:36.892695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:44.726 [2024-07-25 15:24:36.892706] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:44.727 [2024-07-25 15:24:36.892945] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:44.727 [2024-07-25 15:24:36.893168] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.727 [2024-07-25 15:24:36.893178] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.727 [2024-07-25 15:24:36.893185] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.727 [2024-07-25 15:24:36.896746] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:44.727 [2024-07-25 15:24:36.905743] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.727 [2024-07-25 15:24:36.906529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.727 [2024-07-25 15:24:36.906567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:44.727 [2024-07-25 15:24:36.906578] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:44.727 [2024-07-25 15:24:36.906817] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:44.727 [2024-07-25 15:24:36.907040] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.727 [2024-07-25 15:24:36.907054] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.727 [2024-07-25 15:24:36.907062] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.727 [2024-07-25 15:24:36.910620] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:44.989 [2024-07-25 15:24:36.919626] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.989 [2024-07-25 15:24:36.920428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.989 [2024-07-25 15:24:36.920467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:44.989 [2024-07-25 15:24:36.920477] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:44.989 [2024-07-25 15:24:36.920716] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:44.989 [2024-07-25 15:24:36.920939] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.989 [2024-07-25 15:24:36.920949] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.989 [2024-07-25 15:24:36.920956] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.989 [2024-07-25 15:24:36.924517] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:44.989 [2024-07-25 15:24:36.933513] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.989 [2024-07-25 15:24:36.934299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.989 [2024-07-25 15:24:36.934337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:44.989 [2024-07-25 15:24:36.934349] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:44.989 [2024-07-25 15:24:36.934590] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:44.989 [2024-07-25 15:24:36.934813] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.989 [2024-07-25 15:24:36.934822] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.989 [2024-07-25 15:24:36.934829] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.989 [2024-07-25 15:24:36.938388] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:44.989 [2024-07-25 15:24:36.947429] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.989 [2024-07-25 15:24:36.948244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.989 [2024-07-25 15:24:36.948282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:44.989 [2024-07-25 15:24:36.948295] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:44.989 [2024-07-25 15:24:36.948535] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:44.989 [2024-07-25 15:24:36.948760] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.989 [2024-07-25 15:24:36.948770] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.989 [2024-07-25 15:24:36.948778] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.989 [2024-07-25 15:24:36.952337] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:44.989 [2024-07-25 15:24:36.961335] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.989 [2024-07-25 15:24:36.962149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.989 [2024-07-25 15:24:36.962187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:44.989 [2024-07-25 15:24:36.962199] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:44.989 [2024-07-25 15:24:36.962449] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:44.989 [2024-07-25 15:24:36.962672] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.989 [2024-07-25 15:24:36.962681] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.989 [2024-07-25 15:24:36.962689] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.989 [2024-07-25 15:24:36.966241] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:44.990 [2024-07-25 15:24:36.975242] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.990 [2024-07-25 15:24:36.976052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.990 [2024-07-25 15:24:36.976090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:44.990 [2024-07-25 15:24:36.976101] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:44.990 [2024-07-25 15:24:36.976350] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:44.990 [2024-07-25 15:24:36.976574] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.990 [2024-07-25 15:24:36.976583] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.990 [2024-07-25 15:24:36.976591] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.990 [2024-07-25 15:24:36.980350] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:44.990 [2024-07-25 15:24:36.989146] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.990 [2024-07-25 15:24:36.989868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.990 [2024-07-25 15:24:36.989888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:44.990 [2024-07-25 15:24:36.989896] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:44.990 [2024-07-25 15:24:36.990117] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:44.990 [2024-07-25 15:24:36.990342] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.990 [2024-07-25 15:24:36.990351] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.990 [2024-07-25 15:24:36.990358] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.990 [2024-07-25 15:24:36.993903] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:44.990 [2024-07-25 15:24:37.003102] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.990 [2024-07-25 15:24:37.003852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.990 [2024-07-25 15:24:37.003891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:44.990 [2024-07-25 15:24:37.003901] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:44.990 [2024-07-25 15:24:37.004145] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:44.990 [2024-07-25 15:24:37.004377] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.990 [2024-07-25 15:24:37.004387] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.990 [2024-07-25 15:24:37.004395] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.990 [2024-07-25 15:24:37.007944] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:44.990 [2024-07-25 15:24:37.016989] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.990 [2024-07-25 15:24:37.017769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.990 [2024-07-25 15:24:37.017807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:44.990 [2024-07-25 15:24:37.017817] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:44.990 [2024-07-25 15:24:37.018056] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:44.990 [2024-07-25 15:24:37.018289] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.990 [2024-07-25 15:24:37.018299] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.990 [2024-07-25 15:24:37.018307] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.990 [2024-07-25 15:24:37.021857] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:44.990 [2024-07-25 15:24:37.030849] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.990 [2024-07-25 15:24:37.031641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.990 [2024-07-25 15:24:37.031680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:44.990 [2024-07-25 15:24:37.031690] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:44.990 [2024-07-25 15:24:37.031929] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:44.990 [2024-07-25 15:24:37.032152] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.990 [2024-07-25 15:24:37.032162] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.990 [2024-07-25 15:24:37.032169] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.990 [2024-07-25 15:24:37.035729] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:44.990 [2024-07-25 15:24:37.044726] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.990 [2024-07-25 15:24:37.045322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.990 [2024-07-25 15:24:37.045359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:44.990 [2024-07-25 15:24:37.045372] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:44.990 [2024-07-25 15:24:37.045613] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:44.990 [2024-07-25 15:24:37.045836] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.990 [2024-07-25 15:24:37.045846] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.990 [2024-07-25 15:24:37.045858] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.990 [2024-07-25 15:24:37.049422] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:44.990 [2024-07-25 15:24:37.058622] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.990 [2024-07-25 15:24:37.059421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.990 [2024-07-25 15:24:37.059459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:44.990 [2024-07-25 15:24:37.059469] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:44.990 [2024-07-25 15:24:37.059709] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:44.990 [2024-07-25 15:24:37.059932] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.990 [2024-07-25 15:24:37.059941] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.990 [2024-07-25 15:24:37.059949] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.990 [2024-07-25 15:24:37.063508] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:44.990 [2024-07-25 15:24:37.072517] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.990 [2024-07-25 15:24:37.073302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.990 [2024-07-25 15:24:37.073340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:44.990 [2024-07-25 15:24:37.073353] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:44.990 [2024-07-25 15:24:37.073593] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:44.990 [2024-07-25 15:24:37.073817] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.990 [2024-07-25 15:24:37.073827] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.990 [2024-07-25 15:24:37.073834] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.990 [2024-07-25 15:24:37.077393] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:44.990 [2024-07-25 15:24:37.086390] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.990 [2024-07-25 15:24:37.087167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.990 [2024-07-25 15:24:37.087212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:44.990 [2024-07-25 15:24:37.087223] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:44.990 [2024-07-25 15:24:37.087463] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:44.990 [2024-07-25 15:24:37.087686] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.990 [2024-07-25 15:24:37.087695] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.990 [2024-07-25 15:24:37.087703] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.990 [2024-07-25 15:24:37.091254] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:44.990 [2024-07-25 15:24:37.100247] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.990 [2024-07-25 15:24:37.101039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.990 [2024-07-25 15:24:37.101077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:44.990 [2024-07-25 15:24:37.101088] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:44.991 [2024-07-25 15:24:37.101334] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:44.991 [2024-07-25 15:24:37.101558] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.991 [2024-07-25 15:24:37.101568] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.991 [2024-07-25 15:24:37.101576] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.991 [2024-07-25 15:24:37.105127] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:44.991 [2024-07-25 15:24:37.114133] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.991 [2024-07-25 15:24:37.114952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.991 [2024-07-25 15:24:37.114990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:44.991 [2024-07-25 15:24:37.115001] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:44.991 [2024-07-25 15:24:37.115249] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:44.991 [2024-07-25 15:24:37.115473] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.991 [2024-07-25 15:24:37.115482] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.991 [2024-07-25 15:24:37.115489] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.991 [2024-07-25 15:24:37.119039] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:44.991 [2024-07-25 15:24:37.128033] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.991 [2024-07-25 15:24:37.128876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.991 [2024-07-25 15:24:37.128914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:44.991 [2024-07-25 15:24:37.128925] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:44.991 [2024-07-25 15:24:37.129164] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:44.991 [2024-07-25 15:24:37.129397] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.991 [2024-07-25 15:24:37.129408] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.991 [2024-07-25 15:24:37.129415] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.991 [2024-07-25 15:24:37.132966] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:44.991 [2024-07-25 15:24:37.141966] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.991 [2024-07-25 15:24:37.142753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.991 [2024-07-25 15:24:37.142791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:44.991 [2024-07-25 15:24:37.142802] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:44.991 [2024-07-25 15:24:37.143041] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:44.991 [2024-07-25 15:24:37.143280] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.991 [2024-07-25 15:24:37.143291] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.991 [2024-07-25 15:24:37.143300] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.991 [2024-07-25 15:24:37.146849] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:44.991 [2024-07-25 15:24:37.155843] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.991 [2024-07-25 15:24:37.156555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.991 [2024-07-25 15:24:37.156575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:44.991 [2024-07-25 15:24:37.156583] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:44.991 [2024-07-25 15:24:37.156803] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:44.991 [2024-07-25 15:24:37.157023] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.991 [2024-07-25 15:24:37.157032] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.991 [2024-07-25 15:24:37.157040] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.991 [2024-07-25 15:24:37.160589] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:44.991 [2024-07-25 15:24:37.169791] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.991 [2024-07-25 15:24:37.170559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.991 [2024-07-25 15:24:37.170597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:44.991 [2024-07-25 15:24:37.170608] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:44.991 [2024-07-25 15:24:37.170848] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:44.991 [2024-07-25 15:24:37.171070] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.991 [2024-07-25 15:24:37.171080] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.991 [2024-07-25 15:24:37.171088] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.991 [2024-07-25 15:24:37.174660] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:45.254 [2024-07-25 15:24:37.183662] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.254 [2024-07-25 15:24:37.184449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.254 [2024-07-25 15:24:37.184487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:45.254 [2024-07-25 15:24:37.184498] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:45.254 [2024-07-25 15:24:37.184737] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:45.254 [2024-07-25 15:24:37.184960] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.254 [2024-07-25 15:24:37.184970] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.254 [2024-07-25 15:24:37.184977] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.254 [2024-07-25 15:24:37.188544] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:45.254 [2024-07-25 15:24:37.197536] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:45.254 [2024-07-25 15:24:37.198281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.254 [2024-07-25 15:24:37.198319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420
00:28:45.254 [2024-07-25 15:24:37.198330] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set
00:28:45.254 [2024-07-25 15:24:37.198569] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor
00:28:45.254 [2024-07-25 15:24:37.198791] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:45.254 [2024-07-25 15:24:37.198801] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:45.254 [2024-07-25 15:24:37.198809] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:45.254 [2024-07-25 15:24:37.202367] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:45.254 [2024-07-25 15:24:37.211361] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:45.254 [2024-07-25 15:24:37.212164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.254 [2024-07-25 15:24:37.212210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420
00:28:45.254 [2024-07-25 15:24:37.212221] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set
00:28:45.254 [2024-07-25 15:24:37.212460] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor
00:28:45.254 [2024-07-25 15:24:37.212684] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:45.254 [2024-07-25 15:24:37.212693] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:45.254 [2024-07-25 15:24:37.212700] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:45.254 [2024-07-25 15:24:37.216269] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:45.254 [2024-07-25 15:24:37.225274] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:45.254 [2024-07-25 15:24:37.225906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.254 [2024-07-25 15:24:37.225944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420
00:28:45.254 [2024-07-25 15:24:37.225954] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set
00:28:45.254 [2024-07-25 15:24:37.226193] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor
00:28:45.254 [2024-07-25 15:24:37.226427] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:45.254 [2024-07-25 15:24:37.226437] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:45.254 [2024-07-25 15:24:37.226444] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:45.254 [2024-07-25 15:24:37.229999] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:45.255 [2024-07-25 15:24:37.239209] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:45.255 [2024-07-25 15:24:37.240011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.255 [2024-07-25 15:24:37.240050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420
00:28:45.255 [2024-07-25 15:24:37.240066] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set
00:28:45.255 [2024-07-25 15:24:37.240315] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor
00:28:45.255 [2024-07-25 15:24:37.240540] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:45.255 [2024-07-25 15:24:37.240549] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:45.255 [2024-07-25 15:24:37.240557] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:45.255 [2024-07-25 15:24:37.244105] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:45.255 [2024-07-25 15:24:37.253107] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:45.255 [2024-07-25 15:24:37.253926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.255 [2024-07-25 15:24:37.253964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420
00:28:45.255 [2024-07-25 15:24:37.253975] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set
00:28:45.255 [2024-07-25 15:24:37.254222] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor
00:28:45.255 [2024-07-25 15:24:37.254446] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:45.255 [2024-07-25 15:24:37.254456] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:45.255 [2024-07-25 15:24:37.254464] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:45.255 [2024-07-25 15:24:37.258016] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:45.255 [2024-07-25 15:24:37.267010] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:45.255 [2024-07-25 15:24:37.267804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.255 [2024-07-25 15:24:37.267842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420
00:28:45.255 [2024-07-25 15:24:37.267852] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set
00:28:45.255 [2024-07-25 15:24:37.268092] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor
00:28:45.255 [2024-07-25 15:24:37.268322] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:45.255 [2024-07-25 15:24:37.268332] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:45.255 [2024-07-25 15:24:37.268340] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:45.255 [2024-07-25 15:24:37.271904] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:45.255 [2024-07-25 15:24:37.280903] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:45.255 [2024-07-25 15:24:37.281677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.255 [2024-07-25 15:24:37.281715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420
00:28:45.255 [2024-07-25 15:24:37.281726] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set
00:28:45.255 [2024-07-25 15:24:37.281966] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor
00:28:45.255 [2024-07-25 15:24:37.282193] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:45.255 [2024-07-25 15:24:37.282212] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:45.255 [2024-07-25 15:24:37.282220] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:45.255 [2024-07-25 15:24:37.285771] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:45.255 [2024-07-25 15:24:37.294768] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:45.255 [2024-07-25 15:24:37.295480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.255 [2024-07-25 15:24:37.295518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420
00:28:45.255 [2024-07-25 15:24:37.295529] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set
00:28:45.255 [2024-07-25 15:24:37.295768] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor
00:28:45.255 [2024-07-25 15:24:37.295992] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:45.255 [2024-07-25 15:24:37.296001] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:45.255 [2024-07-25 15:24:37.296009] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:45.255 [2024-07-25 15:24:37.299565] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:45.255 [2024-07-25 15:24:37.308569] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:45.255 [2024-07-25 15:24:37.309302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.255 [2024-07-25 15:24:37.309340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420
00:28:45.255 [2024-07-25 15:24:37.309352] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set
00:28:45.255 [2024-07-25 15:24:37.309593] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor
00:28:45.255 [2024-07-25 15:24:37.309820] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:45.255 [2024-07-25 15:24:37.309831] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:45.255 [2024-07-25 15:24:37.309838] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:45.255 [2024-07-25 15:24:37.313394] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:45.255 [2024-07-25 15:24:37.322477] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:45.255 [2024-07-25 15:24:37.323172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.255 [2024-07-25 15:24:37.323192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420
00:28:45.255 [2024-07-25 15:24:37.323206] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set
00:28:45.255 [2024-07-25 15:24:37.323427] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor
00:28:45.255 [2024-07-25 15:24:37.323647] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:45.255 [2024-07-25 15:24:37.323655] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:45.255 [2024-07-25 15:24:37.323663] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:45.255 [2024-07-25 15:24:37.327208] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:45.255 [2024-07-25 15:24:37.336412] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:45.255 [2024-07-25 15:24:37.337088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.255 [2024-07-25 15:24:37.337104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420
00:28:45.255 [2024-07-25 15:24:37.337112] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set
00:28:45.255 [2024-07-25 15:24:37.337336] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor
00:28:45.255 [2024-07-25 15:24:37.337556] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:45.255 [2024-07-25 15:24:37.337566] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:45.255 [2024-07-25 15:24:37.337573] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:45.255 [2024-07-25 15:24:37.341115] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:45.255 [2024-07-25 15:24:37.350315] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:45.255 [2024-07-25 15:24:37.351068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.255 [2024-07-25 15:24:37.351106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420
00:28:45.255 [2024-07-25 15:24:37.351117] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set
00:28:45.255 [2024-07-25 15:24:37.351365] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor
00:28:45.255 [2024-07-25 15:24:37.351589] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:45.255 [2024-07-25 15:24:37.351598] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:45.255 [2024-07-25 15:24:37.351606] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:45.255 [2024-07-25 15:24:37.355158] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:45.255 [2024-07-25 15:24:37.364156] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:45.255 [2024-07-25 15:24:37.364953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.255 [2024-07-25 15:24:37.364991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420
00:28:45.255 [2024-07-25 15:24:37.365002] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set
00:28:45.255 [2024-07-25 15:24:37.365248] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor
00:28:45.255 [2024-07-25 15:24:37.365473] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:45.255 [2024-07-25 15:24:37.365482] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:45.256 [2024-07-25 15:24:37.365490] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:45.256 [2024-07-25 15:24:37.369042] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:45.256 [2024-07-25 15:24:37.378060] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:45.256 [2024-07-25 15:24:37.378858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.256 [2024-07-25 15:24:37.378898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420
00:28:45.256 [2024-07-25 15:24:37.378913] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set
00:28:45.256 [2024-07-25 15:24:37.379152] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor
00:28:45.256 [2024-07-25 15:24:37.379384] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:45.256 [2024-07-25 15:24:37.379394] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:45.256 [2024-07-25 15:24:37.379402] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:45.256 [2024-07-25 15:24:37.382956] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:45.256 [2024-07-25 15:24:37.391970] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:45.256 [2024-07-25 15:24:37.392677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.256 [2024-07-25 15:24:37.392697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420
00:28:45.256 [2024-07-25 15:24:37.392705] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set
00:28:45.256 [2024-07-25 15:24:37.392926] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor
00:28:45.256 [2024-07-25 15:24:37.393146] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:45.256 [2024-07-25 15:24:37.393155] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:45.256 [2024-07-25 15:24:37.393162] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:45.256 [2024-07-25 15:24:37.396714] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:45.256 [2024-07-25 15:24:37.405926] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:45.256 [2024-07-25 15:24:37.406635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.256 [2024-07-25 15:24:37.406652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420
00:28:45.256 [2024-07-25 15:24:37.406659] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set
00:28:45.256 [2024-07-25 15:24:37.406879] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor
00:28:45.256 [2024-07-25 15:24:37.407098] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:45.256 [2024-07-25 15:24:37.407107] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:45.256 [2024-07-25 15:24:37.407114] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:45.256 [2024-07-25 15:24:37.410666] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:45.256 [2024-07-25 15:24:37.419876] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:45.256 [2024-07-25 15:24:37.420495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.256 [2024-07-25 15:24:37.420512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420
00:28:45.256 [2024-07-25 15:24:37.420520] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set
00:28:45.256 [2024-07-25 15:24:37.420739] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor
00:28:45.256 [2024-07-25 15:24:37.420959] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:45.256 [2024-07-25 15:24:37.420971] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:45.256 [2024-07-25 15:24:37.420978] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:45.256 [2024-07-25 15:24:37.424532] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:45.256 [2024-07-25 15:24:37.433736] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:45.256 [2024-07-25 15:24:37.434525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.256 [2024-07-25 15:24:37.434564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420
00:28:45.256 [2024-07-25 15:24:37.434575] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set
00:28:45.256 [2024-07-25 15:24:37.434814] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor
00:28:45.256 [2024-07-25 15:24:37.435038] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:45.256 [2024-07-25 15:24:37.435047] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:45.256 [2024-07-25 15:24:37.435055] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:45.256 [2024-07-25 15:24:37.438615] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:45.518 [2024-07-25 15:24:37.447615] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:45.518 [2024-07-25 15:24:37.448379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.519 [2024-07-25 15:24:37.448416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420
00:28:45.519 [2024-07-25 15:24:37.448427] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set
00:28:45.519 [2024-07-25 15:24:37.448666] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor
00:28:45.519 [2024-07-25 15:24:37.448889] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:45.519 [2024-07-25 15:24:37.448900] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:45.519 [2024-07-25 15:24:37.448908] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:45.519 [2024-07-25 15:24:37.452469] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:45.519 [2024-07-25 15:24:37.461474] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:45.519 [2024-07-25 15:24:37.462260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.519 [2024-07-25 15:24:37.462299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420
00:28:45.519 [2024-07-25 15:24:37.462311] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set
00:28:45.519 [2024-07-25 15:24:37.462552] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor
00:28:45.519 [2024-07-25 15:24:37.462776] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:45.519 [2024-07-25 15:24:37.462785] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:45.519 [2024-07-25 15:24:37.462793] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:45.519 [2024-07-25 15:24:37.466353] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:45.519 [2024-07-25 15:24:37.475367] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:45.519 [2024-07-25 15:24:37.476109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.519 [2024-07-25 15:24:37.476148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420
00:28:45.519 [2024-07-25 15:24:37.476160] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set
00:28:45.519 [2024-07-25 15:24:37.476412] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor
00:28:45.519 [2024-07-25 15:24:37.476637] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:45.519 [2024-07-25 15:24:37.476647] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:45.519 [2024-07-25 15:24:37.476654] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:45.519 [2024-07-25 15:24:37.480208] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:45.519 [2024-07-25 15:24:37.489209] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:45.519 [2024-07-25 15:24:37.490055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.519 [2024-07-25 15:24:37.490094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420
00:28:45.519 [2024-07-25 15:24:37.490105] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set
00:28:45.519 [2024-07-25 15:24:37.490351] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor
00:28:45.519 [2024-07-25 15:24:37.490576] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:45.519 [2024-07-25 15:24:37.490585] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:45.519 [2024-07-25 15:24:37.490593] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:45.519 [2024-07-25 15:24:37.494141] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:45.519 [2024-07-25 15:24:37.503140] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:45.519 [2024-07-25 15:24:37.503878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.519 [2024-07-25 15:24:37.503898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420
00:28:45.519 [2024-07-25 15:24:37.503906] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set
00:28:45.519 [2024-07-25 15:24:37.504126] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor
00:28:45.519 [2024-07-25 15:24:37.504352] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:45.519 [2024-07-25 15:24:37.504362] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:45.519 [2024-07-25 15:24:37.504370] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:45.519 [2024-07-25 15:24:37.507915] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:45.519 [2024-07-25 15:24:37.517128] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:45.519 [2024-07-25 15:24:37.517769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.519 [2024-07-25 15:24:37.517807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420
00:28:45.519 [2024-07-25 15:24:37.517818] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set
00:28:45.519 [2024-07-25 15:24:37.518062] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor
00:28:45.519 [2024-07-25 15:24:37.518292] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:45.519 [2024-07-25 15:24:37.518302] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:45.519 [2024-07-25 15:24:37.518311] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:45.519 [2024-07-25 15:24:37.521863] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:45.519 [2024-07-25 15:24:37.531073] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:45.519 [2024-07-25 15:24:37.531760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.519 [2024-07-25 15:24:37.531780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420
00:28:45.519 [2024-07-25 15:24:37.531788] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set
00:28:45.519 [2024-07-25 15:24:37.532008] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor
00:28:45.519 [2024-07-25 15:24:37.532234] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:45.519 [2024-07-25 15:24:37.532243] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:45.519 [2024-07-25 15:24:37.532250] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:45.519 [2024-07-25 15:24:37.535793] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:45.519 [2024-07-25 15:24:37.544997] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:45.519 [2024-07-25 15:24:37.545685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.519 [2024-07-25 15:24:37.545702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420
00:28:45.519 [2024-07-25 15:24:37.545710] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set
00:28:45.519 [2024-07-25 15:24:37.545929] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor
00:28:45.519 [2024-07-25 15:24:37.546149] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:45.519 [2024-07-25 15:24:37.546157] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:45.519 [2024-07-25 15:24:37.546165] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:45.519 [2024-07-25 15:24:37.549713] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:45.519 [2024-07-25 15:24:37.558913] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:45.519 [2024-07-25 15:24:37.559641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.519 [2024-07-25 15:24:37.559679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420
00:28:45.519 [2024-07-25 15:24:37.559690] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set
00:28:45.519 [2024-07-25 15:24:37.559929] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor
00:28:45.519 [2024-07-25 15:24:37.560152] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:45.519 [2024-07-25 15:24:37.560162] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:45.519 [2024-07-25 15:24:37.560173] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:45.519 [2024-07-25 15:24:37.563735] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:45.519 [2024-07-25 15:24:37.572743] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:45.519 [2024-07-25 15:24:37.573554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.519 [2024-07-25 15:24:37.573592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420
00:28:45.519 [2024-07-25 15:24:37.573602] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set
00:28:45.519 [2024-07-25 15:24:37.573841] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor
00:28:45.519 [2024-07-25 15:24:37.574064] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:45.519 [2024-07-25 15:24:37.574074] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:45.519 [2024-07-25 15:24:37.574082] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:45.520 [2024-07-25 15:24:37.577641] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:45.520 [2024-07-25 15:24:37.586635] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.520 [2024-07-25 15:24:37.587441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.520 [2024-07-25 15:24:37.587479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:45.520 [2024-07-25 15:24:37.587491] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:45.520 [2024-07-25 15:24:37.587732] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:45.520 [2024-07-25 15:24:37.587955] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.520 [2024-07-25 15:24:37.587964] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.520 [2024-07-25 15:24:37.587972] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.520 [2024-07-25 15:24:37.591535] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:45.520 [2024-07-25 15:24:37.600540] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.520 [2024-07-25 15:24:37.601299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.520 [2024-07-25 15:24:37.601337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:45.520 [2024-07-25 15:24:37.601349] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:45.520 [2024-07-25 15:24:37.601590] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:45.520 [2024-07-25 15:24:37.601813] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.520 [2024-07-25 15:24:37.601823] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.520 [2024-07-25 15:24:37.601831] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.520 [2024-07-25 15:24:37.605389] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:45.520 [2024-07-25 15:24:37.614395] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.520 [2024-07-25 15:24:37.615071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.520 [2024-07-25 15:24:37.615094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:45.520 [2024-07-25 15:24:37.615102] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:45.520 [2024-07-25 15:24:37.615330] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:45.520 [2024-07-25 15:24:37.615551] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.520 [2024-07-25 15:24:37.615560] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.520 [2024-07-25 15:24:37.615567] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.520 [2024-07-25 15:24:37.619123] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:45.520 [2024-07-25 15:24:37.628344] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.520 [2024-07-25 15:24:37.629157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.520 [2024-07-25 15:24:37.629196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:45.520 [2024-07-25 15:24:37.629216] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:45.520 [2024-07-25 15:24:37.629456] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:45.520 [2024-07-25 15:24:37.629679] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.520 [2024-07-25 15:24:37.629689] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.520 [2024-07-25 15:24:37.629697] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.520 [2024-07-25 15:24:37.633261] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:45.520 [2024-07-25 15:24:37.642269] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.520 [2024-07-25 15:24:37.642994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.520 [2024-07-25 15:24:37.643013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:45.520 [2024-07-25 15:24:37.643022] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:45.520 [2024-07-25 15:24:37.643249] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:45.520 [2024-07-25 15:24:37.643469] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.520 [2024-07-25 15:24:37.643479] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.520 [2024-07-25 15:24:37.643487] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.520 [2024-07-25 15:24:37.647039] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:45.520 [2024-07-25 15:24:37.656076] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.520 [2024-07-25 15:24:37.656862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.520 [2024-07-25 15:24:37.656900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:45.520 [2024-07-25 15:24:37.656911] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:45.520 [2024-07-25 15:24:37.657150] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:45.520 [2024-07-25 15:24:37.657389] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.520 [2024-07-25 15:24:37.657400] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.520 [2024-07-25 15:24:37.657407] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.520 [2024-07-25 15:24:37.660962] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:45.520 [2024-07-25 15:24:37.669973] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.520 [2024-07-25 15:24:37.670662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.520 [2024-07-25 15:24:37.670682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:45.520 [2024-07-25 15:24:37.670690] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:45.520 [2024-07-25 15:24:37.670909] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:45.520 [2024-07-25 15:24:37.671129] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.520 [2024-07-25 15:24:37.671139] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.520 [2024-07-25 15:24:37.671146] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.520 [2024-07-25 15:24:37.674714] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:45.520 [2024-07-25 15:24:37.683929] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.520 [2024-07-25 15:24:37.684635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.520 [2024-07-25 15:24:37.684653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:45.520 [2024-07-25 15:24:37.684660] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:45.520 [2024-07-25 15:24:37.684880] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:45.520 [2024-07-25 15:24:37.685100] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.520 [2024-07-25 15:24:37.685109] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.520 [2024-07-25 15:24:37.685116] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.520 [2024-07-25 15:24:37.688668] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:45.520 [2024-07-25 15:24:37.697884] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.520 [2024-07-25 15:24:37.698557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.520 [2024-07-25 15:24:37.698574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:45.520 [2024-07-25 15:24:37.698581] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:45.520 [2024-07-25 15:24:37.698801] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:45.520 [2024-07-25 15:24:37.699020] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.520 [2024-07-25 15:24:37.699030] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.520 [2024-07-25 15:24:37.699037] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.520 [2024-07-25 15:24:37.702596] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:45.783 [2024-07-25 15:24:37.711809] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.783 [2024-07-25 15:24:37.712488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.783 [2024-07-25 15:24:37.712504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:45.783 [2024-07-25 15:24:37.712512] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:45.783 [2024-07-25 15:24:37.712731] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:45.783 [2024-07-25 15:24:37.712951] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.783 [2024-07-25 15:24:37.712960] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.783 [2024-07-25 15:24:37.712968] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.783 [2024-07-25 15:24:37.716529] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:45.783 [2024-07-25 15:24:37.725745] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.783 [2024-07-25 15:24:37.726537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.783 [2024-07-25 15:24:37.726575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:45.783 [2024-07-25 15:24:37.726586] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:45.783 [2024-07-25 15:24:37.726825] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:45.783 [2024-07-25 15:24:37.727048] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.783 [2024-07-25 15:24:37.727058] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.783 [2024-07-25 15:24:37.727065] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.783 [2024-07-25 15:24:37.730625] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:45.783 [2024-07-25 15:24:37.739629] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.783 [2024-07-25 15:24:37.740428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.783 [2024-07-25 15:24:37.740466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:45.783 [2024-07-25 15:24:37.740477] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:45.783 [2024-07-25 15:24:37.740717] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:45.783 [2024-07-25 15:24:37.740940] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.783 [2024-07-25 15:24:37.740951] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.783 [2024-07-25 15:24:37.740958] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.783 [2024-07-25 15:24:37.744515] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:45.783 [2024-07-25 15:24:37.753517] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.783 [2024-07-25 15:24:37.754247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.783 [2024-07-25 15:24:37.754273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:45.783 [2024-07-25 15:24:37.754286] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:45.783 [2024-07-25 15:24:37.754511] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:45.783 [2024-07-25 15:24:37.754731] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.783 [2024-07-25 15:24:37.754740] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.783 [2024-07-25 15:24:37.754747] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.783 [2024-07-25 15:24:37.758298] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:45.784 [2024-07-25 15:24:37.767505] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.784 [2024-07-25 15:24:37.768306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.784 [2024-07-25 15:24:37.768344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:45.784 [2024-07-25 15:24:37.768355] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:45.784 [2024-07-25 15:24:37.768595] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:45.784 [2024-07-25 15:24:37.768818] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.784 [2024-07-25 15:24:37.768828] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.784 [2024-07-25 15:24:37.768836] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.784 [2024-07-25 15:24:37.772403] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:45.784 [2024-07-25 15:24:37.781414] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.784 [2024-07-25 15:24:37.782241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.784 [2024-07-25 15:24:37.782279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:45.784 [2024-07-25 15:24:37.782292] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:45.784 [2024-07-25 15:24:37.782532] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:45.784 [2024-07-25 15:24:37.782755] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.784 [2024-07-25 15:24:37.782764] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.784 [2024-07-25 15:24:37.782772] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.784 [2024-07-25 15:24:37.786328] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:45.784 [2024-07-25 15:24:37.795329] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.784 [2024-07-25 15:24:37.796142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.784 [2024-07-25 15:24:37.796181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:45.784 [2024-07-25 15:24:37.796193] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:45.784 [2024-07-25 15:24:37.796441] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:45.784 [2024-07-25 15:24:37.796666] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.784 [2024-07-25 15:24:37.796679] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.784 [2024-07-25 15:24:37.796687] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.784 [2024-07-25 15:24:37.800240] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:45.784 [2024-07-25 15:24:37.809239] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.784 [2024-07-25 15:24:37.810017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.784 [2024-07-25 15:24:37.810055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:45.784 [2024-07-25 15:24:37.810065] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:45.784 [2024-07-25 15:24:37.810312] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:45.784 [2024-07-25 15:24:37.810536] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.784 [2024-07-25 15:24:37.810546] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.784 [2024-07-25 15:24:37.810554] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.784 [2024-07-25 15:24:37.814103] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:45.784 [2024-07-25 15:24:37.823107] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.784 [2024-07-25 15:24:37.823833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.784 [2024-07-25 15:24:37.823852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:45.784 [2024-07-25 15:24:37.823860] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:45.784 [2024-07-25 15:24:37.824080] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:45.784 [2024-07-25 15:24:37.824306] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.784 [2024-07-25 15:24:37.824316] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.784 [2024-07-25 15:24:37.824323] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.784 [2024-07-25 15:24:37.827865] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:45.784 [2024-07-25 15:24:37.837065] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.784 [2024-07-25 15:24:37.837619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.784 [2024-07-25 15:24:37.837637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:45.784 [2024-07-25 15:24:37.837645] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:45.784 [2024-07-25 15:24:37.837865] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:45.784 [2024-07-25 15:24:37.838085] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.784 [2024-07-25 15:24:37.838095] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.784 [2024-07-25 15:24:37.838102] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.784 [2024-07-25 15:24:37.841740] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:45.784 [2024-07-25 15:24:37.850954] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.784 [2024-07-25 15:24:37.851754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.784 [2024-07-25 15:24:37.851792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:45.784 [2024-07-25 15:24:37.851803] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:45.784 [2024-07-25 15:24:37.852043] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:45.784 [2024-07-25 15:24:37.852273] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.784 [2024-07-25 15:24:37.852283] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.784 [2024-07-25 15:24:37.852291] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.784 [2024-07-25 15:24:37.855841] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:45.784 [2024-07-25 15:24:37.864841] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.784 [2024-07-25 15:24:37.865649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.784 [2024-07-25 15:24:37.865687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:45.784 [2024-07-25 15:24:37.865698] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:45.784 [2024-07-25 15:24:37.865938] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:45.784 [2024-07-25 15:24:37.866160] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.784 [2024-07-25 15:24:37.866169] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.784 [2024-07-25 15:24:37.866177] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.784 [2024-07-25 15:24:37.869738] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:45.784 [2024-07-25 15:24:37.878746] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.784 [2024-07-25 15:24:37.879568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.784 [2024-07-25 15:24:37.879606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:45.784 [2024-07-25 15:24:37.879617] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:45.784 [2024-07-25 15:24:37.879856] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:45.784 [2024-07-25 15:24:37.880080] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.784 [2024-07-25 15:24:37.880089] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.784 [2024-07-25 15:24:37.880097] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.784 [2024-07-25 15:24:37.883658] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:45.784 [2024-07-25 15:24:37.892655] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.784 [2024-07-25 15:24:37.893454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.784 [2024-07-25 15:24:37.893492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:45.784 [2024-07-25 15:24:37.893507] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:45.784 [2024-07-25 15:24:37.893747] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:45.784 [2024-07-25 15:24:37.893970] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.785 [2024-07-25 15:24:37.893980] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.785 [2024-07-25 15:24:37.893988] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.785 [2024-07-25 15:24:37.897546] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:45.785 [2024-07-25 15:24:37.906545] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.785 [2024-07-25 15:24:37.907191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.785 [2024-07-25 15:24:37.907216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:45.785 [2024-07-25 15:24:37.907225] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:45.785 [2024-07-25 15:24:37.907446] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:45.785 [2024-07-25 15:24:37.907667] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.785 [2024-07-25 15:24:37.907676] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.785 [2024-07-25 15:24:37.907683] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.785 [2024-07-25 15:24:37.911229] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:45.785 [2024-07-25 15:24:37.920439] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.785 [2024-07-25 15:24:37.921146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.785 [2024-07-25 15:24:37.921162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:45.785 [2024-07-25 15:24:37.921170] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:45.785 [2024-07-25 15:24:37.921394] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:45.785 [2024-07-25 15:24:37.921615] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.785 [2024-07-25 15:24:37.921624] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.785 [2024-07-25 15:24:37.921632] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.785 [2024-07-25 15:24:37.925172] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:45.785 [2024-07-25 15:24:37.934374] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.785 [2024-07-25 15:24:37.935145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.785 [2024-07-25 15:24:37.935183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:45.785 [2024-07-25 15:24:37.935194] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:45.785 [2024-07-25 15:24:37.935442] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:45.785 [2024-07-25 15:24:37.935666] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.785 [2024-07-25 15:24:37.935675] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.785 [2024-07-25 15:24:37.935687] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.785 [2024-07-25 15:24:37.939241] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:45.785 [2024-07-25 15:24:37.948244] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.785 [2024-07-25 15:24:37.949065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.785 [2024-07-25 15:24:37.949103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:45.785 [2024-07-25 15:24:37.949114] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:45.785 [2024-07-25 15:24:37.949361] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:45.785 [2024-07-25 15:24:37.949584] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.785 [2024-07-25 15:24:37.949594] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.785 [2024-07-25 15:24:37.949602] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.785 [2024-07-25 15:24:37.953154] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:45.785 [2024-07-25 15:24:37.962153] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.785 [2024-07-25 15:24:37.962972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.785 [2024-07-25 15:24:37.963011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:45.785 [2024-07-25 15:24:37.963021] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:45.785 [2024-07-25 15:24:37.963269] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:45.785 [2024-07-25 15:24:37.963493] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.785 [2024-07-25 15:24:37.963502] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.785 [2024-07-25 15:24:37.963510] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.785 [2024-07-25 15:24:37.967062] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.048 [2024-07-25 15:24:37.976071] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.048 [2024-07-25 15:24:37.976653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.048 [2024-07-25 15:24:37.976673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:46.048 [2024-07-25 15:24:37.976681] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:46.048 [2024-07-25 15:24:37.976902] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:46.048 [2024-07-25 15:24:37.977122] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.048 [2024-07-25 15:24:37.977131] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.048 [2024-07-25 15:24:37.977139] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.048 [2024-07-25 15:24:37.980877] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.048 [2024-07-25 15:24:37.989873] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.048 [2024-07-25 15:24:37.990438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.048 [2024-07-25 15:24:37.990456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:46.048 [2024-07-25 15:24:37.990465] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:46.048 [2024-07-25 15:24:37.990685] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:46.048 [2024-07-25 15:24:37.990905] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.048 [2024-07-25 15:24:37.990914] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.048 [2024-07-25 15:24:37.990921] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.048 [2024-07-25 15:24:37.994469] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.048 [2024-07-25 15:24:38.003671] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.048 [2024-07-25 15:24:38.004448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.048 [2024-07-25 15:24:38.004486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:46.048 [2024-07-25 15:24:38.004497] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:46.048 [2024-07-25 15:24:38.004736] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:46.048 [2024-07-25 15:24:38.004959] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.048 [2024-07-25 15:24:38.004969] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.048 [2024-07-25 15:24:38.004977] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.048 [2024-07-25 15:24:38.008532] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.048 [2024-07-25 15:24:38.017538] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.048 [2024-07-25 15:24:38.018223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.048 [2024-07-25 15:24:38.018242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:46.048 [2024-07-25 15:24:38.018250] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:46.048 [2024-07-25 15:24:38.018470] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:46.048 [2024-07-25 15:24:38.018690] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.048 [2024-07-25 15:24:38.018699] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.048 [2024-07-25 15:24:38.018706] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.048 [2024-07-25 15:24:38.022255] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.048 [2024-07-25 15:24:38.031458] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.048 [2024-07-25 15:24:38.032169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.048 [2024-07-25 15:24:38.032186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:46.048 [2024-07-25 15:24:38.032193] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:46.048 [2024-07-25 15:24:38.032422] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:46.048 [2024-07-25 15:24:38.032642] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.048 [2024-07-25 15:24:38.032651] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.048 [2024-07-25 15:24:38.032658] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.048 [2024-07-25 15:24:38.036204] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.048 [2024-07-25 15:24:38.045405] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.048 [2024-07-25 15:24:38.046065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.048 [2024-07-25 15:24:38.046081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:46.048 [2024-07-25 15:24:38.046088] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:46.048 [2024-07-25 15:24:38.046312] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:46.048 [2024-07-25 15:24:38.046533] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.048 [2024-07-25 15:24:38.046541] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.048 [2024-07-25 15:24:38.046548] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.048 [2024-07-25 15:24:38.050098] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.048 [2024-07-25 15:24:38.059296] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.048 [2024-07-25 15:24:38.060074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.048 [2024-07-25 15:24:38.060112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:46.048 [2024-07-25 15:24:38.060122] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:46.048 [2024-07-25 15:24:38.060370] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:46.048 [2024-07-25 15:24:38.060594] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.048 [2024-07-25 15:24:38.060604] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.048 [2024-07-25 15:24:38.060612] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.048 [2024-07-25 15:24:38.064160] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.049 [2024-07-25 15:24:38.073171] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.049 [2024-07-25 15:24:38.073989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.049 [2024-07-25 15:24:38.074027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:46.049 [2024-07-25 15:24:38.074038] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:46.049 [2024-07-25 15:24:38.074287] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:46.049 [2024-07-25 15:24:38.074512] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.049 [2024-07-25 15:24:38.074521] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.049 [2024-07-25 15:24:38.074533] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.049 [2024-07-25 15:24:38.078086] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.049 [2024-07-25 15:24:38.087082] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.049 [2024-07-25 15:24:38.087856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.049 [2024-07-25 15:24:38.087894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:46.049 [2024-07-25 15:24:38.087905] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:46.049 [2024-07-25 15:24:38.088145] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:46.049 [2024-07-25 15:24:38.088378] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.049 [2024-07-25 15:24:38.088389] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.049 [2024-07-25 15:24:38.088396] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.049 [2024-07-25 15:24:38.091946] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.049 [2024-07-25 15:24:38.100944] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.049 [2024-07-25 15:24:38.101728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.049 [2024-07-25 15:24:38.101766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:46.049 [2024-07-25 15:24:38.101777] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:46.049 [2024-07-25 15:24:38.102016] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:46.049 [2024-07-25 15:24:38.102249] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.049 [2024-07-25 15:24:38.102259] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.049 [2024-07-25 15:24:38.102267] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.049 [2024-07-25 15:24:38.105817] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.049 [2024-07-25 15:24:38.114809] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.049 [2024-07-25 15:24:38.115623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.049 [2024-07-25 15:24:38.115662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:46.049 [2024-07-25 15:24:38.115673] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:46.049 [2024-07-25 15:24:38.115913] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:46.049 [2024-07-25 15:24:38.116136] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.049 [2024-07-25 15:24:38.116146] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.049 [2024-07-25 15:24:38.116154] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.049 [2024-07-25 15:24:38.119727] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.049 [2024-07-25 15:24:38.128720] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.049 [2024-07-25 15:24:38.129398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.049 [2024-07-25 15:24:38.129422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:46.049 [2024-07-25 15:24:38.129430] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:46.049 [2024-07-25 15:24:38.129651] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:46.049 [2024-07-25 15:24:38.129871] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.049 [2024-07-25 15:24:38.129880] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.049 [2024-07-25 15:24:38.129887] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.049 [2024-07-25 15:24:38.133438] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.049 [2024-07-25 15:24:38.142631] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.049 [2024-07-25 15:24:38.143432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.049 [2024-07-25 15:24:38.143470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:46.049 [2024-07-25 15:24:38.143481] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:46.049 [2024-07-25 15:24:38.143721] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:46.049 [2024-07-25 15:24:38.143944] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.049 [2024-07-25 15:24:38.143953] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.049 [2024-07-25 15:24:38.143961] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.049 [2024-07-25 15:24:38.147527] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.049 [2024-07-25 15:24:38.156543] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.049 [2024-07-25 15:24:38.157303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.049 [2024-07-25 15:24:38.157341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:46.049 [2024-07-25 15:24:38.157354] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:46.049 [2024-07-25 15:24:38.157595] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:46.049 [2024-07-25 15:24:38.157818] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.049 [2024-07-25 15:24:38.157828] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.049 [2024-07-25 15:24:38.157836] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.049 [2024-07-25 15:24:38.161396] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.049 [2024-07-25 15:24:38.170396] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.049 [2024-07-25 15:24:38.171189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.049 [2024-07-25 15:24:38.171242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:46.050 [2024-07-25 15:24:38.171253] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:46.050 [2024-07-25 15:24:38.171493] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:46.050 [2024-07-25 15:24:38.171720] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.050 [2024-07-25 15:24:38.171730] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.050 [2024-07-25 15:24:38.171738] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.050 [2024-07-25 15:24:38.175292] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.050 [2024-07-25 15:24:38.184286] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.050 [2024-07-25 15:24:38.185045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.050 [2024-07-25 15:24:38.185083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:46.050 [2024-07-25 15:24:38.185094] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:46.050 [2024-07-25 15:24:38.185343] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:46.050 [2024-07-25 15:24:38.185567] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.050 [2024-07-25 15:24:38.185576] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.050 [2024-07-25 15:24:38.185584] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.050 [2024-07-25 15:24:38.189135] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.050 [2024-07-25 15:24:38.198137] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.050 [2024-07-25 15:24:38.198956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.050 [2024-07-25 15:24:38.198994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:46.050 [2024-07-25 15:24:38.199005] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:46.050 [2024-07-25 15:24:38.199252] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:46.050 [2024-07-25 15:24:38.199476] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.050 [2024-07-25 15:24:38.199486] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.050 [2024-07-25 15:24:38.199494] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.050 [2024-07-25 15:24:38.203042] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.050 [2024-07-25 15:24:38.212043] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.050 [2024-07-25 15:24:38.212858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.050 [2024-07-25 15:24:38.212896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:46.050 [2024-07-25 15:24:38.212907] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:46.050 [2024-07-25 15:24:38.213146] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:46.050 [2024-07-25 15:24:38.213378] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.050 [2024-07-25 15:24:38.213387] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.050 [2024-07-25 15:24:38.213395] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.050 [2024-07-25 15:24:38.216950] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.050 [2024-07-25 15:24:38.225967] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.050 [2024-07-25 15:24:38.226626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.050 [2024-07-25 15:24:38.226664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:46.050 [2024-07-25 15:24:38.226675] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:46.050 [2024-07-25 15:24:38.226914] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:46.050 [2024-07-25 15:24:38.227137] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.050 [2024-07-25 15:24:38.227147] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.050 [2024-07-25 15:24:38.227155] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.050 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 428014 Killed "${NVMF_APP[@]}" "$@" 00:28:46.050 15:24:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:28:46.050 15:24:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:28:46.050 15:24:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:46.050 15:24:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:46.050 15:24:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:46.050 [2024-07-25 15:24:38.230716] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.050 15:24:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=429716 00:28:46.050 15:24:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 429716 00:28:46.050 15:24:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:46.312 15:24:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 429716 ']' 00:28:46.312 15:24:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:46.312 15:24:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:46.313 15:24:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:46.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:28:46.313 15:24:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:46.313 15:24:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:46.313 [2024-07-25 15:24:38.239931] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.313 [2024-07-25 15:24:38.240624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.313 [2024-07-25 15:24:38.240646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:46.313 [2024-07-25 15:24:38.240654] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:46.313 [2024-07-25 15:24:38.240875] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:46.313 [2024-07-25 15:24:38.241100] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.313 [2024-07-25 15:24:38.241110] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.313 [2024-07-25 15:24:38.241117] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.313 [2024-07-25 15:24:38.244683] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.313 [2024-07-25 15:24:38.253898] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.313 [2024-07-25 15:24:38.254590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.313 [2024-07-25 15:24:38.254606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:46.313 [2024-07-25 15:24:38.254614] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:46.313 [2024-07-25 15:24:38.254834] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:46.313 [2024-07-25 15:24:38.255053] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.313 [2024-07-25 15:24:38.255062] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.313 [2024-07-25 15:24:38.255069] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.313 [2024-07-25 15:24:38.258628] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.313 [2024-07-25 15:24:38.267842] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.313 [2024-07-25 15:24:38.268568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.313 [2024-07-25 15:24:38.268584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:46.313 [2024-07-25 15:24:38.268592] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:46.313 [2024-07-25 15:24:38.268811] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:46.313 [2024-07-25 15:24:38.269030] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.313 [2024-07-25 15:24:38.269039] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.313 [2024-07-25 15:24:38.269047] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.313 [2024-07-25 15:24:38.272612] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.313 [2024-07-25 15:24:38.281821] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.313 [2024-07-25 15:24:38.282599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.313 [2024-07-25 15:24:38.282637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:46.313 [2024-07-25 15:24:38.282648] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:46.313 [2024-07-25 15:24:38.282887] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:46.313 [2024-07-25 15:24:38.283111] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.313 [2024-07-25 15:24:38.283120] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.313 [2024-07-25 15:24:38.283128] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.313 [2024-07-25 15:24:38.286685] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:46.313 [2024-07-25 15:24:38.287812] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:28:46.313 [2024-07-25 15:24:38.287857] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:46.313 [2024-07-25 15:24:38.295682] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.313 [2024-07-25 15:24:38.296494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.313 [2024-07-25 15:24:38.296533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:46.313 [2024-07-25 15:24:38.296543] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:46.313 [2024-07-25 15:24:38.296783] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:46.313 [2024-07-25 15:24:38.297006] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.313 [2024-07-25 15:24:38.297015] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.313 [2024-07-25 15:24:38.297023] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.313 [2024-07-25 15:24:38.300579] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.313 [2024-07-25 15:24:38.309576] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.313 [2024-07-25 15:24:38.310415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.313 [2024-07-25 15:24:38.310453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:46.313 [2024-07-25 15:24:38.310463] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:46.313 [2024-07-25 15:24:38.310703] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:46.313 [2024-07-25 15:24:38.310925] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.313 [2024-07-25 15:24:38.310935] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.313 [2024-07-25 15:24:38.310943] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.313 [2024-07-25 15:24:38.314500] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.313 EAL: No free 2048 kB hugepages reported on node 1 00:28:46.313 [2024-07-25 15:24:38.323504] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.313 [2024-07-25 15:24:38.324293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.313 [2024-07-25 15:24:38.324332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:46.313 [2024-07-25 15:24:38.324344] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:46.313 [2024-07-25 15:24:38.324586] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:46.313 [2024-07-25 15:24:38.324808] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.313 [2024-07-25 15:24:38.324818] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.313 [2024-07-25 15:24:38.324825] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.313 [2024-07-25 15:24:38.328386] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.313 [2024-07-25 15:24:38.337385] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.313 [2024-07-25 15:24:38.338164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.313 [2024-07-25 15:24:38.338209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:46.313 [2024-07-25 15:24:38.338225] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:46.313 [2024-07-25 15:24:38.338465] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:46.313 [2024-07-25 15:24:38.338689] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.313 [2024-07-25 15:24:38.338698] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.313 [2024-07-25 15:24:38.338706] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.313 [2024-07-25 15:24:38.342259] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.313 [2024-07-25 15:24:38.351263] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.313 [2024-07-25 15:24:38.352088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.313 [2024-07-25 15:24:38.352126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:46.313 [2024-07-25 15:24:38.352137] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:46.313 [2024-07-25 15:24:38.352385] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:46.313 [2024-07-25 15:24:38.352609] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.313 [2024-07-25 15:24:38.352618] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.313 [2024-07-25 15:24:38.352626] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.313 [2024-07-25 15:24:38.356175] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.314 [2024-07-25 15:24:38.365249] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.314 [2024-07-25 15:24:38.366034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.314 [2024-07-25 15:24:38.366072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:46.314 [2024-07-25 15:24:38.366083] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:46.314 [2024-07-25 15:24:38.366331] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:46.314 [2024-07-25 15:24:38.366555] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.314 [2024-07-25 15:24:38.366565] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.314 [2024-07-25 15:24:38.366574] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.314 [2024-07-25 15:24:38.369485] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:46.314 [2024-07-25 15:24:38.370122] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.314 [2024-07-25 15:24:38.379136] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.314 [2024-07-25 15:24:38.379993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.314 [2024-07-25 15:24:38.380031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:46.314 [2024-07-25 15:24:38.380043] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:46.314 [2024-07-25 15:24:38.380293] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:46.314 [2024-07-25 15:24:38.380521] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.314 [2024-07-25 15:24:38.380532] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.314 [2024-07-25 15:24:38.380540] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.314 [2024-07-25 15:24:38.384089] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.314 [2024-07-25 15:24:38.393092] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.314 [2024-07-25 15:24:38.393823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.314 [2024-07-25 15:24:38.393843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:46.314 [2024-07-25 15:24:38.393851] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:46.314 [2024-07-25 15:24:38.394071] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:46.314 [2024-07-25 15:24:38.394296] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.314 [2024-07-25 15:24:38.394305] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.314 [2024-07-25 15:24:38.394313] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.314 [2024-07-25 15:24:38.397855] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.314 [2024-07-25 15:24:38.407058] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.314 [2024-07-25 15:24:38.407768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.314 [2024-07-25 15:24:38.407785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:46.314 [2024-07-25 15:24:38.407793] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:46.314 [2024-07-25 15:24:38.408013] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:46.314 [2024-07-25 15:24:38.408237] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.314 [2024-07-25 15:24:38.408246] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.314 [2024-07-25 15:24:38.408253] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.314 [2024-07-25 15:24:38.411796] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.314 [2024-07-25 15:24:38.421006] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.314 [2024-07-25 15:24:38.421699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.314 [2024-07-25 15:24:38.421716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:46.314 [2024-07-25 15:24:38.421724] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:46.314 [2024-07-25 15:24:38.421944] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:46.314 [2024-07-25 15:24:38.422164] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.314 [2024-07-25 15:24:38.422173] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.314 [2024-07-25 15:24:38.422180] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.314 [2024-07-25 15:24:38.422855] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:46.314 [2024-07-25 15:24:38.422880] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:46.314 [2024-07-25 15:24:38.422887] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:46.314 [2024-07-25 15:24:38.422893] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:46.314 [2024-07-25 15:24:38.422897] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:46.314 [2024-07-25 15:24:38.423002] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:46.314 [2024-07-25 15:24:38.423165] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:46.314 [2024-07-25 15:24:38.423167] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:46.314 [2024-07-25 15:24:38.425734] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:46.314 [2024-07-25 15:24:38.434935] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.314 [2024-07-25 15:24:38.435641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.314 [2024-07-25 15:24:38.435659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:46.314 [2024-07-25 15:24:38.435666] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:46.314 [2024-07-25 15:24:38.435888] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:46.314 [2024-07-25 15:24:38.436107] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.314 [2024-07-25 15:24:38.436117] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.314 [2024-07-25 15:24:38.436124] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.314 [2024-07-25 15:24:38.439668] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.314 [2024-07-25 15:24:38.448864] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.314 [2024-07-25 15:24:38.449671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.314 [2024-07-25 15:24:38.449712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:46.314 [2024-07-25 15:24:38.449723] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:46.314 [2024-07-25 15:24:38.449966] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:46.314 [2024-07-25 15:24:38.450189] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.314 [2024-07-25 15:24:38.450199] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.314 [2024-07-25 15:24:38.450213] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.314 [2024-07-25 15:24:38.453764] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.314 [2024-07-25 15:24:38.462761] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.314 [2024-07-25 15:24:38.463539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.314 [2024-07-25 15:24:38.463579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:46.314 [2024-07-25 15:24:38.463590] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:46.314 [2024-07-25 15:24:38.463831] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:46.314 [2024-07-25 15:24:38.464060] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.314 [2024-07-25 15:24:38.464070] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.314 [2024-07-25 15:24:38.464077] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.314 [2024-07-25 15:24:38.467636] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.314 [2024-07-25 15:24:38.476645] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.314 [2024-07-25 15:24:38.477425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.314 [2024-07-25 15:24:38.477464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:46.314 [2024-07-25 15:24:38.477475] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:46.314 [2024-07-25 15:24:38.477714] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:46.314 [2024-07-25 15:24:38.477939] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.314 [2024-07-25 15:24:38.477949] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.314 [2024-07-25 15:24:38.477956] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.315 [2024-07-25 15:24:38.481513] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.315 [2024-07-25 15:24:38.490504] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.315 [2024-07-25 15:24:38.491208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.315 [2024-07-25 15:24:38.491246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:46.315 [2024-07-25 15:24:38.491259] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:46.315 [2024-07-25 15:24:38.491500] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:46.315 [2024-07-25 15:24:38.491723] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.315 [2024-07-25 15:24:38.491732] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.315 [2024-07-25 15:24:38.491740] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.315 [2024-07-25 15:24:38.495294] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.577 [2024-07-25 15:24:38.504496] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.577 [2024-07-25 15:24:38.505107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.577 [2024-07-25 15:24:38.505126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:46.577 [2024-07-25 15:24:38.505134] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:46.577 [2024-07-25 15:24:38.505360] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:46.577 [2024-07-25 15:24:38.505580] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.577 [2024-07-25 15:24:38.505590] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.577 [2024-07-25 15:24:38.505597] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.577 [2024-07-25 15:24:38.509146] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.577 [2024-07-25 15:24:38.518346] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.577 [2024-07-25 15:24:38.519077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.577 [2024-07-25 15:24:38.519094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:46.577 [2024-07-25 15:24:38.519101] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:46.577 [2024-07-25 15:24:38.519325] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:46.577 [2024-07-25 15:24:38.519545] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.577 [2024-07-25 15:24:38.519555] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.578 [2024-07-25 15:24:38.519563] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.578 [2024-07-25 15:24:38.523099] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.578 [2024-07-25 15:24:38.532294] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.578 [2024-07-25 15:24:38.533015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.578 [2024-07-25 15:24:38.533032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:46.578 [2024-07-25 15:24:38.533040] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:46.578 [2024-07-25 15:24:38.533263] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:46.578 [2024-07-25 15:24:38.533483] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.578 [2024-07-25 15:24:38.533492] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.578 [2024-07-25 15:24:38.533500] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.578 [2024-07-25 15:24:38.537040] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.578 [2024-07-25 15:24:38.546263] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.578 [2024-07-25 15:24:38.546936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.578 [2024-07-25 15:24:38.546953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:46.578 [2024-07-25 15:24:38.546960] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:46.578 [2024-07-25 15:24:38.547180] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:46.578 [2024-07-25 15:24:38.547403] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.578 [2024-07-25 15:24:38.547413] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.578 [2024-07-25 15:24:38.547420] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.578 [2024-07-25 15:24:38.550963] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.578 [2024-07-25 15:24:38.560157] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.578 [2024-07-25 15:24:38.560953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.578 [2024-07-25 15:24:38.560991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:46.578 [2024-07-25 15:24:38.561010] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:46.578 [2024-07-25 15:24:38.561258] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:46.578 [2024-07-25 15:24:38.561482] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.578 [2024-07-25 15:24:38.561491] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.578 [2024-07-25 15:24:38.561499] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.578 [2024-07-25 15:24:38.565046] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.578 [2024-07-25 15:24:38.574052] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.578 [2024-07-25 15:24:38.574886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.578 [2024-07-25 15:24:38.574925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:46.578 [2024-07-25 15:24:38.574936] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:46.578 [2024-07-25 15:24:38.575175] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:46.578 [2024-07-25 15:24:38.575406] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.578 [2024-07-25 15:24:38.575416] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.578 [2024-07-25 15:24:38.575424] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.578 [2024-07-25 15:24:38.578971] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.578 [2024-07-25 15:24:38.587960] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.578 [2024-07-25 15:24:38.588748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.578 [2024-07-25 15:24:38.588786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:46.578 [2024-07-25 15:24:38.588797] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:46.578 [2024-07-25 15:24:38.589037] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:46.578 [2024-07-25 15:24:38.589268] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.578 [2024-07-25 15:24:38.589278] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.578 [2024-07-25 15:24:38.589286] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.578 [2024-07-25 15:24:38.592835] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.578 [2024-07-25 15:24:38.601827] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.578 [2024-07-25 15:24:38.602401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.578 [2024-07-25 15:24:38.602421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:46.578 [2024-07-25 15:24:38.602429] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:46.578 [2024-07-25 15:24:38.602650] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:46.578 [2024-07-25 15:24:38.602870] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.578 [2024-07-25 15:24:38.602883] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.578 [2024-07-25 15:24:38.602891] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.578 [2024-07-25 15:24:38.606437] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.578 [2024-07-25 15:24:38.615635] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.578 [2024-07-25 15:24:38.616431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.578 [2024-07-25 15:24:38.616470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:46.578 [2024-07-25 15:24:38.616481] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:46.578 [2024-07-25 15:24:38.616720] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:46.578 [2024-07-25 15:24:38.616943] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.578 [2024-07-25 15:24:38.616952] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.578 [2024-07-25 15:24:38.616960] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.578 [2024-07-25 15:24:38.620527] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.578 [2024-07-25 15:24:38.629527] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.578 [2024-07-25 15:24:38.630421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.578 [2024-07-25 15:24:38.630459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:46.578 [2024-07-25 15:24:38.630470] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:46.578 [2024-07-25 15:24:38.630710] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:46.578 [2024-07-25 15:24:38.630933] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.578 [2024-07-25 15:24:38.630942] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.578 [2024-07-25 15:24:38.630949] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.578 [2024-07-25 15:24:38.634505] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.578 [2024-07-25 15:24:38.643501] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.578 [2024-07-25 15:24:38.644290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.578 [2024-07-25 15:24:38.644329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:46.578 [2024-07-25 15:24:38.644341] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:46.578 [2024-07-25 15:24:38.644582] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:46.578 [2024-07-25 15:24:38.644805] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.578 [2024-07-25 15:24:38.644815] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.578 [2024-07-25 15:24:38.644822] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.578 [2024-07-25 15:24:38.648382] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.578 [2024-07-25 15:24:38.657390] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.578 [2024-07-25 15:24:38.658126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.578 [2024-07-25 15:24:38.658146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:46.578 [2024-07-25 15:24:38.658154] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:46.579 [2024-07-25 15:24:38.658379] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:46.579 [2024-07-25 15:24:38.658599] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.579 [2024-07-25 15:24:38.658608] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.579 [2024-07-25 15:24:38.658615] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.579 [2024-07-25 15:24:38.662159] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.579 [2024-07-25 15:24:38.671379] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.579 [2024-07-25 15:24:38.672055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.579 [2024-07-25 15:24:38.672093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:46.579 [2024-07-25 15:24:38.672104] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:46.579 [2024-07-25 15:24:38.672353] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:46.579 [2024-07-25 15:24:38.672577] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.579 [2024-07-25 15:24:38.672587] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.579 [2024-07-25 15:24:38.672594] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.579 [2024-07-25 15:24:38.676145] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.579 [2024-07-25 15:24:38.685350] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.579 [2024-07-25 15:24:38.686085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.579 [2024-07-25 15:24:38.686104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:46.579 [2024-07-25 15:24:38.686113] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:46.579 [2024-07-25 15:24:38.686337] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:46.579 [2024-07-25 15:24:38.686558] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.579 [2024-07-25 15:24:38.686566] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.579 [2024-07-25 15:24:38.686574] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.579 [2024-07-25 15:24:38.690120] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.579 [2024-07-25 15:24:38.699323] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.579 [2024-07-25 15:24:38.700143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.579 [2024-07-25 15:24:38.700182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:46.579 [2024-07-25 15:24:38.700193] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:46.579 [2024-07-25 15:24:38.700447] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:46.579 [2024-07-25 15:24:38.700671] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.579 [2024-07-25 15:24:38.700680] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.579 [2024-07-25 15:24:38.700688] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.579 [2024-07-25 15:24:38.704241] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.579 [2024-07-25 15:24:38.713237] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.579 [2024-07-25 15:24:38.714046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.579 [2024-07-25 15:24:38.714084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:46.579 [2024-07-25 15:24:38.714095] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:46.579 [2024-07-25 15:24:38.714341] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:46.579 [2024-07-25 15:24:38.714566] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.579 [2024-07-25 15:24:38.714575] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.579 [2024-07-25 15:24:38.714583] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.579 [2024-07-25 15:24:38.718131] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.579 [2024-07-25 15:24:38.727131] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.579 [2024-07-25 15:24:38.727868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.579 [2024-07-25 15:24:38.727888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:46.579 [2024-07-25 15:24:38.727896] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:46.579 [2024-07-25 15:24:38.728116] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:46.579 [2024-07-25 15:24:38.728341] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.579 [2024-07-25 15:24:38.728350] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.579 [2024-07-25 15:24:38.728357] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.579 [2024-07-25 15:24:38.731900] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.579 [2024-07-25 15:24:38.741097] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.579 [2024-07-25 15:24:38.741715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.579 [2024-07-25 15:24:38.741753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:46.579 [2024-07-25 15:24:38.741764] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:46.579 [2024-07-25 15:24:38.742003] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:46.579 [2024-07-25 15:24:38.742237] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.579 [2024-07-25 15:24:38.742249] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.579 [2024-07-25 15:24:38.742261] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.579 [2024-07-25 15:24:38.745813] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.579 [2024-07-25 15:24:38.755016] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.579 [2024-07-25 15:24:38.755559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.579 [2024-07-25 15:24:38.755598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:46.579 [2024-07-25 15:24:38.755609] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:46.579 [2024-07-25 15:24:38.755848] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:46.579 [2024-07-25 15:24:38.756071] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.579 [2024-07-25 15:24:38.756081] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.579 [2024-07-25 15:24:38.756088] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.579 [2024-07-25 15:24:38.759647] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.842 [2024-07-25 15:24:38.768854] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.842 [2024-07-25 15:24:38.769649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.842 [2024-07-25 15:24:38.769687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:46.842 [2024-07-25 15:24:38.769698] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:46.842 [2024-07-25 15:24:38.769938] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:46.842 [2024-07-25 15:24:38.770160] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.842 [2024-07-25 15:24:38.770170] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.842 [2024-07-25 15:24:38.770178] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.842 [2024-07-25 15:24:38.773747] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.842 [2024-07-25 15:24:38.782744] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.842 [2024-07-25 15:24:38.783414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.842 [2024-07-25 15:24:38.783452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:46.842 [2024-07-25 15:24:38.783463] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:46.842 [2024-07-25 15:24:38.783703] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:46.842 [2024-07-25 15:24:38.783926] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.842 [2024-07-25 15:24:38.783935] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.842 [2024-07-25 15:24:38.783943] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.843 [2024-07-25 15:24:38.787499] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.843 [2024-07-25 15:24:38.796704] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.843 [2024-07-25 15:24:38.797540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.843 [2024-07-25 15:24:38.797578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:46.843 [2024-07-25 15:24:38.797589] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:46.843 [2024-07-25 15:24:38.797829] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:46.843 [2024-07-25 15:24:38.798052] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.843 [2024-07-25 15:24:38.798061] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.843 [2024-07-25 15:24:38.798069] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.843 [2024-07-25 15:24:38.801661] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.843 [2024-07-25 15:24:38.810663] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.843 [2024-07-25 15:24:38.811465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.843 [2024-07-25 15:24:38.811503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:46.843 [2024-07-25 15:24:38.811515] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:46.843 [2024-07-25 15:24:38.811754] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:46.843 [2024-07-25 15:24:38.811977] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.843 [2024-07-25 15:24:38.811986] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.843 [2024-07-25 15:24:38.811994] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.843 [2024-07-25 15:24:38.815550] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.843 [2024-07-25 15:24:38.824551] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.843 [2024-07-25 15:24:38.825044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.843 [2024-07-25 15:24:38.825063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:46.843 [2024-07-25 15:24:38.825071] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:46.843 [2024-07-25 15:24:38.825297] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:46.843 [2024-07-25 15:24:38.825518] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.843 [2024-07-25 15:24:38.825526] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.843 [2024-07-25 15:24:38.825534] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.843 [2024-07-25 15:24:38.829075] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.843 [2024-07-25 15:24:38.838486] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.843 [2024-07-25 15:24:38.839311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.843 [2024-07-25 15:24:38.839349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:46.843 [2024-07-25 15:24:38.839360] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:46.843 [2024-07-25 15:24:38.839603] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:46.843 [2024-07-25 15:24:38.839827] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.843 [2024-07-25 15:24:38.839836] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.843 [2024-07-25 15:24:38.839844] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.843 [2024-07-25 15:24:38.843402] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.843 [2024-07-25 15:24:38.852396] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.843 [2024-07-25 15:24:38.853267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.843 [2024-07-25 15:24:38.853304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:46.843 [2024-07-25 15:24:38.853317] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:46.843 [2024-07-25 15:24:38.853558] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:46.843 [2024-07-25 15:24:38.853781] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.843 [2024-07-25 15:24:38.853790] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.843 [2024-07-25 15:24:38.853798] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.843 [2024-07-25 15:24:38.857357] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.843 [2024-07-25 15:24:38.866354] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.843 [2024-07-25 15:24:38.867062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.843 [2024-07-25 15:24:38.867100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:46.843 [2024-07-25 15:24:38.867110] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:46.843 [2024-07-25 15:24:38.867358] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:46.843 [2024-07-25 15:24:38.867582] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.843 [2024-07-25 15:24:38.867592] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.843 [2024-07-25 15:24:38.867600] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.843 [2024-07-25 15:24:38.871147] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.843 [2024-07-25 15:24:38.880151] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.843 [2024-07-25 15:24:38.880972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.843 [2024-07-25 15:24:38.881011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:46.843 [2024-07-25 15:24:38.881021] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:46.843 [2024-07-25 15:24:38.881269] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:46.843 [2024-07-25 15:24:38.881493] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.843 [2024-07-25 15:24:38.881502] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.843 [2024-07-25 15:24:38.881514] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.843 [2024-07-25 15:24:38.885064] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.843 [2024-07-25 15:24:38.894062] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.843 [2024-07-25 15:24:38.894515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.843 [2024-07-25 15:24:38.894535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:46.843 [2024-07-25 15:24:38.894543] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:46.843 [2024-07-25 15:24:38.894763] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:46.843 [2024-07-25 15:24:38.894983] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.843 [2024-07-25 15:24:38.894991] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.843 [2024-07-25 15:24:38.894998] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.843 [2024-07-25 15:24:38.898547] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.843 [2024-07-25 15:24:38.907956] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.843 [2024-07-25 15:24:38.908634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.843 [2024-07-25 15:24:38.908672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:46.843 [2024-07-25 15:24:38.908683] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:46.843 [2024-07-25 15:24:38.908922] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:46.843 [2024-07-25 15:24:38.909146] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.843 [2024-07-25 15:24:38.909156] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.843 [2024-07-25 15:24:38.909164] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.843 [2024-07-25 15:24:38.912726] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.843 [2024-07-25 15:24:38.921943] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.843 [2024-07-25 15:24:38.922785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.843 [2024-07-25 15:24:38.922823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:46.843 [2024-07-25 15:24:38.922833] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:46.843 [2024-07-25 15:24:38.923072] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:46.844 [2024-07-25 15:24:38.923304] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.844 [2024-07-25 15:24:38.923315] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.844 [2024-07-25 15:24:38.923323] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.844 [2024-07-25 15:24:38.926872] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.844 [2024-07-25 15:24:38.935868] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.844 [2024-07-25 15:24:38.936648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.844 [2024-07-25 15:24:38.936690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:46.844 [2024-07-25 15:24:38.936701] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:46.844 [2024-07-25 15:24:38.936940] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:46.844 [2024-07-25 15:24:38.937163] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.844 [2024-07-25 15:24:38.937173] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.844 [2024-07-25 15:24:38.937181] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.844 [2024-07-25 15:24:38.940739] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.844 [2024-07-25 15:24:38.949733] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.844 [2024-07-25 15:24:38.950551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.844 [2024-07-25 15:24:38.950589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:46.844 [2024-07-25 15:24:38.950600] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:46.844 [2024-07-25 15:24:38.950840] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:46.844 [2024-07-25 15:24:38.951063] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.844 [2024-07-25 15:24:38.951073] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.844 [2024-07-25 15:24:38.951081] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.844 [2024-07-25 15:24:38.954642] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.844 [2024-07-25 15:24:38.963648] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.844 [2024-07-25 15:24:38.964477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.844 [2024-07-25 15:24:38.964516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:46.844 [2024-07-25 15:24:38.964527] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:46.844 [2024-07-25 15:24:38.964766] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:46.844 [2024-07-25 15:24:38.964989] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.844 [2024-07-25 15:24:38.964999] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.844 [2024-07-25 15:24:38.965006] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.844 [2024-07-25 15:24:38.968562] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.844 [2024-07-25 15:24:38.977780] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.844 [2024-07-25 15:24:38.978447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.844 [2024-07-25 15:24:38.978485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:46.844 [2024-07-25 15:24:38.978496] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:46.844 [2024-07-25 15:24:38.978736] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:46.844 [2024-07-25 15:24:38.978963] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.844 [2024-07-25 15:24:38.978973] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.844 [2024-07-25 15:24:38.978981] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.844 [2024-07-25 15:24:38.982539] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.844 [2024-07-25 15:24:38.991750] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.844 [2024-07-25 15:24:38.992554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.844 [2024-07-25 15:24:38.992592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:46.844 [2024-07-25 15:24:38.992604] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:46.844 [2024-07-25 15:24:38.992843] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:46.844 [2024-07-25 15:24:38.993066] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.844 [2024-07-25 15:24:38.993077] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.844 [2024-07-25 15:24:38.993085] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.844 [2024-07-25 15:24:38.996644] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.844 [2024-07-25 15:24:39.005646] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.844 [2024-07-25 15:24:39.006452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.844 [2024-07-25 15:24:39.006491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:46.844 [2024-07-25 15:24:39.006501] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:46.844 [2024-07-25 15:24:39.006741] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:46.844 [2024-07-25 15:24:39.006965] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.844 [2024-07-25 15:24:39.006974] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.844 [2024-07-25 15:24:39.006983] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.844 [2024-07-25 15:24:39.010539] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.844 [2024-07-25 15:24:39.019538] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.844 [2024-07-25 15:24:39.020397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.844 [2024-07-25 15:24:39.020436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:46.844 [2024-07-25 15:24:39.020447] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:46.844 [2024-07-25 15:24:39.020687] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:46.844 [2024-07-25 15:24:39.020918] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.844 [2024-07-25 15:24:39.020929] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.844 [2024-07-25 15:24:39.020937] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.844 [2024-07-25 15:24:39.024500] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:47.107 [2024-07-25 15:24:39.033499] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.107 [2024-07-25 15:24:39.034249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.107 [2024-07-25 15:24:39.034275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:47.107 [2024-07-25 15:24:39.034283] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:47.107 [2024-07-25 15:24:39.034508] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:47.107 [2024-07-25 15:24:39.034729] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.107 [2024-07-25 15:24:39.034737] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.107 [2024-07-25 15:24:39.034744] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.107 [2024-07-25 15:24:39.038296] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:47.107 [2024-07-25 15:24:39.047496] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.107 [2024-07-25 15:24:39.048253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.107 [2024-07-25 15:24:39.048292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:47.107 [2024-07-25 15:24:39.048303] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:47.107 [2024-07-25 15:24:39.048542] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:47.107 [2024-07-25 15:24:39.048765] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.107 [2024-07-25 15:24:39.048775] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.107 [2024-07-25 15:24:39.048782] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.107 [2024-07-25 15:24:39.052342] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:47.107 [2024-07-25 15:24:39.061338] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.107 15:24:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:47.107 15:24:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:28:47.107 [2024-07-25 15:24:39.061907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.107 [2024-07-25 15:24:39.061945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:47.107 [2024-07-25 15:24:39.061956] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:47.107 [2024-07-25 15:24:39.062195] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:47.107 15:24:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:47.107 [2024-07-25 15:24:39.062429] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.107 [2024-07-25 15:24:39.062439] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.107 [2024-07-25 15:24:39.062446] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.107 15:24:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:47.107 15:24:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:47.107 [2024-07-25 15:24:39.065998] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:47.107 [2024-07-25 15:24:39.075220] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.107 [2024-07-25 15:24:39.075928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.107 [2024-07-25 15:24:39.075967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:47.107 [2024-07-25 15:24:39.075978] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:47.107 [2024-07-25 15:24:39.076226] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:47.107 [2024-07-25 15:24:39.076450] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.107 [2024-07-25 15:24:39.076460] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.107 [2024-07-25 15:24:39.076468] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.107 [2024-07-25 15:24:39.080017] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:47.107 [2024-07-25 15:24:39.089017] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.107 [2024-07-25 15:24:39.089748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.107 [2024-07-25 15:24:39.089787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:47.107 [2024-07-25 15:24:39.089799] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:47.107 [2024-07-25 15:24:39.090038] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:47.107 [2024-07-25 15:24:39.090270] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.107 [2024-07-25 15:24:39.090280] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.107 [2024-07-25 15:24:39.090289] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.107 [2024-07-25 15:24:39.093843] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:47.107 15:24:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:47.107 [2024-07-25 15:24:39.102842] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.107 15:24:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:47.107 15:24:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:47.107 15:24:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:47.107 [2024-07-25 15:24:39.103676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.107 [2024-07-25 15:24:39.103715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:47.107 [2024-07-25 15:24:39.103726] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:47.107 [2024-07-25 15:24:39.103965] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:47.107 [2024-07-25 15:24:39.104189] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.107 [2024-07-25 15:24:39.104199] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.107 [2024-07-25 15:24:39.104217] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.107 [2024-07-25 15:24:39.107772] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:47.107 [2024-07-25 15:24:39.109508] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:47.107 [2024-07-25 15:24:39.116768] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.107 [2024-07-25 15:24:39.117563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.107 [2024-07-25 15:24:39.117601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:47.107 [2024-07-25 15:24:39.117612] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:47.107 [2024-07-25 15:24:39.117851] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:47.107 [2024-07-25 15:24:39.118074] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.107 [2024-07-25 15:24:39.118083] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.107 [2024-07-25 15:24:39.118091] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.108 [2024-07-25 15:24:39.121660] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:47.108 15:24:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:47.108 15:24:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:47.108 15:24:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:47.108 15:24:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:47.108 [2024-07-25 15:24:39.130661] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.108 [2024-07-25 15:24:39.131508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.108 [2024-07-25 15:24:39.131546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:47.108 [2024-07-25 15:24:39.131557] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:47.108 [2024-07-25 15:24:39.131797] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:47.108 [2024-07-25 15:24:39.132020] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.108 [2024-07-25 15:24:39.132030] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.108 [2024-07-25 15:24:39.132038] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.108 [2024-07-25 15:24:39.135598] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:47.108 Malloc0 00:28:47.108 15:24:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:47.108 15:24:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:47.108 15:24:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:47.108 15:24:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:47.108 [2024-07-25 15:24:39.144601] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.108 [2024-07-25 15:24:39.145435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.108 [2024-07-25 15:24:39.145473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:47.108 [2024-07-25 15:24:39.145484] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:47.108 [2024-07-25 15:24:39.145724] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:47.108 [2024-07-25 15:24:39.145951] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.108 [2024-07-25 15:24:39.145961] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.108 [2024-07-25 15:24:39.145969] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.108 [2024-07-25 15:24:39.149524] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:47.108 15:24:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:47.108 15:24:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:47.108 15:24:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:47.108 15:24:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:47.108 [2024-07-25 15:24:39.158556] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.108 [2024-07-25 15:24:39.159430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.108 [2024-07-25 15:24:39.159468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:47.108 [2024-07-25 15:24:39.159480] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:47.108 [2024-07-25 15:24:39.159721] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:47.108 [2024-07-25 15:24:39.159944] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.108 [2024-07-25 15:24:39.159954] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.108 [2024-07-25 15:24:39.159962] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.108 [2024-07-25 15:24:39.163521] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:47.108 15:24:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:47.108 15:24:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:47.108 15:24:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:47.108 15:24:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:47.108 [2024-07-25 15:24:39.172528] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.108 [2024-07-25 15:24:39.173109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.108 [2024-07-25 15:24:39.173147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd3b0 with addr=10.0.0.2, port=4420 00:28:47.108 [2024-07-25 15:24:39.173160] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd3b0 is same with the state(5) to be set 00:28:47.108 [2024-07-25 15:24:39.173409] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd3b0 (9): Bad file descriptor 00:28:47.108 [2024-07-25 15:24:39.173633] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.108 [2024-07-25 15:24:39.173643] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.108 [2024-07-25 15:24:39.173650] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.108 [2024-07-25 15:24:39.173785] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:47.108 [2024-07-25 15:24:39.177204] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:47.108 15:24:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:47.108 15:24:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 428419 00:28:47.108 [2024-07-25 15:24:39.186409] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.108 [2024-07-25 15:24:39.269075] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:28:57.132 00:28:57.132 Latency(us) 00:28:57.132 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:57.132 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:57.132 Verification LBA range: start 0x0 length 0x4000 00:28:57.132 Nvme1n1 : 15.00 8724.01 34.08 9757.18 0.00 6900.74 1078.61 14417.92 00:28:57.132 =================================================================================================================== 00:28:57.132 Total : 8724.01 34.08 9757.18 0.00 6900.74 1078.61 14417.92 00:28:57.132 15:24:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:28:57.132 15:24:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:57.132 15:24:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:57.132 15:24:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:57.132 15:24:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:57.132 15:24:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:28:57.132 15:24:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:28:57.132 15:24:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:57.132 15:24:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:28:57.132 15:24:47 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:57.132 15:24:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:28:57.132 15:24:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:57.132 15:24:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:57.132 rmmod nvme_tcp 00:28:57.132 rmmod nvme_fabrics 00:28:57.132 rmmod nvme_keyring 00:28:57.132 15:24:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:57.132 15:24:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:28:57.132 15:24:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:28:57.132 15:24:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 429716 ']' 00:28:57.132 15:24:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 429716 00:28:57.132 15:24:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@950 -- # '[' -z 429716 ']' 00:28:57.132 15:24:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # kill -0 429716 00:28:57.132 15:24:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # uname 00:28:57.132 15:24:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:57.132 15:24:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 429716 00:28:57.132 15:24:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:57.132 15:24:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:28:57.132 15:24:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 429716' 00:28:57.132 killing process with pid 429716 00:28:57.132 15:24:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
common/autotest_common.sh@969 -- # kill 429716 00:28:57.132 15:24:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@974 -- # wait 429716 00:28:57.132 15:24:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:57.132 15:24:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:57.132 15:24:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:57.132 15:24:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:57.132 15:24:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:57.132 15:24:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:57.132 15:24:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:57.132 15:24:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:58.076 15:24:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:58.076 00:28:58.076 real 0m27.928s 00:28:58.076 user 1m2.671s 00:28:58.076 sys 0m7.558s 00:28:58.076 15:24:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:58.076 15:24:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:58.076 ************************************ 00:28:58.076 END TEST nvmf_bdevperf 00:28:58.076 ************************************ 00:28:58.338 15:24:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:28:58.338 15:24:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:28:58.338 15:24:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:58.338 15:24:50 
nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.338 ************************************ 00:28:58.338 START TEST nvmf_target_disconnect 00:28:58.338 ************************************ 00:28:58.338 15:24:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:28:58.338 * Looking for test storage... 00:28:58.338 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:58.338 15:24:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:58.338 15:24:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:28:58.338 15:24:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:58.338 15:24:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:58.338 15:24:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:58.338 15:24:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:58.338 15:24:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:58.338 15:24:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:58.338 15:24:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:58.338 15:24:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:58.338 15:24:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:58.338 15:24:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:58.338 15:24:50 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:58.338 15:24:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:58.338 15:24:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:58.338 15:24:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:58.338 15:24:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:58.338 15:24:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:58.338 15:24:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:58.338 15:24:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:58.338 15:24:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:58.338 15:24:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:58.338 15:24:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:58.338 15:24:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:58.338 15:24:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:58.338 15:24:50 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:28:58.338 15:24:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:58.338 15:24:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:28:58.338 15:24:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:58.338 15:24:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:58.338 15:24:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:58.338 15:24:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:58.338 15:24:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:58.338 15:24:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:58.338 15:24:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:58.338 15:24:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:58.338 15:24:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:28:58.338 15:24:50 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:28:58.338 15:24:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:28:58.338 15:24:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:28:58.338 15:24:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:58.338 15:24:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:58.338 15:24:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:58.338 15:24:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:58.338 15:24:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:58.338 15:24:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:58.338 15:24:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:58.338 15:24:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:58.338 15:24:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:58.338 15:24:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:58.338 15:24:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:28:58.338 15:24:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:06.486 15:24:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:06.486 15:24:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:29:06.486 15:24:57 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:06.486 15:24:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:06.486 15:24:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:06.487 15:24:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:06.487 15:24:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:06.487 15:24:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:29:06.487 15:24:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:06.487 15:24:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:29:06.487 15:24:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:29:06.487 15:24:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:29:06.487 15:24:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:29:06.487 15:24:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:29:06.487 15:24:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:29:06.487 15:24:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:06.487 15:24:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:06.487 15:24:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:06.487 15:24:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:06.487 15:24:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:06.487 15:24:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:06.487 15:24:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:06.487 15:24:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:06.487 15:24:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:06.487 15:24:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:06.487 15:24:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:06.487 15:24:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:06.487 15:24:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:06.487 15:24:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:06.487 15:24:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:06.487 15:24:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:06.487 15:24:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:06.487 15:24:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:06.487 15:24:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:06.487 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:06.487 15:24:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:06.487 15:24:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect 
-- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:06.487 15:24:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:06.487 15:24:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:06.487 15:24:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:06.487 15:24:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:06.487 15:24:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:06.487 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:06.487 15:24:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:06.487 15:24:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:06.487 15:24:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:06.487 15:24:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:06.487 15:24:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:06.487 15:24:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:06.487 15:24:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:06.487 15:24:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:06.487 15:24:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:06.487 15:24:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:06.487 15:24:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:06.487 15:24:57 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:06.487 15:24:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:06.487 15:24:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:06.487 15:24:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:06.487 15:24:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:06.487 Found net devices under 0000:4b:00.0: cvl_0_0 00:29:06.487 15:24:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:06.487 15:24:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:06.487 15:24:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:06.487 15:24:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:06.487 15:24:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:06.487 15:24:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:06.487 15:24:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:06.487 15:24:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:06.487 15:24:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:06.487 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:06.487 15:24:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:06.487 15:24:57 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:06.487 15:24:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:29:06.487 15:24:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:06.487 15:24:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:06.487 15:24:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:06.487 15:24:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:06.487 15:24:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:06.487 15:24:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:06.487 15:24:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:06.487 15:24:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:06.487 15:24:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:06.487 15:24:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:06.487 15:24:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:06.487 15:24:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:06.487 15:24:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:06.487 15:24:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:06.487 15:24:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:06.487 15:24:57 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:06.487 15:24:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:06.487 15:24:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:06.487 15:24:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:06.487 15:24:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:06.487 15:24:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:06.487 15:24:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:06.487 15:24:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:06.487 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:06.487 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.592 ms 00:29:06.487 00:29:06.487 --- 10.0.0.2 ping statistics --- 00:29:06.487 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:06.487 rtt min/avg/max/mdev = 0.592/0.592/0.592/0.000 ms 00:29:06.487 15:24:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:06.487 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:06.487 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.404 ms 00:29:06.487 00:29:06.487 --- 10.0.0.1 ping statistics --- 00:29:06.487 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:06.487 rtt min/avg/max/mdev = 0.404/0.404/0.404/0.000 ms 00:29:06.487 15:24:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:06.487 15:24:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:29:06.487 15:24:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:06.487 15:24:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:06.487 15:24:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:06.487 15:24:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:06.488 15:24:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:06.488 15:24:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:06.488 15:24:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:06.488 15:24:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:29:06.488 15:24:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:29:06.488 15:24:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:06.488 15:24:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:06.488 ************************************ 00:29:06.488 START TEST nvmf_target_disconnect_tc1 00:29:06.488 ************************************ 00:29:06.488 15:24:57 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc1
00:29:06.488 15:24:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:29:06.488 15:24:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # local es=0
00:29:06.488 15:24:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:29:06.488 15:24:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect
00:29:06.488 15:24:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:29:06.488 15:24:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect
00:29:06.488 15:24:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:29:06.488 15:24:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect
00:29:06.488 15:24:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:29:06.488 15:24:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect
00:29:06.488 15:24:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]]
00:29:06.488 15:24:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:29:06.488 EAL: No free 2048 kB hugepages reported on node 1
00:29:06.488 [2024-07-25 15:24:57.700235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.488 [2024-07-25 15:24:57.700330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2321e20 with addr=10.0.0.2, port=4420
00:29:06.488 [2024-07-25 15:24:57.700366] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair
00:29:06.488 [2024-07-25 15:24:57.700383] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed
00:29:06.488 [2024-07-25 15:24:57.700391] nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed
00:29:06.488 spdk_nvme_probe() failed for transport address '10.0.0.2'
00:29:06.488 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred
00:29:06.488 Initializing NVMe Controllers
00:29:06.488 15:24:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # es=1
00:29:06.488 15:24:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:29:06.488 15:24:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:29:06.488 15:24:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:29:06.488
00:29:06.488 real 0m0.117s
00:29:06.488 user 0m0.053s
00:29:06.488 sys 0m0.063s
00:29:06.488 15:24:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable
00:29:06.488 15:24:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x
00:29:06.488 ************************************
00:29:06.488 END TEST nvmf_target_disconnect_tc1
00:29:06.488 ************************************
00:29:06.488 15:24:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2
00:29:06.488 15:24:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:29:06.488 15:24:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable
00:29:06.488 15:24:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x
00:29:06.488 ************************************
00:29:06.488 START TEST nvmf_target_disconnect_tc2
00:29:06.488 ************************************
00:29:06.488 15:24:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc2
00:29:06.488 15:24:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2
00:29:06.488 15:24:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:29:06.488 15:24:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:29:06.488 15:24:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable
00:29:06.488 15:24:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:06.488 15:24:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=435762
00:29:06.488 15:24:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 435762
00:29:06.488 15:24:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:29:06.488 15:24:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 435762 ']'
00:29:06.488 15:24:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:29:06.488 15:24:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100
00:29:06.488 15:24:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:29:06.488 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:29:06.488 15:24:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable
00:29:06.488 15:24:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:06.488 [2024-07-25 15:24:57.855737] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
00:29:06.488 [2024-07-25 15:24:57.855788] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:29:06.488 EAL: No free 2048 kB hugepages reported on node 1
00:29:06.488 [2024-07-25 15:24:57.939186] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:29:06.488 [2024-07-25 15:24:58.032144] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:29:06.488 [2024-07-25 15:24:58.032232] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:29:06.488 [2024-07-25 15:24:58.032240] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:29:06.488 [2024-07-25 15:24:58.032247] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:29:06.488 [2024-07-25 15:24:58.032254] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
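[Editor's annotation, not part of the original log: the `-m 0xF0` core mask passed to `nvmf_tgt` above is a bitmap of CPU cores, which is why the reactor lines that follow report cores 4 through 7. A minimal sketch of how such a mask decodes, for readers unfamiliar with SPDK/DPDK core masks:]

```python
# Decode an SPDK/DPDK-style core mask into the list of selected CPU cores.
# 0xF0 is the value passed via `nvmf_tgt -m 0xF0` in the log above.
mask = 0xF0
cores = [bit for bit in range(mask.bit_length()) if (mask >> bit) & 1]
print(cores)  # [4, 5, 6, 7]
```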
00:29:06.488 [2024-07-25 15:24:58.032417] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5
00:29:06.488 [2024-07-25 15:24:58.032675] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6
00:29:06.488 [2024-07-25 15:24:58.032835] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7
00:29:06.488 [2024-07-25 15:24:58.032836] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4
00:29:06.488 15:24:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:29:06.488 15:24:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0
00:29:06.488 15:24:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:29:06.488 15:24:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable
00:29:06.488 15:24:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:06.750 15:24:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:29:06.751 15:24:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:29:06.751 15:24:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:06.751 15:24:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:06.751 Malloc0
00:29:06.751 15:24:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:06.751 15:24:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:29:06.751 15:24:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:06.751 15:24:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:06.751 [2024-07-25 15:24:58.727585] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:29:06.751 15:24:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:06.751 15:24:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:29:06.751 15:24:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:06.751 15:24:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:06.751 15:24:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:06.751 15:24:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:29:06.751 15:24:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:06.751 15:24:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:06.751 15:24:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:06.751 15:24:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:29:06.751 15:24:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:06.751 15:24:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:06.751 [2024-07-25 15:24:58.768009] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:29:06.751 15:24:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:06.751 15:24:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:29:06.751 15:24:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:06.751 15:24:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:06.751 15:24:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:06.751 15:24:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=435789
00:29:06.751 15:24:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2
00:29:06.751 15:24:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:29:06.751 EAL: No free 2048 kB hugepages reported on node 1
00:29:08.669 15:25:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 435762
00:29:08.669 15:25:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2
00:29:08.669 Read completed with error (sct=0, sc=8)
00:29:08.669 starting I/O failed
00:29:08.669 Read completed with error (sct=0, sc=8)
00:29:08.669 starting I/O failed
00:29:08.669 Read completed with error (sct=0, sc=8)
00:29:08.669 starting I/O failed
00:29:08.669 Read completed with error (sct=0, sc=8)
00:29:08.669 starting I/O failed
00:29:08.669 Read completed with error (sct=0, sc=8)
00:29:08.669 starting I/O failed
00:29:08.669 Read completed with error (sct=0, sc=8)
00:29:08.669 starting I/O failed
00:29:08.669 Read completed with error (sct=0, sc=8)
00:29:08.669 starting I/O failed
00:29:08.669 Read completed with error (sct=0, sc=8)
00:29:08.669 starting I/O failed
00:29:08.669 Read completed with error (sct=0, sc=8)
00:29:08.669 starting I/O failed
00:29:08.669 Read completed with error (sct=0, sc=8)
00:29:08.669 starting I/O failed
00:29:08.669 Read completed with error (sct=0, sc=8)
00:29:08.669 starting I/O failed
00:29:08.669 Write completed with error (sct=0, sc=8)
00:29:08.669 starting I/O failed
00:29:08.669 Read completed with error (sct=0, sc=8)
00:29:08.669 starting I/O failed
00:29:08.669 Write completed with error (sct=0, sc=8)
00:29:08.669 starting I/O failed
00:29:08.669 Write completed with error (sct=0, sc=8)
00:29:08.669 starting I/O failed
00:29:08.669 Write completed with error (sct=0, sc=8)
00:29:08.669 starting I/O failed
00:29:08.669 Read completed with error (sct=0, sc=8)
00:29:08.669 starting I/O failed
00:29:08.669 Read completed with error (sct=0, sc=8)
00:29:08.669 starting I/O failed
00:29:08.669 Write completed with error (sct=0, sc=8)
00:29:08.669 starting I/O failed
00:29:08.669 Write completed with error (sct=0, sc=8)
00:29:08.669 starting I/O failed
00:29:08.669 Read completed with error (sct=0, sc=8)
00:29:08.669 starting I/O failed
00:29:08.669 Write completed with error (sct=0, sc=8)
00:29:08.669 starting I/O failed
00:29:08.669 Write completed with error (sct=0, sc=8)
00:29:08.669 starting I/O failed
00:29:08.669 Write completed with error (sct=0, sc=8)
00:29:08.669 starting I/O failed
00:29:08.669 Read completed with error (sct=0, sc=8)
00:29:08.669 starting I/O failed
00:29:08.669 Read completed with error (sct=0, sc=8)
00:29:08.670 starting I/O failed
00:29:08.670 Read completed with error (sct=0, sc=8)
00:29:08.670 starting I/O failed
00:29:08.670 Write completed with error (sct=0, sc=8)
00:29:08.670 starting I/O failed
00:29:08.670 Read completed with error (sct=0, sc=8)
00:29:08.670 starting I/O failed
00:29:08.670 Write completed with error (sct=0, sc=8)
00:29:08.670 starting I/O failed
00:29:08.670 Write completed with error (sct=0, sc=8)
00:29:08.670 starting I/O failed
00:29:08.670 Write completed with error (sct=0, sc=8)
00:29:08.670 starting I/O failed
00:29:08.670 [2024-07-25 15:25:00.803325] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:08.670 [2024-07-25 15:25:00.803815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.670 [2024-07-25 15:25:00.803830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.670 qpair failed and we were unable to recover it.
00:29:08.670 [2024-07-25 15:25:00.804452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.670 [2024-07-25 15:25:00.804481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.670 qpair failed and we were unable to recover it.
00:29:08.670 [2024-07-25 15:25:00.804742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.670 [2024-07-25 15:25:00.804751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.670 qpair failed and we were unable to recover it.
00:29:08.670 [2024-07-25 15:25:00.805412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.670 [2024-07-25 15:25:00.805441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.670 qpair failed and we were unable to recover it.
00:29:08.670 [2024-07-25 15:25:00.805922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.670 [2024-07-25 15:25:00.805932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.670 qpair failed and we were unable to recover it.
00:29:08.670 [2024-07-25 15:25:00.806409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.670 [2024-07-25 15:25:00.806438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.670 qpair failed and we were unable to recover it.
00:29:08.670 [2024-07-25 15:25:00.806879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.670 [2024-07-25 15:25:00.806890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.670 qpair failed and we were unable to recover it.
00:29:08.670 [2024-07-25 15:25:00.807457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.670 [2024-07-25 15:25:00.807486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.670 qpair failed and we were unable to recover it.
00:29:08.670 [2024-07-25 15:25:00.807833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.670 [2024-07-25 15:25:00.807844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.670 qpair failed and we were unable to recover it.
00:29:08.670 [2024-07-25 15:25:00.808292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.670 [2024-07-25 15:25:00.808301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.670 qpair failed and we were unable to recover it.
00:29:08.670 [2024-07-25 15:25:00.808826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.670 [2024-07-25 15:25:00.808836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.670 qpair failed and we were unable to recover it.
00:29:08.670 [2024-07-25 15:25:00.809315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.670 [2024-07-25 15:25:00.809324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.670 qpair failed and we were unable to recover it.
00:29:08.670 [2024-07-25 15:25:00.809785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.670 [2024-07-25 15:25:00.809794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.670 qpair failed and we were unable to recover it.
00:29:08.670 [2024-07-25 15:25:00.810280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.670 [2024-07-25 15:25:00.810289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.670 qpair failed and we were unable to recover it.
00:29:08.670 [2024-07-25 15:25:00.810829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.670 [2024-07-25 15:25:00.810838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.670 qpair failed and we were unable to recover it.
00:29:08.670 [2024-07-25 15:25:00.811313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.670 [2024-07-25 15:25:00.811322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.670 qpair failed and we were unable to recover it.
00:29:08.670 [2024-07-25 15:25:00.811775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.670 [2024-07-25 15:25:00.811784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.670 qpair failed and we were unable to recover it.
00:29:08.670 [2024-07-25 15:25:00.812288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.670 [2024-07-25 15:25:00.812297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.670 qpair failed and we were unable to recover it.
00:29:08.670 [2024-07-25 15:25:00.812766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.670 [2024-07-25 15:25:00.812775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.670 qpair failed and we were unable to recover it.
00:29:08.670 [2024-07-25 15:25:00.813249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.670 [2024-07-25 15:25:00.813259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.670 qpair failed and we were unable to recover it.
00:29:08.670 [2024-07-25 15:25:00.813736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.670 [2024-07-25 15:25:00.813745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.670 qpair failed and we were unable to recover it.
00:29:08.670 [2024-07-25 15:25:00.814098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.670 [2024-07-25 15:25:00.814107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.670 qpair failed and we were unable to recover it.
00:29:08.670 [2024-07-25 15:25:00.814446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.670 [2024-07-25 15:25:00.814454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.670 qpair failed and we were unable to recover it.
00:29:08.670 [2024-07-25 15:25:00.814933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.670 [2024-07-25 15:25:00.814944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.670 qpair failed and we were unable to recover it.
00:29:08.670 [2024-07-25 15:25:00.815401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.670 [2024-07-25 15:25:00.815410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.670 qpair failed and we were unable to recover it.
00:29:08.670 [2024-07-25 15:25:00.815756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.670 [2024-07-25 15:25:00.815764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.670 qpair failed and we were unable to recover it.
00:29:08.670 [2024-07-25 15:25:00.816105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.670 [2024-07-25 15:25:00.816113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.670 qpair failed and we were unable to recover it.
00:29:08.670 [2024-07-25 15:25:00.816575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.670 [2024-07-25 15:25:00.816585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.670 qpair failed and we were unable to recover it.
00:29:08.670 [2024-07-25 15:25:00.816835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.670 [2024-07-25 15:25:00.816843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.670 qpair failed and we were unable to recover it.
00:29:08.670 [2024-07-25 15:25:00.817269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.670 [2024-07-25 15:25:00.817277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.670 qpair failed and we were unable to recover it.
00:29:08.670 [2024-07-25 15:25:00.817756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.670 [2024-07-25 15:25:00.817765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.670 qpair failed and we were unable to recover it.
00:29:08.670 [2024-07-25 15:25:00.818236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.670 [2024-07-25 15:25:00.818245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.670 qpair failed and we were unable to recover it.
00:29:08.670 [2024-07-25 15:25:00.818716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.670 [2024-07-25 15:25:00.818724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.670 qpair failed and we were unable to recover it.
00:29:08.670 [2024-07-25 15:25:00.819197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.671 [2024-07-25 15:25:00.819211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.671 qpair failed and we were unable to recover it.
00:29:08.671 [2024-07-25 15:25:00.819648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.671 [2024-07-25 15:25:00.819658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.671 qpair failed and we were unable to recover it.
00:29:08.671 [2024-07-25 15:25:00.820722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.671 [2024-07-25 15:25:00.820740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.671 qpair failed and we were unable to recover it.
00:29:08.671 [2024-07-25 15:25:00.821234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.671 [2024-07-25 15:25:00.821251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.671 qpair failed and we were unable to recover it.
00:29:08.671 [2024-07-25 15:25:00.821962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.671 [2024-07-25 15:25:00.821980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.671 qpair failed and we were unable to recover it.
00:29:08.671 [2024-07-25 15:25:00.822433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.671 [2024-07-25 15:25:00.822443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.671 qpair failed and we were unable to recover it.
00:29:08.671 [2024-07-25 15:25:00.822874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.671 [2024-07-25 15:25:00.822883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.671 qpair failed and we were unable to recover it.
00:29:08.671 [2024-07-25 15:25:00.823251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.671 [2024-07-25 15:25:00.823259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.671 qpair failed and we were unable to recover it.
00:29:08.671 [2024-07-25 15:25:00.823738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.671 [2024-07-25 15:25:00.823746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.671 qpair failed and we were unable to recover it.
00:29:08.671 [2024-07-25 15:25:00.824150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.671 [2024-07-25 15:25:00.824158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.671 qpair failed and we were unable to recover it.
00:29:08.671 [2024-07-25 15:25:00.824496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.671 [2024-07-25 15:25:00.824504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.671 qpair failed and we were unable to recover it.
00:29:08.671 [2024-07-25 15:25:00.824815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.671 [2024-07-25 15:25:00.824822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.671 qpair failed and we were unable to recover it.
00:29:08.671 [2024-07-25 15:25:00.825063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.671 [2024-07-25 15:25:00.825071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.671 qpair failed and we were unable to recover it.
00:29:08.671 [2024-07-25 15:25:00.825539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.671 [2024-07-25 15:25:00.825548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.671 qpair failed and we were unable to recover it.
00:29:08.671 [2024-07-25 15:25:00.825905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.671 [2024-07-25 15:25:00.825914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.671 qpair failed and we were unable to recover it.
00:29:08.671 [2024-07-25 15:25:00.826390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.671 [2024-07-25 15:25:00.826398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.671 qpair failed and we were unable to recover it.
00:29:08.671 [2024-07-25 15:25:00.826842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.671 [2024-07-25 15:25:00.826850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.671 qpair failed and we were unable to recover it.
00:29:08.671 [2024-07-25 15:25:00.827284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.671 [2024-07-25 15:25:00.827293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.671 qpair failed and we were unable to recover it.
00:29:08.671 [2024-07-25 15:25:00.827787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.671 [2024-07-25 15:25:00.827794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.671 qpair failed and we were unable to recover it.
00:29:08.671 [2024-07-25 15:25:00.828022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.671 [2024-07-25 15:25:00.828035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.671 qpair failed and we were unable to recover it.
00:29:08.671 [2024-07-25 15:25:00.828354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.671 [2024-07-25 15:25:00.828364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.671 qpair failed and we were unable to recover it.
00:29:08.671 [2024-07-25 15:25:00.828723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.671 [2024-07-25 15:25:00.828731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.671 qpair failed and we were unable to recover it.
00:29:08.671 [2024-07-25 15:25:00.829183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.671 [2024-07-25 15:25:00.829191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.671 qpair failed and we were unable to recover it.
00:29:08.671 [2024-07-25 15:25:00.829579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.671 [2024-07-25 15:25:00.829588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.671 qpair failed and we were unable to recover it.
00:29:08.671 [2024-07-25 15:25:00.829867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.671 [2024-07-25 15:25:00.829876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.671 qpair failed and we were unable to recover it.
00:29:08.671 [2024-07-25 15:25:00.830302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.671 [2024-07-25 15:25:00.830311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.671 qpair failed and we were unable to recover it.
00:29:08.671 [2024-07-25 15:25:00.830785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.671 [2024-07-25 15:25:00.830794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.671 qpair failed and we were unable to recover it.
00:29:08.671 [2024-07-25 15:25:00.831254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.671 [2024-07-25 15:25:00.831262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.671 qpair failed and we were unable to recover it.
00:29:08.671 [2024-07-25 15:25:00.831691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.671 [2024-07-25 15:25:00.831699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.671 qpair failed and we were unable to recover it.
00:29:08.671 [2024-07-25 15:25:00.832133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.671 [2024-07-25 15:25:00.832141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.671 qpair failed and we were unable to recover it. 00:29:08.671 [2024-07-25 15:25:00.832619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.671 [2024-07-25 15:25:00.832629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.671 qpair failed and we were unable to recover it. 00:29:08.671 [2024-07-25 15:25:00.833111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.671 [2024-07-25 15:25:00.833119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.671 qpair failed and we were unable to recover it. 00:29:08.671 [2024-07-25 15:25:00.833514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.671 [2024-07-25 15:25:00.833523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.671 qpair failed and we were unable to recover it. 00:29:08.671 [2024-07-25 15:25:00.833988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.671 [2024-07-25 15:25:00.833997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.671 qpair failed and we were unable to recover it. 
00:29:08.671 [2024-07-25 15:25:00.834557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.671 [2024-07-25 15:25:00.834585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.671 qpair failed and we were unable to recover it. 00:29:08.671 [2024-07-25 15:25:00.835040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.671 [2024-07-25 15:25:00.835049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.671 qpair failed and we were unable to recover it. 00:29:08.671 [2024-07-25 15:25:00.835514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.672 [2024-07-25 15:25:00.835543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.672 qpair failed and we were unable to recover it. 00:29:08.672 [2024-07-25 15:25:00.836031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.672 [2024-07-25 15:25:00.836041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.672 qpair failed and we were unable to recover it. 00:29:08.672 [2024-07-25 15:25:00.836481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.672 [2024-07-25 15:25:00.836509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.672 qpair failed and we were unable to recover it. 
00:29:08.672 [2024-07-25 15:25:00.836869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.672 [2024-07-25 15:25:00.836880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.672 qpair failed and we were unable to recover it. 00:29:08.672 [2024-07-25 15:25:00.837422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.672 [2024-07-25 15:25:00.837451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.672 qpair failed and we were unable to recover it. 00:29:08.672 [2024-07-25 15:25:00.837789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.672 [2024-07-25 15:25:00.837799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.672 qpair failed and we were unable to recover it. 00:29:08.672 [2024-07-25 15:25:00.838241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.672 [2024-07-25 15:25:00.838250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.672 qpair failed and we were unable to recover it. 00:29:08.672 [2024-07-25 15:25:00.838705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.672 [2024-07-25 15:25:00.838714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.672 qpair failed and we were unable to recover it. 
00:29:08.672 [2024-07-25 15:25:00.839157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.672 [2024-07-25 15:25:00.839166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.672 qpair failed and we were unable to recover it. 00:29:08.672 [2024-07-25 15:25:00.839636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.672 [2024-07-25 15:25:00.839646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.672 qpair failed and we were unable to recover it. 00:29:08.672 [2024-07-25 15:25:00.840000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.672 [2024-07-25 15:25:00.840007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.672 qpair failed and we were unable to recover it. 00:29:08.672 [2024-07-25 15:25:00.840441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.672 [2024-07-25 15:25:00.840470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.672 qpair failed and we were unable to recover it. 00:29:08.672 [2024-07-25 15:25:00.840835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.672 [2024-07-25 15:25:00.840845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.672 qpair failed and we were unable to recover it. 
00:29:08.672 [2024-07-25 15:25:00.841303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.672 [2024-07-25 15:25:00.841311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.672 qpair failed and we were unable to recover it. 00:29:08.672 [2024-07-25 15:25:00.841767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.672 [2024-07-25 15:25:00.841774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.672 qpair failed and we were unable to recover it. 00:29:08.672 [2024-07-25 15:25:00.842150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.672 [2024-07-25 15:25:00.842159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.672 qpair failed and we were unable to recover it. 00:29:08.672 [2024-07-25 15:25:00.842524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.672 [2024-07-25 15:25:00.842533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.672 qpair failed and we were unable to recover it. 00:29:08.672 [2024-07-25 15:25:00.842909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.672 [2024-07-25 15:25:00.842917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.672 qpair failed and we were unable to recover it. 
00:29:08.672 [2024-07-25 15:25:00.843398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.672 [2024-07-25 15:25:00.843406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.672 qpair failed and we were unable to recover it. 00:29:08.672 [2024-07-25 15:25:00.843907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.672 [2024-07-25 15:25:00.843916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.672 qpair failed and we were unable to recover it. 00:29:08.672 [2024-07-25 15:25:00.844335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.672 [2024-07-25 15:25:00.844343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.672 qpair failed and we were unable to recover it. 00:29:08.672 [2024-07-25 15:25:00.844792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.672 [2024-07-25 15:25:00.844801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.672 qpair failed and we were unable to recover it. 00:29:08.672 [2024-07-25 15:25:00.845259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.672 [2024-07-25 15:25:00.845267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.672 qpair failed and we were unable to recover it. 
00:29:08.672 [2024-07-25 15:25:00.845601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.672 [2024-07-25 15:25:00.845611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.672 qpair failed and we were unable to recover it. 00:29:08.672 [2024-07-25 15:25:00.846084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.672 [2024-07-25 15:25:00.846092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.672 qpair failed and we were unable to recover it. 00:29:08.672 [2024-07-25 15:25:00.846594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.672 [2024-07-25 15:25:00.846602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.672 qpair failed and we were unable to recover it. 00:29:08.672 [2024-07-25 15:25:00.846983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.672 [2024-07-25 15:25:00.846991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.672 qpair failed and we were unable to recover it. 00:29:08.672 [2024-07-25 15:25:00.847411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.672 [2024-07-25 15:25:00.847420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.672 qpair failed and we were unable to recover it. 
00:29:08.672 [2024-07-25 15:25:00.847778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.672 [2024-07-25 15:25:00.847786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.672 qpair failed and we were unable to recover it. 00:29:08.672 [2024-07-25 15:25:00.848226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.672 [2024-07-25 15:25:00.848234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.672 qpair failed and we were unable to recover it. 00:29:08.672 [2024-07-25 15:25:00.848725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.672 [2024-07-25 15:25:00.848733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.672 qpair failed and we were unable to recover it. 00:29:08.672 [2024-07-25 15:25:00.849210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.672 [2024-07-25 15:25:00.849219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.672 qpair failed and we were unable to recover it. 00:29:08.672 [2024-07-25 15:25:00.849630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.672 [2024-07-25 15:25:00.849638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.672 qpair failed and we were unable to recover it. 
00:29:08.672 [2024-07-25 15:25:00.850115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.672 [2024-07-25 15:25:00.850123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.672 qpair failed and we were unable to recover it. 00:29:08.672 [2024-07-25 15:25:00.850587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.672 [2024-07-25 15:25:00.850597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.672 qpair failed and we were unable to recover it. 00:29:08.672 [2024-07-25 15:25:00.851069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.672 [2024-07-25 15:25:00.851077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.672 qpair failed and we were unable to recover it. 00:29:08.673 [2024-07-25 15:25:00.851577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.673 [2024-07-25 15:25:00.851585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.673 qpair failed and we were unable to recover it. 00:29:08.673 [2024-07-25 15:25:00.852069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.673 [2024-07-25 15:25:00.852077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.673 qpair failed and we were unable to recover it. 
00:29:08.673 [2024-07-25 15:25:00.852644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.673 [2024-07-25 15:25:00.852674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.673 qpair failed and we were unable to recover it. 00:29:08.673 [2024-07-25 15:25:00.853075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.673 [2024-07-25 15:25:00.853084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.673 qpair failed and we were unable to recover it. 00:29:08.673 [2024-07-25 15:25:00.853652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.673 [2024-07-25 15:25:00.853681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.673 qpair failed and we were unable to recover it. 00:29:08.673 [2024-07-25 15:25:00.854123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.673 [2024-07-25 15:25:00.854133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.673 qpair failed and we were unable to recover it. 00:29:08.673 [2024-07-25 15:25:00.854609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.673 [2024-07-25 15:25:00.854618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.673 qpair failed and we were unable to recover it. 
00:29:08.673 [2024-07-25 15:25:00.854998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.673 [2024-07-25 15:25:00.855007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.673 qpair failed and we were unable to recover it. 00:29:08.673 [2024-07-25 15:25:00.855551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.673 [2024-07-25 15:25:00.855580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.673 qpair failed and we were unable to recover it. 00:29:08.673 [2024-07-25 15:25:00.856032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.673 [2024-07-25 15:25:00.856042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.673 qpair failed and we were unable to recover it. 00:29:08.673 [2024-07-25 15:25:00.856518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.673 [2024-07-25 15:25:00.856546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.673 qpair failed and we were unable to recover it. 00:29:08.673 [2024-07-25 15:25:00.857030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.673 [2024-07-25 15:25:00.857041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.673 qpair failed and we were unable to recover it. 
00:29:08.673 [2024-07-25 15:25:00.857594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.673 [2024-07-25 15:25:00.857623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.673 qpair failed and we were unable to recover it. 00:29:08.940 [2024-07-25 15:25:00.858050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.940 [2024-07-25 15:25:00.858061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.940 qpair failed and we were unable to recover it. 00:29:08.940 [2024-07-25 15:25:00.858621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.940 [2024-07-25 15:25:00.858649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.940 qpair failed and we were unable to recover it. 00:29:08.940 [2024-07-25 15:25:00.859091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.940 [2024-07-25 15:25:00.859101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.940 qpair failed and we were unable to recover it. 00:29:08.940 [2024-07-25 15:25:00.859662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.940 [2024-07-25 15:25:00.859691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.940 qpair failed and we were unable to recover it. 
00:29:08.940 [2024-07-25 15:25:00.860159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.940 [2024-07-25 15:25:00.860168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.940 qpair failed and we were unable to recover it. 00:29:08.940 [2024-07-25 15:25:00.860523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.940 [2024-07-25 15:25:00.860552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.940 qpair failed and we were unable to recover it. 00:29:08.940 [2024-07-25 15:25:00.860957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.940 [2024-07-25 15:25:00.860967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.940 qpair failed and we were unable to recover it. 00:29:08.940 [2024-07-25 15:25:00.861448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.940 [2024-07-25 15:25:00.861477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.940 qpair failed and we were unable to recover it. 00:29:08.940 [2024-07-25 15:25:00.861941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.940 [2024-07-25 15:25:00.861951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.940 qpair failed and we were unable to recover it. 
00:29:08.940 [2024-07-25 15:25:00.862587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.940 [2024-07-25 15:25:00.862616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.940 qpair failed and we were unable to recover it. 00:29:08.940 [2024-07-25 15:25:00.863074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.940 [2024-07-25 15:25:00.863084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.940 qpair failed and we were unable to recover it. 00:29:08.940 [2024-07-25 15:25:00.863462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.940 [2024-07-25 15:25:00.863471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.940 qpair failed and we were unable to recover it. 00:29:08.940 [2024-07-25 15:25:00.863834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.940 [2024-07-25 15:25:00.863843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.940 qpair failed and we were unable to recover it. 00:29:08.940 [2024-07-25 15:25:00.864282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.940 [2024-07-25 15:25:00.864290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.940 qpair failed and we were unable to recover it. 
00:29:08.940 [2024-07-25 15:25:00.864684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.940 [2024-07-25 15:25:00.864693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.940 qpair failed and we were unable to recover it. 00:29:08.940 [2024-07-25 15:25:00.865099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.940 [2024-07-25 15:25:00.865107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.940 qpair failed and we were unable to recover it. 00:29:08.940 [2024-07-25 15:25:00.865581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.940 [2024-07-25 15:25:00.865590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.940 qpair failed and we were unable to recover it. 00:29:08.940 [2024-07-25 15:25:00.866034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.940 [2024-07-25 15:25:00.866043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.940 qpair failed and we were unable to recover it. 00:29:08.940 [2024-07-25 15:25:00.866490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.940 [2024-07-25 15:25:00.866498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.940 qpair failed and we were unable to recover it. 
00:29:08.940 [2024-07-25 15:25:00.866941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.940 [2024-07-25 15:25:00.866950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.940 qpair failed and we were unable to recover it. 00:29:08.940 [2024-07-25 15:25:00.867509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.940 [2024-07-25 15:25:00.867538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.940 qpair failed and we were unable to recover it. 00:29:08.940 [2024-07-25 15:25:00.868018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.940 [2024-07-25 15:25:00.868027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.940 qpair failed and we were unable to recover it. 00:29:08.940 [2024-07-25 15:25:00.868258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.940 [2024-07-25 15:25:00.868272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.940 qpair failed and we were unable to recover it. 00:29:08.940 [2024-07-25 15:25:00.868748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.940 [2024-07-25 15:25:00.868756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.940 qpair failed and we were unable to recover it. 
00:29:08.940 [2024-07-25 15:25:00.869196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.940 [2024-07-25 15:25:00.869209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.940 qpair failed and we were unable to recover it. 00:29:08.940 [2024-07-25 15:25:00.869666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.940 [2024-07-25 15:25:00.869677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.940 qpair failed and we were unable to recover it. 00:29:08.940 [2024-07-25 15:25:00.870138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.940 [2024-07-25 15:25:00.870146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.940 qpair failed and we were unable to recover it. 00:29:08.940 [2024-07-25 15:25:00.870817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.940 [2024-07-25 15:25:00.870846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.940 qpair failed and we were unable to recover it. 00:29:08.940 [2024-07-25 15:25:00.871417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.940 [2024-07-25 15:25:00.871446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.940 qpair failed and we were unable to recover it. 
00:29:08.940 [2024-07-25 15:25:00.871960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.940 [2024-07-25 15:25:00.871970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.940 qpair failed and we were unable to recover it. 00:29:08.940 [2024-07-25 15:25:00.872452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.940 [2024-07-25 15:25:00.872481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.940 qpair failed and we were unable to recover it. 00:29:08.940 [2024-07-25 15:25:00.872943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.940 [2024-07-25 15:25:00.872954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.940 qpair failed and we were unable to recover it. 00:29:08.940 [2024-07-25 15:25:00.873414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.940 [2024-07-25 15:25:00.873443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.940 qpair failed and we were unable to recover it. 00:29:08.940 [2024-07-25 15:25:00.873923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.940 [2024-07-25 15:25:00.873933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.940 qpair failed and we were unable to recover it. 
00:29:08.940 [2024-07-25 15:25:00.874568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.940 [2024-07-25 15:25:00.874598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.940 qpair failed and we were unable to recover it. 00:29:08.940 [2024-07-25 15:25:00.875071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.940 [2024-07-25 15:25:00.875081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.940 qpair failed and we were unable to recover it. 00:29:08.940 [2024-07-25 15:25:00.875635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.940 [2024-07-25 15:25:00.875665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.940 qpair failed and we were unable to recover it. 00:29:08.940 [2024-07-25 15:25:00.876074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.940 [2024-07-25 15:25:00.876084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.940 qpair failed and we were unable to recover it. 00:29:08.940 [2024-07-25 15:25:00.876617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.940 [2024-07-25 15:25:00.876625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.940 qpair failed and we were unable to recover it. 
00:29:08.940 [2024-07-25 15:25:00.877140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.940 [2024-07-25 15:25:00.877148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.940 qpair failed and we were unable to recover it. 00:29:08.940 [2024-07-25 15:25:00.877693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.940 [2024-07-25 15:25:00.877722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.940 qpair failed and we were unable to recover it. 00:29:08.940 [2024-07-25 15:25:00.878196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.940 [2024-07-25 15:25:00.878217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.940 qpair failed and we were unable to recover it. 00:29:08.940 [2024-07-25 15:25:00.878767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.940 [2024-07-25 15:25:00.878796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.940 qpair failed and we were unable to recover it. 00:29:08.940 [2024-07-25 15:25:00.879273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.940 [2024-07-25 15:25:00.879294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.940 qpair failed and we were unable to recover it. 
00:29:08.940 [2024-07-25 15:25:00.879654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.940 [2024-07-25 15:25:00.879663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.940 qpair failed and we were unable to recover it. 00:29:08.940 [2024-07-25 15:25:00.880042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.940 [2024-07-25 15:25:00.880050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.940 qpair failed and we were unable to recover it. 00:29:08.940 [2024-07-25 15:25:00.880608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.940 [2024-07-25 15:25:00.880637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.940 qpair failed and we were unable to recover it. 00:29:08.940 [2024-07-25 15:25:00.880978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.940 [2024-07-25 15:25:00.880988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.940 qpair failed and we were unable to recover it. 00:29:08.940 [2024-07-25 15:25:00.881557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.940 [2024-07-25 15:25:00.881585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.940 qpair failed and we were unable to recover it. 
00:29:08.940 [2024-07-25 15:25:00.882035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.940 [2024-07-25 15:25:00.882045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.940 qpair failed and we were unable to recover it. 00:29:08.940 [2024-07-25 15:25:00.882623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.940 [2024-07-25 15:25:00.882652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.940 qpair failed and we were unable to recover it. 00:29:08.940 [2024-07-25 15:25:00.883126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.940 [2024-07-25 15:25:00.883136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.940 qpair failed and we were unable to recover it. 00:29:08.940 [2024-07-25 15:25:00.883495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.940 [2024-07-25 15:25:00.883524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.940 qpair failed and we were unable to recover it. 00:29:08.940 [2024-07-25 15:25:00.883840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.940 [2024-07-25 15:25:00.883852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.940 qpair failed and we were unable to recover it. 
00:29:08.940 [2024-07-25 15:25:00.884300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.940 [2024-07-25 15:25:00.884309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.940 qpair failed and we were unable to recover it. 00:29:08.940 [2024-07-25 15:25:00.884780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.940 [2024-07-25 15:25:00.884788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.940 qpair failed and we were unable to recover it. 00:29:08.940 [2024-07-25 15:25:00.885267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.940 [2024-07-25 15:25:00.885274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.940 qpair failed and we were unable to recover it. 00:29:08.940 [2024-07-25 15:25:00.885469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.940 [2024-07-25 15:25:00.885481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.940 qpair failed and we were unable to recover it. 00:29:08.940 [2024-07-25 15:25:00.885933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.940 [2024-07-25 15:25:00.885942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.940 qpair failed and we were unable to recover it. 
00:29:08.940 [2024-07-25 15:25:00.886395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.940 [2024-07-25 15:25:00.886403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.940 qpair failed and we were unable to recover it. 00:29:08.940 [2024-07-25 15:25:00.886869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.940 [2024-07-25 15:25:00.886878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.941 qpair failed and we were unable to recover it. 00:29:08.941 [2024-07-25 15:25:00.887331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.941 [2024-07-25 15:25:00.887340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.941 qpair failed and we were unable to recover it. 00:29:08.941 [2024-07-25 15:25:00.887805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.941 [2024-07-25 15:25:00.887813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.941 qpair failed and we were unable to recover it. 00:29:08.941 [2024-07-25 15:25:00.888315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.941 [2024-07-25 15:25:00.888323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.941 qpair failed and we were unable to recover it. 
00:29:08.941 [2024-07-25 15:25:00.888814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.941 [2024-07-25 15:25:00.888822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.941 qpair failed and we were unable to recover it. 00:29:08.941 [2024-07-25 15:25:00.889277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.941 [2024-07-25 15:25:00.889289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.941 qpair failed and we were unable to recover it. 00:29:08.941 [2024-07-25 15:25:00.889751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.941 [2024-07-25 15:25:00.889758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.941 qpair failed and we were unable to recover it. 00:29:08.941 [2024-07-25 15:25:00.890242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.941 [2024-07-25 15:25:00.890252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.941 qpair failed and we were unable to recover it. 00:29:08.941 [2024-07-25 15:25:00.890605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.941 [2024-07-25 15:25:00.890613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.941 qpair failed and we were unable to recover it. 
00:29:08.941 [2024-07-25 15:25:00.891100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.941 [2024-07-25 15:25:00.891109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.941 qpair failed and we were unable to recover it. 00:29:08.941 [2024-07-25 15:25:00.891575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.941 [2024-07-25 15:25:00.891583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.941 qpair failed and we were unable to recover it. 00:29:08.941 [2024-07-25 15:25:00.892023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.941 [2024-07-25 15:25:00.892031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.941 qpair failed and we were unable to recover it. 00:29:08.941 [2024-07-25 15:25:00.892493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.941 [2024-07-25 15:25:00.892501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.941 qpair failed and we were unable to recover it. 00:29:08.941 [2024-07-25 15:25:00.892943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.941 [2024-07-25 15:25:00.892951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.941 qpair failed and we were unable to recover it. 
00:29:08.941 [2024-07-25 15:25:00.893497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.941 [2024-07-25 15:25:00.893525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.941 qpair failed and we were unable to recover it. 00:29:08.941 [2024-07-25 15:25:00.893994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.941 [2024-07-25 15:25:00.894003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.941 qpair failed and we were unable to recover it. 00:29:08.941 [2024-07-25 15:25:00.894579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.941 [2024-07-25 15:25:00.894609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.941 qpair failed and we were unable to recover it. 00:29:08.941 [2024-07-25 15:25:00.895086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.941 [2024-07-25 15:25:00.895096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.941 qpair failed and we were unable to recover it. 00:29:08.941 [2024-07-25 15:25:00.895576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.941 [2024-07-25 15:25:00.895585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.941 qpair failed and we were unable to recover it. 
00:29:08.941 [2024-07-25 15:25:00.896036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.941 [2024-07-25 15:25:00.896044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.941 qpair failed and we were unable to recover it. 00:29:08.941 [2024-07-25 15:25:00.896599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.941 [2024-07-25 15:25:00.896628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.941 qpair failed and we were unable to recover it. 00:29:08.941 [2024-07-25 15:25:00.896948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.941 [2024-07-25 15:25:00.896958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.941 qpair failed and we were unable to recover it. 00:29:08.941 [2024-07-25 15:25:00.897531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.941 [2024-07-25 15:25:00.897560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.941 qpair failed and we were unable to recover it. 00:29:08.941 [2024-07-25 15:25:00.898035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.941 [2024-07-25 15:25:00.898044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.941 qpair failed and we were unable to recover it. 
00:29:08.941 [2024-07-25 15:25:00.898623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.941 [2024-07-25 15:25:00.898651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.941 qpair failed and we were unable to recover it. 00:29:08.941 [2024-07-25 15:25:00.899112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.941 [2024-07-25 15:25:00.899122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.941 qpair failed and we were unable to recover it. 00:29:08.941 [2024-07-25 15:25:00.899680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.941 [2024-07-25 15:25:00.899709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.941 qpair failed and we were unable to recover it. 00:29:08.941 [2024-07-25 15:25:00.900167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.941 [2024-07-25 15:25:00.900178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.941 qpair failed and we were unable to recover it. 00:29:08.941 [2024-07-25 15:25:00.900750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.941 [2024-07-25 15:25:00.900779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.941 qpair failed and we were unable to recover it. 
00:29:08.941 [2024-07-25 15:25:00.901124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.941 [2024-07-25 15:25:00.901134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.941 qpair failed and we were unable to recover it. 00:29:08.941 [2024-07-25 15:25:00.901573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.941 [2024-07-25 15:25:00.901602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.941 qpair failed and we were unable to recover it. 00:29:08.941 [2024-07-25 15:25:00.902069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.941 [2024-07-25 15:25:00.902080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.941 qpair failed and we were unable to recover it. 00:29:08.941 [2024-07-25 15:25:00.902679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.941 [2024-07-25 15:25:00.902707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.941 qpair failed and we were unable to recover it. 00:29:08.941 [2024-07-25 15:25:00.903167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.941 [2024-07-25 15:25:00.903177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.941 qpair failed and we were unable to recover it. 
00:29:08.941 [2024-07-25 15:25:00.903707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.941 [2024-07-25 15:25:00.903736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.941 qpair failed and we were unable to recover it. 00:29:08.941 [2024-07-25 15:25:00.904428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.941 [2024-07-25 15:25:00.904457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.941 qpair failed and we were unable to recover it. 00:29:08.941 [2024-07-25 15:25:00.904905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.941 [2024-07-25 15:25:00.904915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.941 qpair failed and we were unable to recover it. 00:29:08.941 [2024-07-25 15:25:00.905489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.941 [2024-07-25 15:25:00.905518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.941 qpair failed and we were unable to recover it. 00:29:08.941 [2024-07-25 15:25:00.905981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.941 [2024-07-25 15:25:00.905990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.941 qpair failed and we were unable to recover it. 
00:29:08.941 [2024-07-25 15:25:00.906538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.941 [2024-07-25 15:25:00.906567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.941 qpair failed and we were unable to recover it. 00:29:08.941 [2024-07-25 15:25:00.907033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.941 [2024-07-25 15:25:00.907043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.941 qpair failed and we were unable to recover it. 00:29:08.941 [2024-07-25 15:25:00.907535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.941 [2024-07-25 15:25:00.907565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.941 qpair failed and we were unable to recover it. 00:29:08.941 [2024-07-25 15:25:00.908022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.941 [2024-07-25 15:25:00.908031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.941 qpair failed and we were unable to recover it. 00:29:08.941 [2024-07-25 15:25:00.908472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.941 [2024-07-25 15:25:00.908500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.941 qpair failed and we were unable to recover it. 
00:29:08.941 [2024-07-25 15:25:00.908968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.941 [2024-07-25 15:25:00.908977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.941 qpair failed and we were unable to recover it. 00:29:08.941 [2024-07-25 15:25:00.909477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.941 [2024-07-25 15:25:00.909506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.941 qpair failed and we were unable to recover it. 00:29:08.941 [2024-07-25 15:25:00.909890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.941 [2024-07-25 15:25:00.909901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.941 qpair failed and we were unable to recover it. 00:29:08.941 [2024-07-25 15:25:00.910488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.941 [2024-07-25 15:25:00.910517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.941 qpair failed and we were unable to recover it. 00:29:08.941 [2024-07-25 15:25:00.911003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.941 [2024-07-25 15:25:00.911013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.941 qpair failed and we were unable to recover it. 
00:29:08.941 [2024-07-25 15:25:00.911588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.941 [2024-07-25 15:25:00.911617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.941 qpair failed and we were unable to recover it. 00:29:08.941 [2024-07-25 15:25:00.912087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.941 [2024-07-25 15:25:00.912097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.941 qpair failed and we were unable to recover it. 00:29:08.941 [2024-07-25 15:25:00.912474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.941 [2024-07-25 15:25:00.912483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.941 qpair failed and we were unable to recover it. 00:29:08.941 [2024-07-25 15:25:00.912939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.941 [2024-07-25 15:25:00.912948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.941 qpair failed and we were unable to recover it. 00:29:08.941 [2024-07-25 15:25:00.913511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.941 [2024-07-25 15:25:00.913539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.941 qpair failed and we were unable to recover it. 
00:29:08.941 [2024-07-25 15:25:00.913908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.941 [2024-07-25 15:25:00.913917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.941 qpair failed and we were unable to recover it. 00:29:08.941 [2024-07-25 15:25:00.914271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.941 [2024-07-25 15:25:00.914280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.941 qpair failed and we were unable to recover it. 00:29:08.941 [2024-07-25 15:25:00.914733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.941 [2024-07-25 15:25:00.914741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.941 qpair failed and we were unable to recover it. 00:29:08.941 [2024-07-25 15:25:00.915232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.941 [2024-07-25 15:25:00.915241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.941 qpair failed and we were unable to recover it. 00:29:08.941 [2024-07-25 15:25:00.915707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.941 [2024-07-25 15:25:00.915715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.941 qpair failed and we were unable to recover it. 
00:29:08.941 [2024-07-25 15:25:00.916073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.941 [2024-07-25 15:25:00.916081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.941 qpair failed and we were unable to recover it. 00:29:08.941 [2024-07-25 15:25:00.916540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.942 [2024-07-25 15:25:00.916548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.942 qpair failed and we were unable to recover it. 00:29:08.942 [2024-07-25 15:25:00.917000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.942 [2024-07-25 15:25:00.917008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.942 qpair failed and we were unable to recover it. 00:29:08.942 [2024-07-25 15:25:00.917556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.942 [2024-07-25 15:25:00.917586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.942 qpair failed and we were unable to recover it. 00:29:08.942 [2024-07-25 15:25:00.918060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.942 [2024-07-25 15:25:00.918070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.942 qpair failed and we were unable to recover it. 
00:29:08.942 [2024-07-25 15:25:00.918631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.942 [2024-07-25 15:25:00.918660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.942 qpair failed and we were unable to recover it.
00:29:08.942 [2024-07-25 15:25:00.919158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.942 [2024-07-25 15:25:00.919168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.942 qpair failed and we were unable to recover it.
00:29:08.942 [2024-07-25 15:25:00.919713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.942 [2024-07-25 15:25:00.919742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.942 qpair failed and we were unable to recover it.
00:29:08.942 [2024-07-25 15:25:00.920209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.942 [2024-07-25 15:25:00.920219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.942 qpair failed and we were unable to recover it.
00:29:08.942 [2024-07-25 15:25:00.920805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.942 [2024-07-25 15:25:00.920834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.942 qpair failed and we were unable to recover it.
00:29:08.942 [2024-07-25 15:25:00.921043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.942 [2024-07-25 15:25:00.921054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.942 qpair failed and we were unable to recover it.
00:29:08.942 [2024-07-25 15:25:00.921603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.942 [2024-07-25 15:25:00.921632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.942 qpair failed and we were unable to recover it.
00:29:08.942 [2024-07-25 15:25:00.922137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.942 [2024-07-25 15:25:00.922147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.942 qpair failed and we were unable to recover it.
00:29:08.942 [2024-07-25 15:25:00.922696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.942 [2024-07-25 15:25:00.922728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.942 qpair failed and we were unable to recover it.
00:29:08.942 [2024-07-25 15:25:00.923419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.942 [2024-07-25 15:25:00.923447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.942 qpair failed and we were unable to recover it.
00:29:08.942 [2024-07-25 15:25:00.923916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.942 [2024-07-25 15:25:00.923925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.942 qpair failed and we were unable to recover it.
00:29:08.942 [2024-07-25 15:25:00.924423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.942 [2024-07-25 15:25:00.924452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.942 qpair failed and we were unable to recover it.
00:29:08.942 [2024-07-25 15:25:00.924918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.942 [2024-07-25 15:25:00.924928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.942 qpair failed and we were unable to recover it.
00:29:08.942 [2024-07-25 15:25:00.925528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.942 [2024-07-25 15:25:00.925557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.942 qpair failed and we were unable to recover it.
00:29:08.942 [2024-07-25 15:25:00.925904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.942 [2024-07-25 15:25:00.925914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.942 qpair failed and we were unable to recover it.
00:29:08.942 [2024-07-25 15:25:00.926346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.942 [2024-07-25 15:25:00.926354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.942 qpair failed and we were unable to recover it.
00:29:08.942 [2024-07-25 15:25:00.926806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.942 [2024-07-25 15:25:00.926815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.942 qpair failed and we were unable to recover it.
00:29:08.942 [2024-07-25 15:25:00.927295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.942 [2024-07-25 15:25:00.927303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.942 qpair failed and we were unable to recover it.
00:29:08.942 [2024-07-25 15:25:00.927757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.942 [2024-07-25 15:25:00.927765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.942 qpair failed and we were unable to recover it.
00:29:08.942 [2024-07-25 15:25:00.928268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.942 [2024-07-25 15:25:00.928276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.942 qpair failed and we were unable to recover it.
00:29:08.942 [2024-07-25 15:25:00.928722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.942 [2024-07-25 15:25:00.928730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.942 qpair failed and we were unable to recover it.
00:29:08.942 [2024-07-25 15:25:00.929178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.942 [2024-07-25 15:25:00.929186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.942 qpair failed and we were unable to recover it.
00:29:08.942 [2024-07-25 15:25:00.929653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.942 [2024-07-25 15:25:00.929663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.942 qpair failed and we were unable to recover it.
00:29:08.942 [2024-07-25 15:25:00.930106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.942 [2024-07-25 15:25:00.930114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.942 qpair failed and we were unable to recover it.
00:29:08.942 [2024-07-25 15:25:00.930569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.942 [2024-07-25 15:25:00.930577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.942 qpair failed and we were unable to recover it.
00:29:08.942 [2024-07-25 15:25:00.931033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.942 [2024-07-25 15:25:00.931042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.942 qpair failed and we were unable to recover it.
00:29:08.942 [2024-07-25 15:25:00.931591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.942 [2024-07-25 15:25:00.931620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.942 qpair failed and we were unable to recover it.
00:29:08.942 [2024-07-25 15:25:00.932084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.942 [2024-07-25 15:25:00.932094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.942 qpair failed and we were unable to recover it.
00:29:08.942 [2024-07-25 15:25:00.932566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.942 [2024-07-25 15:25:00.932575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.942 qpair failed and we were unable to recover it.
00:29:08.942 [2024-07-25 15:25:00.933109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.942 [2024-07-25 15:25:00.933117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.942 qpair failed and we were unable to recover it.
00:29:08.942 [2024-07-25 15:25:00.933566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.942 [2024-07-25 15:25:00.933574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.942 qpair failed and we were unable to recover it.
00:29:08.942 [2024-07-25 15:25:00.933972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.942 [2024-07-25 15:25:00.933981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.942 qpair failed and we were unable to recover it.
00:29:08.942 [2024-07-25 15:25:00.934532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.942 [2024-07-25 15:25:00.934561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.942 qpair failed and we were unable to recover it.
00:29:08.942 [2024-07-25 15:25:00.934924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.942 [2024-07-25 15:25:00.934935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.942 qpair failed and we were unable to recover it.
00:29:08.942 [2024-07-25 15:25:00.935570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.942 [2024-07-25 15:25:00.935599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.942 qpair failed and we were unable to recover it.
00:29:08.942 [2024-07-25 15:25:00.936082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.942 [2024-07-25 15:25:00.936092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.942 qpair failed and we were unable to recover it.
00:29:08.942 [2024-07-25 15:25:00.936455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.942 [2024-07-25 15:25:00.936464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.942 qpair failed and we were unable to recover it.
00:29:08.942 [2024-07-25 15:25:00.936921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.942 [2024-07-25 15:25:00.936929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.942 qpair failed and we were unable to recover it.
00:29:08.942 [2024-07-25 15:25:00.937517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.942 [2024-07-25 15:25:00.937545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.942 qpair failed and we were unable to recover it.
00:29:08.942 [2024-07-25 15:25:00.938024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.942 [2024-07-25 15:25:00.938034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.942 qpair failed and we were unable to recover it.
00:29:08.942 [2024-07-25 15:25:00.938504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.942 [2024-07-25 15:25:00.938532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.942 qpair failed and we were unable to recover it.
00:29:08.942 [2024-07-25 15:25:00.939011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.942 [2024-07-25 15:25:00.939021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.942 qpair failed and we were unable to recover it.
00:29:08.942 [2024-07-25 15:25:00.939540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.942 [2024-07-25 15:25:00.939568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.942 qpair failed and we were unable to recover it.
00:29:08.942 [2024-07-25 15:25:00.940028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.942 [2024-07-25 15:25:00.940038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.942 qpair failed and we were unable to recover it.
00:29:08.942 [2024-07-25 15:25:00.940607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.942 [2024-07-25 15:25:00.940636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.942 qpair failed and we were unable to recover it.
00:29:08.943 [2024-07-25 15:25:00.941096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.943 [2024-07-25 15:25:00.941106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.943 qpair failed and we were unable to recover it.
00:29:08.943 [2024-07-25 15:25:00.941672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.943 [2024-07-25 15:25:00.941702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.943 qpair failed and we were unable to recover it.
00:29:08.943 [2024-07-25 15:25:00.942078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.943 [2024-07-25 15:25:00.942088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.943 qpair failed and we were unable to recover it.
00:29:08.943 [2024-07-25 15:25:00.942326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.943 [2024-07-25 15:25:00.942339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.943 qpair failed and we were unable to recover it.
00:29:08.943 [2024-07-25 15:25:00.942717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.943 [2024-07-25 15:25:00.942725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.943 qpair failed and we were unable to recover it.
00:29:08.943 [2024-07-25 15:25:00.942938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.943 [2024-07-25 15:25:00.942947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.943 qpair failed and we were unable to recover it.
00:29:08.943 [2024-07-25 15:25:00.943400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.943 [2024-07-25 15:25:00.943418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.943 qpair failed and we were unable to recover it.
00:29:08.943 [2024-07-25 15:25:00.943873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.943 [2024-07-25 15:25:00.943881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.943 qpair failed and we were unable to recover it.
00:29:08.943 [2024-07-25 15:25:00.944371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.943 [2024-07-25 15:25:00.944379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.943 qpair failed and we were unable to recover it.
00:29:08.943 [2024-07-25 15:25:00.944886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.943 [2024-07-25 15:25:00.944894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.943 qpair failed and we were unable to recover it.
00:29:08.943 [2024-07-25 15:25:00.945345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.943 [2024-07-25 15:25:00.945353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.943 qpair failed and we were unable to recover it.
00:29:08.943 [2024-07-25 15:25:00.945843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.943 [2024-07-25 15:25:00.945852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.943 qpair failed and we were unable to recover it.
00:29:08.943 [2024-07-25 15:25:00.946315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.943 [2024-07-25 15:25:00.946324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.943 qpair failed and we were unable to recover it.
00:29:08.943 [2024-07-25 15:25:00.946643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.943 [2024-07-25 15:25:00.946652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.943 qpair failed and we were unable to recover it.
00:29:08.943 [2024-07-25 15:25:00.947123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.943 [2024-07-25 15:25:00.947131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.943 qpair failed and we were unable to recover it.
00:29:08.943 [2024-07-25 15:25:00.947682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.943 [2024-07-25 15:25:00.947691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.943 qpair failed and we were unable to recover it.
00:29:08.943 [2024-07-25 15:25:00.948138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.943 [2024-07-25 15:25:00.948146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.943 qpair failed and we were unable to recover it.
00:29:08.943 [2024-07-25 15:25:00.948606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.943 [2024-07-25 15:25:00.948614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.943 qpair failed and we were unable to recover it.
00:29:08.943 [2024-07-25 15:25:00.949088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.943 [2024-07-25 15:25:00.949096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.943 qpair failed and we were unable to recover it.
00:29:08.943 [2024-07-25 15:25:00.949572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.943 [2024-07-25 15:25:00.949580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.943 qpair failed and we were unable to recover it.
00:29:08.943 [2024-07-25 15:25:00.950045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.943 [2024-07-25 15:25:00.950052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.943 qpair failed and we were unable to recover it.
00:29:08.943 [2024-07-25 15:25:00.950476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.943 [2024-07-25 15:25:00.950505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.943 qpair failed and we were unable to recover it.
00:29:08.943 [2024-07-25 15:25:00.950962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.943 [2024-07-25 15:25:00.950973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.943 qpair failed and we were unable to recover it.
00:29:08.943 [2024-07-25 15:25:00.951498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.943 [2024-07-25 15:25:00.951528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.943 qpair failed and we were unable to recover it.
00:29:08.943 [2024-07-25 15:25:00.952011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.943 [2024-07-25 15:25:00.952020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.943 qpair failed and we were unable to recover it.
00:29:08.943 [2024-07-25 15:25:00.952603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.943 [2024-07-25 15:25:00.952632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.943 qpair failed and we were unable to recover it.
00:29:08.943 [2024-07-25 15:25:00.953112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.943 [2024-07-25 15:25:00.953121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.943 qpair failed and we were unable to recover it.
00:29:08.943 [2024-07-25 15:25:00.953647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.943 [2024-07-25 15:25:00.953676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.943 qpair failed and we were unable to recover it.
00:29:08.943 [2024-07-25 15:25:00.954191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.943 [2024-07-25 15:25:00.954206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.943 qpair failed and we were unable to recover it.
00:29:08.943 [2024-07-25 15:25:00.954697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.943 [2024-07-25 15:25:00.954706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.943 qpair failed and we were unable to recover it.
00:29:08.943 [2024-07-25 15:25:00.955161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.943 [2024-07-25 15:25:00.955170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.943 qpair failed and we were unable to recover it.
00:29:08.943 [2024-07-25 15:25:00.955720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.943 [2024-07-25 15:25:00.955749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.943 qpair failed and we were unable to recover it.
00:29:08.943 [2024-07-25 15:25:00.956226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.943 [2024-07-25 15:25:00.956245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.943 qpair failed and we were unable to recover it.
00:29:08.943 [2024-07-25 15:25:00.956697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.943 [2024-07-25 15:25:00.956706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.943 qpair failed and we were unable to recover it.
00:29:08.943 [2024-07-25 15:25:00.957158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.943 [2024-07-25 15:25:00.957166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.943 qpair failed and we were unable to recover it.
00:29:08.943 [2024-07-25 15:25:00.957531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.943 [2024-07-25 15:25:00.957540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.943 qpair failed and we were unable to recover it.
00:29:08.943 [2024-07-25 15:25:00.958005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.943 [2024-07-25 15:25:00.958013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.943 qpair failed and we were unable to recover it.
00:29:08.943 [2024-07-25 15:25:00.958570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.943 [2024-07-25 15:25:00.958599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.943 qpair failed and we were unable to recover it.
00:29:08.943 [2024-07-25 15:25:00.959072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.943 [2024-07-25 15:25:00.959082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.943 qpair failed and we were unable to recover it.
00:29:08.943 [2024-07-25 15:25:00.959570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.943 [2024-07-25 15:25:00.959599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.943 qpair failed and we were unable to recover it.
00:29:08.943 [2024-07-25 15:25:00.960043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.943 [2024-07-25 15:25:00.960053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.943 qpair failed and we were unable to recover it.
00:29:08.943 [2024-07-25 15:25:00.960481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.943 [2024-07-25 15:25:00.960509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.943 qpair failed and we were unable to recover it.
00:29:08.943 [2024-07-25 15:25:00.960864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.943 [2024-07-25 15:25:00.960875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.943 qpair failed and we were unable to recover it.
00:29:08.943 [2024-07-25 15:25:00.961467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.943 [2024-07-25 15:25:00.961499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.943 qpair failed and we were unable to recover it.
00:29:08.943 [2024-07-25 15:25:00.961972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.943 [2024-07-25 15:25:00.961981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.943 qpair failed and we were unable to recover it.
00:29:08.943 [2024-07-25 15:25:00.962492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.943 [2024-07-25 15:25:00.962521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.943 qpair failed and we were unable to recover it.
00:29:08.943 [2024-07-25 15:25:00.963016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.943 [2024-07-25 15:25:00.963025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.943 qpair failed and we were unable to recover it.
00:29:08.943 [2024-07-25 15:25:00.963493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.943 [2024-07-25 15:25:00.963522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.943 qpair failed and we were unable to recover it.
00:29:08.943 [2024-07-25 15:25:00.963986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.943 [2024-07-25 15:25:00.963995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.943 qpair failed and we were unable to recover it.
00:29:08.943 [2024-07-25 15:25:00.964596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.943 [2024-07-25 15:25:00.964624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.943 qpair failed and we were unable to recover it.
00:29:08.943 [2024-07-25 15:25:00.965075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.943 [2024-07-25 15:25:00.965085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.943 qpair failed and we were unable to recover it. 00:29:08.943 [2024-07-25 15:25:00.965557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.943 [2024-07-25 15:25:00.965566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.943 qpair failed and we were unable to recover it. 00:29:08.943 [2024-07-25 15:25:00.966026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.943 [2024-07-25 15:25:00.966035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.943 qpair failed and we were unable to recover it. 00:29:08.943 [2024-07-25 15:25:00.966643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.943 [2024-07-25 15:25:00.966672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.943 qpair failed and we were unable to recover it. 00:29:08.943 [2024-07-25 15:25:00.967127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.943 [2024-07-25 15:25:00.967137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.943 qpair failed and we were unable to recover it. 
00:29:08.943 [2024-07-25 15:25:00.967690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.943 [2024-07-25 15:25:00.967719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.944 qpair failed and we were unable to recover it. 00:29:08.944 [2024-07-25 15:25:00.968180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.944 [2024-07-25 15:25:00.968189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.944 qpair failed and we were unable to recover it. 00:29:08.944 [2024-07-25 15:25:00.968755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.944 [2024-07-25 15:25:00.968784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.944 qpair failed and we were unable to recover it. 00:29:08.944 [2024-07-25 15:25:00.969419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.944 [2024-07-25 15:25:00.969448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.944 qpair failed and we were unable to recover it. 00:29:08.944 [2024-07-25 15:25:00.969929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.944 [2024-07-25 15:25:00.969939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.944 qpair failed and we were unable to recover it. 
00:29:08.944 [2024-07-25 15:25:00.970590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.944 [2024-07-25 15:25:00.970619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.944 qpair failed and we were unable to recover it. 00:29:08.944 [2024-07-25 15:25:00.971093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.944 [2024-07-25 15:25:00.971104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.944 qpair failed and we were unable to recover it. 00:29:08.944 [2024-07-25 15:25:00.971570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.944 [2024-07-25 15:25:00.971578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.944 qpair failed and we were unable to recover it. 00:29:08.944 [2024-07-25 15:25:00.971806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.944 [2024-07-25 15:25:00.971819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.944 qpair failed and we were unable to recover it. 00:29:08.944 [2024-07-25 15:25:00.972284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.944 [2024-07-25 15:25:00.972292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.944 qpair failed and we were unable to recover it. 
00:29:08.944 [2024-07-25 15:25:00.972747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.944 [2024-07-25 15:25:00.972755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.944 qpair failed and we were unable to recover it. 00:29:08.944 [2024-07-25 15:25:00.973118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.944 [2024-07-25 15:25:00.973126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.944 qpair failed and we were unable to recover it. 00:29:08.944 [2024-07-25 15:25:00.973520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.944 [2024-07-25 15:25:00.973528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.944 qpair failed and we were unable to recover it. 00:29:08.944 [2024-07-25 15:25:00.973985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.944 [2024-07-25 15:25:00.973993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.944 qpair failed and we were unable to recover it. 00:29:08.944 [2024-07-25 15:25:00.974463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.944 [2024-07-25 15:25:00.974472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.944 qpair failed and we were unable to recover it. 
00:29:08.944 [2024-07-25 15:25:00.974940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.944 [2024-07-25 15:25:00.974949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.944 qpair failed and we were unable to recover it. 00:29:08.944 [2024-07-25 15:25:00.975517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.944 [2024-07-25 15:25:00.975547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.944 qpair failed and we were unable to recover it. 00:29:08.944 [2024-07-25 15:25:00.976014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.944 [2024-07-25 15:25:00.976023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.944 qpair failed and we were unable to recover it. 00:29:08.944 [2024-07-25 15:25:00.976602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.944 [2024-07-25 15:25:00.976631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.944 qpair failed and we were unable to recover it. 00:29:08.944 [2024-07-25 15:25:00.976984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.944 [2024-07-25 15:25:00.976994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.944 qpair failed and we were unable to recover it. 
00:29:08.944 [2024-07-25 15:25:00.977549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.944 [2024-07-25 15:25:00.977578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.944 qpair failed and we were unable to recover it. 00:29:08.944 [2024-07-25 15:25:00.978042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.944 [2024-07-25 15:25:00.978052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.944 qpair failed and we were unable to recover it. 00:29:08.944 [2024-07-25 15:25:00.978630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.944 [2024-07-25 15:25:00.978659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.944 qpair failed and we were unable to recover it. 00:29:08.944 [2024-07-25 15:25:00.979123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.944 [2024-07-25 15:25:00.979133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.944 qpair failed and we were unable to recover it. 00:29:08.944 [2024-07-25 15:25:00.979780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.944 [2024-07-25 15:25:00.979808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.944 qpair failed and we were unable to recover it. 
00:29:08.944 [2024-07-25 15:25:00.980429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.944 [2024-07-25 15:25:00.980457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.944 qpair failed and we were unable to recover it. 00:29:08.944 [2024-07-25 15:25:00.980917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.944 [2024-07-25 15:25:00.980927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.944 qpair failed and we were unable to recover it. 00:29:08.944 [2024-07-25 15:25:00.981477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.944 [2024-07-25 15:25:00.981506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.944 qpair failed and we were unable to recover it. 00:29:08.944 [2024-07-25 15:25:00.981983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.944 [2024-07-25 15:25:00.981997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.944 qpair failed and we were unable to recover it. 00:29:08.944 [2024-07-25 15:25:00.982442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.944 [2024-07-25 15:25:00.982471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.944 qpair failed and we were unable to recover it. 
00:29:08.944 [2024-07-25 15:25:00.982937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.944 [2024-07-25 15:25:00.982946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.944 qpair failed and we were unable to recover it. 00:29:08.944 [2024-07-25 15:25:00.983520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.944 [2024-07-25 15:25:00.983550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.944 qpair failed and we were unable to recover it. 00:29:08.944 [2024-07-25 15:25:00.984098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.944 [2024-07-25 15:25:00.984107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.944 qpair failed and we were unable to recover it. 00:29:08.944 [2024-07-25 15:25:00.984384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.944 [2024-07-25 15:25:00.984393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.944 qpair failed and we were unable to recover it. 00:29:08.944 [2024-07-25 15:25:00.984811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.944 [2024-07-25 15:25:00.984820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.944 qpair failed and we were unable to recover it. 
00:29:08.944 [2024-07-25 15:25:00.985279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.944 [2024-07-25 15:25:00.985287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.944 qpair failed and we were unable to recover it. 00:29:08.944 [2024-07-25 15:25:00.985739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.944 [2024-07-25 15:25:00.985747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.944 qpair failed and we were unable to recover it. 00:29:08.944 [2024-07-25 15:25:00.986223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.944 [2024-07-25 15:25:00.986231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.944 qpair failed and we were unable to recover it. 00:29:08.944 [2024-07-25 15:25:00.986702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.944 [2024-07-25 15:25:00.986710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.944 qpair failed and we were unable to recover it. 00:29:08.944 [2024-07-25 15:25:00.987165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.944 [2024-07-25 15:25:00.987173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.944 qpair failed and we were unable to recover it. 
00:29:08.944 [2024-07-25 15:25:00.987662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.944 [2024-07-25 15:25:00.987671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.944 qpair failed and we were unable to recover it. 00:29:08.944 [2024-07-25 15:25:00.988126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.944 [2024-07-25 15:25:00.988133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.944 qpair failed and we were unable to recover it. 00:29:08.944 [2024-07-25 15:25:00.988639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.944 [2024-07-25 15:25:00.988648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.944 qpair failed and we were unable to recover it. 00:29:08.944 [2024-07-25 15:25:00.989084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.944 [2024-07-25 15:25:00.989092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.944 qpair failed and we were unable to recover it. 00:29:08.944 [2024-07-25 15:25:00.989613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.944 [2024-07-25 15:25:00.989622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.944 qpair failed and we were unable to recover it. 
00:29:08.944 [2024-07-25 15:25:00.990094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.944 [2024-07-25 15:25:00.990102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.944 qpair failed and we were unable to recover it. 00:29:08.944 [2024-07-25 15:25:00.990568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.944 [2024-07-25 15:25:00.990577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.944 qpair failed and we were unable to recover it. 00:29:08.944 [2024-07-25 15:25:00.990799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.944 [2024-07-25 15:25:00.990812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.944 qpair failed and we were unable to recover it. 00:29:08.944 [2024-07-25 15:25:00.991154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.944 [2024-07-25 15:25:00.991162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.944 qpair failed and we were unable to recover it. 00:29:08.944 [2024-07-25 15:25:00.991507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.944 [2024-07-25 15:25:00.991516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.944 qpair failed and we were unable to recover it. 
00:29:08.944 [2024-07-25 15:25:00.991967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.944 [2024-07-25 15:25:00.991975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.944 qpair failed and we were unable to recover it. 00:29:08.944 [2024-07-25 15:25:00.992506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.944 [2024-07-25 15:25:00.992535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.944 qpair failed and we were unable to recover it. 00:29:08.944 [2024-07-25 15:25:00.993020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.944 [2024-07-25 15:25:00.993031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.944 qpair failed and we were unable to recover it. 00:29:08.944 [2024-07-25 15:25:00.993259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.944 [2024-07-25 15:25:00.993274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.944 qpair failed and we were unable to recover it. 00:29:08.944 [2024-07-25 15:25:00.993735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.944 [2024-07-25 15:25:00.993744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.944 qpair failed and we were unable to recover it. 
00:29:08.944 [2024-07-25 15:25:00.994227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.944 [2024-07-25 15:25:00.994236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.944 qpair failed and we were unable to recover it. 00:29:08.944 [2024-07-25 15:25:00.994696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.944 [2024-07-25 15:25:00.994704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.944 qpair failed and we were unable to recover it. 00:29:08.944 [2024-07-25 15:25:00.994922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.944 [2024-07-25 15:25:00.994931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.944 qpair failed and we were unable to recover it. 00:29:08.944 [2024-07-25 15:25:00.995408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.945 [2024-07-25 15:25:00.995417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.945 qpair failed and we were unable to recover it. 00:29:08.945 [2024-07-25 15:25:00.995860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.945 [2024-07-25 15:25:00.995868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.945 qpair failed and we were unable to recover it. 
00:29:08.945 [2024-07-25 15:25:00.996344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.945 [2024-07-25 15:25:00.996353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.945 qpair failed and we were unable to recover it. 00:29:08.945 [2024-07-25 15:25:00.996790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.945 [2024-07-25 15:25:00.996799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.945 qpair failed and we were unable to recover it. 00:29:08.945 [2024-07-25 15:25:00.997249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.945 [2024-07-25 15:25:00.997258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.945 qpair failed and we were unable to recover it. 00:29:08.945 [2024-07-25 15:25:00.997736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.945 [2024-07-25 15:25:00.997744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.945 qpair failed and we were unable to recover it. 00:29:08.945 [2024-07-25 15:25:00.998219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.945 [2024-07-25 15:25:00.998228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.945 qpair failed and we were unable to recover it. 
00:29:08.945 [2024-07-25 15:25:00.998569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.945 [2024-07-25 15:25:00.998577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.945 qpair failed and we were unable to recover it. 00:29:08.945 [2024-07-25 15:25:00.999023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.945 [2024-07-25 15:25:00.999031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.945 qpair failed and we were unable to recover it. 00:29:08.945 [2024-07-25 15:25:00.999488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.945 [2024-07-25 15:25:00.999496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.945 qpair failed and we were unable to recover it. 00:29:08.945 [2024-07-25 15:25:00.999939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.945 [2024-07-25 15:25:00.999949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.945 qpair failed and we were unable to recover it. 00:29:08.945 [2024-07-25 15:25:01.000419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.945 [2024-07-25 15:25:01.000428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.945 qpair failed and we were unable to recover it. 
00:29:08.945 [2024-07-25 15:25:01.000909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.945 [2024-07-25 15:25:01.000917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.945 qpair failed and we were unable to recover it. 00:29:08.945 [2024-07-25 15:25:01.001421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.945 [2024-07-25 15:25:01.001450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.945 qpair failed and we were unable to recover it. 00:29:08.945 [2024-07-25 15:25:01.001916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.945 [2024-07-25 15:25:01.001926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.945 qpair failed and we were unable to recover it. 00:29:08.945 [2024-07-25 15:25:01.002487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.945 [2024-07-25 15:25:01.002517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.945 qpair failed and we were unable to recover it. 00:29:08.945 [2024-07-25 15:25:01.002991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.945 [2024-07-25 15:25:01.003001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.945 qpair failed and we were unable to recover it. 
00:29:08.945 [2024-07-25 15:25:01.003473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.945 [2024-07-25 15:25:01.003502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.945 qpair failed and we were unable to recover it. 00:29:08.945 [2024-07-25 15:25:01.003952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.945 [2024-07-25 15:25:01.003962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.945 qpair failed and we were unable to recover it. 00:29:08.945 [2024-07-25 15:25:01.004513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.945 [2024-07-25 15:25:01.004541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.945 qpair failed and we were unable to recover it. 00:29:08.945 [2024-07-25 15:25:01.005010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.945 [2024-07-25 15:25:01.005020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.945 qpair failed and we were unable to recover it. 00:29:08.945 [2024-07-25 15:25:01.005575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.945 [2024-07-25 15:25:01.005604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.945 qpair failed and we were unable to recover it. 
00:29:08.945 [2024-07-25 15:25:01.006082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.945 [2024-07-25 15:25:01.006092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:08.945 qpair failed and we were unable to recover it.
[... the same three-record sequence (posix.c:1023:posix_sock_create connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats continuously with successive in-log timestamps from 15:25:01.006542 through 15:25:01.061129 ...]
00:29:08.947 [2024-07-25 15:25:01.061579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.947 [2024-07-25 15:25:01.061588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.947 qpair failed and we were unable to recover it. 00:29:08.947 [2024-07-25 15:25:01.062081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.947 [2024-07-25 15:25:01.062090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.947 qpair failed and we were unable to recover it. 00:29:08.947 [2024-07-25 15:25:01.062469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.947 [2024-07-25 15:25:01.062478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.947 qpair failed and we were unable to recover it. 00:29:08.947 [2024-07-25 15:25:01.062955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.947 [2024-07-25 15:25:01.062964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.947 qpair failed and we were unable to recover it. 00:29:08.947 [2024-07-25 15:25:01.063562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.947 [2024-07-25 15:25:01.063591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.947 qpair failed and we were unable to recover it. 
00:29:08.947 [2024-07-25 15:25:01.064050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.947 [2024-07-25 15:25:01.064060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.947 qpair failed and we were unable to recover it. 00:29:08.947 [2024-07-25 15:25:01.064611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.947 [2024-07-25 15:25:01.064639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.947 qpair failed and we were unable to recover it. 00:29:08.947 [2024-07-25 15:25:01.065106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.947 [2024-07-25 15:25:01.065117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.947 qpair failed and we were unable to recover it. 00:29:08.947 [2024-07-25 15:25:01.065673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.947 [2024-07-25 15:25:01.065702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.947 qpair failed and we were unable to recover it. 00:29:08.947 [2024-07-25 15:25:01.066171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.947 [2024-07-25 15:25:01.066181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.947 qpair failed and we were unable to recover it. 
00:29:08.947 [2024-07-25 15:25:01.066723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.947 [2024-07-25 15:25:01.066752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.947 qpair failed and we were unable to recover it. 00:29:08.947 [2024-07-25 15:25:01.067437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.947 [2024-07-25 15:25:01.067466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.947 qpair failed and we were unable to recover it. 00:29:08.947 [2024-07-25 15:25:01.067972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.947 [2024-07-25 15:25:01.067982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.947 qpair failed and we were unable to recover it. 00:29:08.947 [2024-07-25 15:25:01.068545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.947 [2024-07-25 15:25:01.068574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.947 qpair failed and we were unable to recover it. 00:29:08.947 [2024-07-25 15:25:01.069022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.947 [2024-07-25 15:25:01.069032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.947 qpair failed and we were unable to recover it. 
00:29:08.947 [2024-07-25 15:25:01.069605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.947 [2024-07-25 15:25:01.069634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.947 qpair failed and we were unable to recover it. 00:29:08.947 [2024-07-25 15:25:01.070025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.947 [2024-07-25 15:25:01.070035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.947 qpair failed and we were unable to recover it. 00:29:08.947 [2024-07-25 15:25:01.070625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.947 [2024-07-25 15:25:01.070654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.947 qpair failed and we were unable to recover it. 00:29:08.947 [2024-07-25 15:25:01.071122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.947 [2024-07-25 15:25:01.071132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.947 qpair failed and we were unable to recover it. 00:29:08.947 [2024-07-25 15:25:01.071707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.947 [2024-07-25 15:25:01.071736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.947 qpair failed and we were unable to recover it. 
00:29:08.947 [2024-07-25 15:25:01.072212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.947 [2024-07-25 15:25:01.072223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.947 qpair failed and we were unable to recover it. 00:29:08.947 [2024-07-25 15:25:01.072826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.947 [2024-07-25 15:25:01.072855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.947 qpair failed and we were unable to recover it. 00:29:08.947 [2024-07-25 15:25:01.073451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.947 [2024-07-25 15:25:01.073480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.947 qpair failed and we were unable to recover it. 00:29:08.947 [2024-07-25 15:25:01.073959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.947 [2024-07-25 15:25:01.073969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.947 qpair failed and we were unable to recover it. 00:29:08.947 [2024-07-25 15:25:01.074211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.947 [2024-07-25 15:25:01.074224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.947 qpair failed and we were unable to recover it. 
00:29:08.947 [2024-07-25 15:25:01.074769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.947 [2024-07-25 15:25:01.074798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.947 qpair failed and we were unable to recover it. 00:29:08.947 [2024-07-25 15:25:01.075431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.947 [2024-07-25 15:25:01.075461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.947 qpair failed and we were unable to recover it. 00:29:08.947 [2024-07-25 15:25:01.075925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.947 [2024-07-25 15:25:01.075935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.947 qpair failed and we were unable to recover it. 00:29:08.947 [2024-07-25 15:25:01.076421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.947 [2024-07-25 15:25:01.076450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.947 qpair failed and we were unable to recover it. 00:29:08.947 [2024-07-25 15:25:01.076936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.947 [2024-07-25 15:25:01.076946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.947 qpair failed and we were unable to recover it. 
00:29:08.947 [2024-07-25 15:25:01.077430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.947 [2024-07-25 15:25:01.077460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.947 qpair failed and we were unable to recover it. 00:29:08.947 [2024-07-25 15:25:01.077937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.947 [2024-07-25 15:25:01.077948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.947 qpair failed and we were unable to recover it. 00:29:08.947 [2024-07-25 15:25:01.078504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.947 [2024-07-25 15:25:01.078533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.947 qpair failed and we were unable to recover it. 00:29:08.947 [2024-07-25 15:25:01.078997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.947 [2024-07-25 15:25:01.079010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.947 qpair failed and we were unable to recover it. 00:29:08.947 [2024-07-25 15:25:01.079491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.947 [2024-07-25 15:25:01.079520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.947 qpair failed and we were unable to recover it. 
00:29:08.947 [2024-07-25 15:25:01.079998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.947 [2024-07-25 15:25:01.080008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.947 qpair failed and we were unable to recover it. 00:29:08.947 [2024-07-25 15:25:01.080248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.947 [2024-07-25 15:25:01.080262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.947 qpair failed and we were unable to recover it. 00:29:08.948 [2024-07-25 15:25:01.080749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.948 [2024-07-25 15:25:01.080758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.948 qpair failed and we were unable to recover it. 00:29:08.948 [2024-07-25 15:25:01.081223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.948 [2024-07-25 15:25:01.081231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.948 qpair failed and we were unable to recover it. 00:29:08.948 [2024-07-25 15:25:01.081715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.948 [2024-07-25 15:25:01.081724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.948 qpair failed and we were unable to recover it. 
00:29:08.948 [2024-07-25 15:25:01.082210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.948 [2024-07-25 15:25:01.082218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.948 qpair failed and we were unable to recover it. 00:29:08.948 [2024-07-25 15:25:01.082573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.948 [2024-07-25 15:25:01.082582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.948 qpair failed and we were unable to recover it. 00:29:08.948 [2024-07-25 15:25:01.082900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.948 [2024-07-25 15:25:01.082908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.948 qpair failed and we were unable to recover it. 00:29:08.948 [2024-07-25 15:25:01.083353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.948 [2024-07-25 15:25:01.083361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.948 qpair failed and we were unable to recover it. 00:29:08.948 [2024-07-25 15:25:01.083845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.948 [2024-07-25 15:25:01.083853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.948 qpair failed and we were unable to recover it. 
00:29:08.948 [2024-07-25 15:25:01.084076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.948 [2024-07-25 15:25:01.084084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.948 qpair failed and we were unable to recover it. 00:29:08.948 [2024-07-25 15:25:01.084572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.948 [2024-07-25 15:25:01.084580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.948 qpair failed and we were unable to recover it. 00:29:08.948 [2024-07-25 15:25:01.085034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.948 [2024-07-25 15:25:01.085042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.948 qpair failed and we were unable to recover it. 00:29:08.948 [2024-07-25 15:25:01.085608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.948 [2024-07-25 15:25:01.085637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.948 qpair failed and we were unable to recover it. 00:29:08.948 [2024-07-25 15:25:01.086143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.948 [2024-07-25 15:25:01.086152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.948 qpair failed and we were unable to recover it. 
00:29:08.948 [2024-07-25 15:25:01.086704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.948 [2024-07-25 15:25:01.086733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.948 qpair failed and we were unable to recover it. 00:29:08.948 [2024-07-25 15:25:01.087213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.948 [2024-07-25 15:25:01.087227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.948 qpair failed and we were unable to recover it. 00:29:08.948 [2024-07-25 15:25:01.087723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.948 [2024-07-25 15:25:01.087732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.948 qpair failed and we were unable to recover it. 00:29:08.948 [2024-07-25 15:25:01.088097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.948 [2024-07-25 15:25:01.088105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.948 qpair failed and we were unable to recover it. 00:29:08.948 [2024-07-25 15:25:01.088570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.948 [2024-07-25 15:25:01.088578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.948 qpair failed and we were unable to recover it. 
00:29:08.948 [2024-07-25 15:25:01.089032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.948 [2024-07-25 15:25:01.089040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.948 qpair failed and we were unable to recover it. 00:29:08.948 [2024-07-25 15:25:01.089395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.948 [2024-07-25 15:25:01.089424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.948 qpair failed and we were unable to recover it. 00:29:08.948 [2024-07-25 15:25:01.089880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.948 [2024-07-25 15:25:01.089890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.948 qpair failed and we were unable to recover it. 00:29:08.948 [2024-07-25 15:25:01.090347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.948 [2024-07-25 15:25:01.090356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.948 qpair failed and we were unable to recover it. 00:29:08.948 [2024-07-25 15:25:01.090828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.948 [2024-07-25 15:25:01.090836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.948 qpair failed and we were unable to recover it. 
00:29:08.948 [2024-07-25 15:25:01.091136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.948 [2024-07-25 15:25:01.091146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.948 qpair failed and we were unable to recover it. 00:29:08.948 [2024-07-25 15:25:01.091610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.948 [2024-07-25 15:25:01.091619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.948 qpair failed and we were unable to recover it. 00:29:08.948 [2024-07-25 15:25:01.092077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.948 [2024-07-25 15:25:01.092085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.948 qpair failed and we were unable to recover it. 00:29:08.948 [2024-07-25 15:25:01.092440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.948 [2024-07-25 15:25:01.092449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.948 qpair failed and we were unable to recover it. 00:29:08.948 [2024-07-25 15:25:01.092969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.948 [2024-07-25 15:25:01.092978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.948 qpair failed and we were unable to recover it. 
00:29:08.948 [2024-07-25 15:25:01.093560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.948 [2024-07-25 15:25:01.093589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.948 qpair failed and we were unable to recover it. 00:29:08.948 [2024-07-25 15:25:01.094038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.948 [2024-07-25 15:25:01.094048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.948 qpair failed and we were unable to recover it. 00:29:08.948 [2024-07-25 15:25:01.094607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.948 [2024-07-25 15:25:01.094636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.948 qpair failed and we were unable to recover it. 00:29:08.948 [2024-07-25 15:25:01.095113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.948 [2024-07-25 15:25:01.095124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.948 qpair failed and we were unable to recover it. 00:29:08.948 [2024-07-25 15:25:01.095532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.948 [2024-07-25 15:25:01.095541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.948 qpair failed and we were unable to recover it. 
00:29:08.948 [2024-07-25 15:25:01.095908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.948 [2024-07-25 15:25:01.095917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.948 qpair failed and we were unable to recover it. 00:29:08.948 [2024-07-25 15:25:01.096507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.948 [2024-07-25 15:25:01.096537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.948 qpair failed and we were unable to recover it. 00:29:08.948 [2024-07-25 15:25:01.096894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.948 [2024-07-25 15:25:01.096904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.948 qpair failed and we were unable to recover it. 00:29:08.948 [2024-07-25 15:25:01.097353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.948 [2024-07-25 15:25:01.097365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.948 qpair failed and we were unable to recover it. 00:29:08.948 [2024-07-25 15:25:01.097840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.948 [2024-07-25 15:25:01.097849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.948 qpair failed and we were unable to recover it. 
00:29:08.948 [2024-07-25 15:25:01.098230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.948 [2024-07-25 15:25:01.098239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.948 qpair failed and we were unable to recover it. 00:29:08.948 [2024-07-25 15:25:01.098440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.948 [2024-07-25 15:25:01.098453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.948 qpair failed and we were unable to recover it. 00:29:08.948 [2024-07-25 15:25:01.098935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.948 [2024-07-25 15:25:01.098943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.948 qpair failed and we were unable to recover it. 00:29:08.948 [2024-07-25 15:25:01.099409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.948 [2024-07-25 15:25:01.099417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.948 qpair failed and we were unable to recover it. 00:29:08.948 [2024-07-25 15:25:01.099630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.948 [2024-07-25 15:25:01.099640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:08.948 qpair failed and we were unable to recover it. 
[the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." sequence for tqpair=0x7fdc90000b90 (addr=10.0.0.2, port=4420) repeats through 15:25:01.151819; only the timestamps differ]
00:29:09.220 [2024-07-25 15:25:01.152175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.220 [2024-07-25 15:25:01.152183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.220 qpair failed and we were unable to recover it. 00:29:09.220 [2024-07-25 15:25:01.152741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.220 [2024-07-25 15:25:01.152750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.220 qpair failed and we were unable to recover it. 00:29:09.220 [2024-07-25 15:25:01.153400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.220 [2024-07-25 15:25:01.153429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.220 qpair failed and we were unable to recover it. 00:29:09.220 [2024-07-25 15:25:01.153764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.220 [2024-07-25 15:25:01.153774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.220 qpair failed and we were unable to recover it. 00:29:09.220 [2024-07-25 15:25:01.154237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.221 [2024-07-25 15:25:01.154245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.221 qpair failed and we were unable to recover it. 
00:29:09.221 [2024-07-25 15:25:01.154716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.221 [2024-07-25 15:25:01.154724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.221 qpair failed and we were unable to recover it. 00:29:09.221 [2024-07-25 15:25:01.155179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.221 [2024-07-25 15:25:01.155188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.221 qpair failed and we were unable to recover it. 00:29:09.221 [2024-07-25 15:25:01.155672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.221 [2024-07-25 15:25:01.155680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.221 qpair failed and we were unable to recover it. 00:29:09.221 [2024-07-25 15:25:01.156144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.221 [2024-07-25 15:25:01.156152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.221 qpair failed and we were unable to recover it. 00:29:09.221 [2024-07-25 15:25:01.156535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.221 [2024-07-25 15:25:01.156544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.221 qpair failed and we were unable to recover it. 
00:29:09.221 [2024-07-25 15:25:01.156998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.221 [2024-07-25 15:25:01.157006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.221 qpair failed and we were unable to recover it. 00:29:09.221 [2024-07-25 15:25:01.157589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.221 [2024-07-25 15:25:01.157619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.221 qpair failed and we were unable to recover it. 00:29:09.221 [2024-07-25 15:25:01.158102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.221 [2024-07-25 15:25:01.158111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.221 qpair failed and we were unable to recover it. 00:29:09.221 [2024-07-25 15:25:01.158606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.221 [2024-07-25 15:25:01.158615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.221 qpair failed and we were unable to recover it. 00:29:09.221 [2024-07-25 15:25:01.158952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.221 [2024-07-25 15:25:01.158960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.221 qpair failed and we were unable to recover it. 
00:29:09.221 [2024-07-25 15:25:01.159552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.221 [2024-07-25 15:25:01.159580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.221 qpair failed and we were unable to recover it. 00:29:09.221 [2024-07-25 15:25:01.160052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.221 [2024-07-25 15:25:01.160062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.221 qpair failed and we were unable to recover it. 00:29:09.221 [2024-07-25 15:25:01.160525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.221 [2024-07-25 15:25:01.160553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.221 qpair failed and we were unable to recover it. 00:29:09.221 [2024-07-25 15:25:01.161014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.221 [2024-07-25 15:25:01.161024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.221 qpair failed and we were unable to recover it. 00:29:09.221 [2024-07-25 15:25:01.161599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.221 [2024-07-25 15:25:01.161627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.221 qpair failed and we were unable to recover it. 
00:29:09.221 [2024-07-25 15:25:01.162102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.221 [2024-07-25 15:25:01.162112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.221 qpair failed and we were unable to recover it. 00:29:09.221 [2024-07-25 15:25:01.162682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.221 [2024-07-25 15:25:01.162711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.221 qpair failed and we were unable to recover it. 00:29:09.221 [2024-07-25 15:25:01.163187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.221 [2024-07-25 15:25:01.163197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.221 qpair failed and we were unable to recover it. 00:29:09.221 [2024-07-25 15:25:01.163660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.221 [2024-07-25 15:25:01.163689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.221 qpair failed and we were unable to recover it. 00:29:09.221 [2024-07-25 15:25:01.164172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.221 [2024-07-25 15:25:01.164183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.221 qpair failed and we were unable to recover it. 
00:29:09.221 [2024-07-25 15:25:01.164737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.221 [2024-07-25 15:25:01.164766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.221 qpair failed and we were unable to recover it. 00:29:09.221 [2024-07-25 15:25:01.165417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.221 [2024-07-25 15:25:01.165446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.221 qpair failed and we were unable to recover it. 00:29:09.221 [2024-07-25 15:25:01.165887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.221 [2024-07-25 15:25:01.165898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.221 qpair failed and we were unable to recover it. 00:29:09.221 [2024-07-25 15:25:01.166419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.221 [2024-07-25 15:25:01.166447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.221 qpair failed and we were unable to recover it. 00:29:09.221 [2024-07-25 15:25:01.166901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.221 [2024-07-25 15:25:01.166911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.221 qpair failed and we were unable to recover it. 
00:29:09.221 [2024-07-25 15:25:01.167507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.221 [2024-07-25 15:25:01.167539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.221 qpair failed and we were unable to recover it. 00:29:09.221 [2024-07-25 15:25:01.168023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.221 [2024-07-25 15:25:01.168032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.221 qpair failed and we were unable to recover it. 00:29:09.221 [2024-07-25 15:25:01.168529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.221 [2024-07-25 15:25:01.168558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.221 qpair failed and we were unable to recover it. 00:29:09.221 [2024-07-25 15:25:01.169014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.221 [2024-07-25 15:25:01.169023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.221 qpair failed and we were unable to recover it. 00:29:09.221 [2024-07-25 15:25:01.169473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.221 [2024-07-25 15:25:01.169502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.221 qpair failed and we were unable to recover it. 
00:29:09.221 [2024-07-25 15:25:01.169950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.221 [2024-07-25 15:25:01.169961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.221 qpair failed and we were unable to recover it. 00:29:09.221 [2024-07-25 15:25:01.170399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.221 [2024-07-25 15:25:01.170427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.221 qpair failed and we were unable to recover it. 00:29:09.221 [2024-07-25 15:25:01.170885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.221 [2024-07-25 15:25:01.170894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.221 qpair failed and we were unable to recover it. 00:29:09.221 [2024-07-25 15:25:01.171471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.221 [2024-07-25 15:25:01.171500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.221 qpair failed and we were unable to recover it. 00:29:09.221 [2024-07-25 15:25:01.171938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.221 [2024-07-25 15:25:01.171948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.221 qpair failed and we were unable to recover it. 
00:29:09.221 [2024-07-25 15:25:01.172526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.222 [2024-07-25 15:25:01.172555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.222 qpair failed and we were unable to recover it. 00:29:09.222 [2024-07-25 15:25:01.173012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.222 [2024-07-25 15:25:01.173022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.222 qpair failed and we were unable to recover it. 00:29:09.222 [2024-07-25 15:25:01.173568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.222 [2024-07-25 15:25:01.173597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.222 qpair failed and we were unable to recover it. 00:29:09.222 [2024-07-25 15:25:01.174036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.222 [2024-07-25 15:25:01.174045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.222 qpair failed and we were unable to recover it. 00:29:09.222 [2024-07-25 15:25:01.174556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.222 [2024-07-25 15:25:01.174585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.222 qpair failed and we were unable to recover it. 
00:29:09.222 [2024-07-25 15:25:01.175053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.222 [2024-07-25 15:25:01.175063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.222 qpair failed and we were unable to recover it. 00:29:09.222 [2024-07-25 15:25:01.175520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.222 [2024-07-25 15:25:01.175549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.222 qpair failed and we were unable to recover it. 00:29:09.222 [2024-07-25 15:25:01.175997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.222 [2024-07-25 15:25:01.176008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.222 qpair failed and we were unable to recover it. 00:29:09.222 [2024-07-25 15:25:01.176444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.222 [2024-07-25 15:25:01.176474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.222 qpair failed and we were unable to recover it. 00:29:09.222 [2024-07-25 15:25:01.176948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.222 [2024-07-25 15:25:01.176958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.222 qpair failed and we were unable to recover it. 
00:29:09.222 [2024-07-25 15:25:01.177475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.222 [2024-07-25 15:25:01.177504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.222 qpair failed and we were unable to recover it. 00:29:09.222 [2024-07-25 15:25:01.177983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.222 [2024-07-25 15:25:01.177993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.222 qpair failed and we were unable to recover it. 00:29:09.222 [2024-07-25 15:25:01.178504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.222 [2024-07-25 15:25:01.178532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.222 qpair failed and we were unable to recover it. 00:29:09.222 [2024-07-25 15:25:01.178995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.222 [2024-07-25 15:25:01.179004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.222 qpair failed and we were unable to recover it. 00:29:09.222 [2024-07-25 15:25:01.179628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.222 [2024-07-25 15:25:01.179657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.222 qpair failed and we were unable to recover it. 
00:29:09.222 [2024-07-25 15:25:01.180105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.222 [2024-07-25 15:25:01.180114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.222 qpair failed and we were unable to recover it. 00:29:09.222 [2024-07-25 15:25:01.180714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.222 [2024-07-25 15:25:01.180743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.222 qpair failed and we were unable to recover it. 00:29:09.222 [2024-07-25 15:25:01.181110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.222 [2024-07-25 15:25:01.181120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.222 qpair failed and we were unable to recover it. 00:29:09.222 [2024-07-25 15:25:01.181576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.222 [2024-07-25 15:25:01.181586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.222 qpair failed and we were unable to recover it. 00:29:09.222 [2024-07-25 15:25:01.182070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.222 [2024-07-25 15:25:01.182078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.222 qpair failed and we were unable to recover it. 
00:29:09.222 [2024-07-25 15:25:01.182635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.222 [2024-07-25 15:25:01.182664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.222 qpair failed and we were unable to recover it. 00:29:09.222 [2024-07-25 15:25:01.183138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.222 [2024-07-25 15:25:01.183148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.222 qpair failed and we were unable to recover it. 00:29:09.222 [2024-07-25 15:25:01.183687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.222 [2024-07-25 15:25:01.183716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.222 qpair failed and we were unable to recover it. 00:29:09.222 [2024-07-25 15:25:01.184186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.222 [2024-07-25 15:25:01.184196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.222 qpair failed and we were unable to recover it. 00:29:09.222 [2024-07-25 15:25:01.184771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.222 [2024-07-25 15:25:01.184800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.222 qpair failed and we were unable to recover it. 
00:29:09.222 [2024-07-25 15:25:01.185416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.222 [2024-07-25 15:25:01.185445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.222 qpair failed and we were unable to recover it. 00:29:09.222 [2024-07-25 15:25:01.185810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.222 [2024-07-25 15:25:01.185819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.222 qpair failed and we were unable to recover it. 00:29:09.222 [2024-07-25 15:25:01.186466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.222 [2024-07-25 15:25:01.186495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.222 qpair failed and we were unable to recover it. 00:29:09.222 [2024-07-25 15:25:01.186850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.222 [2024-07-25 15:25:01.186860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.222 qpair failed and we were unable to recover it. 00:29:09.222 [2024-07-25 15:25:01.187322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.222 [2024-07-25 15:25:01.187331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.222 qpair failed and we were unable to recover it. 
00:29:09.222 [2024-07-25 15:25:01.187807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.222 [2024-07-25 15:25:01.187818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.222 qpair failed and we were unable to recover it. 00:29:09.222 [2024-07-25 15:25:01.188302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.222 [2024-07-25 15:25:01.188310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.222 qpair failed and we were unable to recover it. 00:29:09.222 [2024-07-25 15:25:01.188668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.222 [2024-07-25 15:25:01.188676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.222 qpair failed and we were unable to recover it. 00:29:09.222 [2024-07-25 15:25:01.189138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.222 [2024-07-25 15:25:01.189146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.222 qpair failed and we were unable to recover it. 00:29:09.222 [2024-07-25 15:25:01.189589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.222 [2024-07-25 15:25:01.189597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.222 qpair failed and we were unable to recover it. 
00:29:09.222 [2024-07-25 15:25:01.190083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.222 [2024-07-25 15:25:01.190092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.223 qpair failed and we were unable to recover it. 00:29:09.223 [2024-07-25 15:25:01.190547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.223 [2024-07-25 15:25:01.190555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.223 qpair failed and we were unable to recover it. 00:29:09.223 [2024-07-25 15:25:01.190908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.223 [2024-07-25 15:25:01.190917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.223 qpair failed and we were unable to recover it. 00:29:09.223 [2024-07-25 15:25:01.191388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.223 [2024-07-25 15:25:01.191396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.223 qpair failed and we were unable to recover it. 00:29:09.223 [2024-07-25 15:25:01.191870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.223 [2024-07-25 15:25:01.191878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.223 qpair failed and we were unable to recover it. 
00:29:09.223 [2024-07-25 15:25:01.192335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.223 [2024-07-25 15:25:01.192343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.223 qpair failed and we were unable to recover it. 00:29:09.223 [2024-07-25 15:25:01.192558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.223 [2024-07-25 15:25:01.192569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.223 qpair failed and we were unable to recover it. 00:29:09.223 [2024-07-25 15:25:01.193093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.223 [2024-07-25 15:25:01.193102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.223 qpair failed and we were unable to recover it. 00:29:09.223 [2024-07-25 15:25:01.193327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.223 [2024-07-25 15:25:01.193339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.223 qpair failed and we were unable to recover it. 00:29:09.223 [2024-07-25 15:25:01.193809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.223 [2024-07-25 15:25:01.193818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.223 qpair failed and we were unable to recover it. 
00:29:09.223 [2024-07-25 15:25:01.194268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.223 [2024-07-25 15:25:01.194276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.223 qpair failed and we were unable to recover it. 00:29:09.223 [2024-07-25 15:25:01.194759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.223 [2024-07-25 15:25:01.194767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.223 qpair failed and we were unable to recover it. 00:29:09.223 [2024-07-25 15:25:01.194985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.223 [2024-07-25 15:25:01.194994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.223 qpair failed and we were unable to recover it. 00:29:09.223 [2024-07-25 15:25:01.195464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.223 [2024-07-25 15:25:01.195472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.223 qpair failed and we were unable to recover it. 00:29:09.223 [2024-07-25 15:25:01.195933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.223 [2024-07-25 15:25:01.195941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.223 qpair failed and we were unable to recover it. 
00:29:09.223 [2024-07-25 15:25:01.196422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.223 [2024-07-25 15:25:01.196430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.223 qpair failed and we were unable to recover it. 00:29:09.223 [2024-07-25 15:25:01.196911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.223 [2024-07-25 15:25:01.196919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.223 qpair failed and we were unable to recover it. 00:29:09.223 [2024-07-25 15:25:01.197502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.223 [2024-07-25 15:25:01.197510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.223 qpair failed and we were unable to recover it. 00:29:09.223 [2024-07-25 15:25:01.197854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.223 [2024-07-25 15:25:01.197862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.223 qpair failed and we were unable to recover it. 00:29:09.223 [2024-07-25 15:25:01.198321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.223 [2024-07-25 15:25:01.198329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.223 qpair failed and we were unable to recover it. 
00:29:09.223 [2024-07-25 15:25:01.198802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.223 [2024-07-25 15:25:01.198810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.223 qpair failed and we were unable to recover it. 00:29:09.223 [2024-07-25 15:25:01.199265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.223 [2024-07-25 15:25:01.199274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.223 qpair failed and we were unable to recover it. 00:29:09.223 [2024-07-25 15:25:01.199740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.223 [2024-07-25 15:25:01.199748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.223 qpair failed and we were unable to recover it. 00:29:09.223 [2024-07-25 15:25:01.200079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.223 [2024-07-25 15:25:01.200087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.223 qpair failed and we were unable to recover it. 00:29:09.223 [2024-07-25 15:25:01.200443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.223 [2024-07-25 15:25:01.200453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.223 qpair failed and we were unable to recover it. 
00:29:09.223 [2024-07-25 15:25:01.200896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.223 [2024-07-25 15:25:01.200904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.223 qpair failed and we were unable to recover it. 00:29:09.223 [2024-07-25 15:25:01.201334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.223 [2024-07-25 15:25:01.201342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.223 qpair failed and we were unable to recover it. 00:29:09.223 [2024-07-25 15:25:01.201804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.223 [2024-07-25 15:25:01.201813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.223 qpair failed and we were unable to recover it. 00:29:09.223 [2024-07-25 15:25:01.202286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.223 [2024-07-25 15:25:01.202294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.223 qpair failed and we were unable to recover it. 00:29:09.223 [2024-07-25 15:25:01.202761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.223 [2024-07-25 15:25:01.202768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.223 qpair failed and we were unable to recover it. 
00:29:09.223 [2024-07-25 15:25:01.203110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.223 [2024-07-25 15:25:01.203118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.223 qpair failed and we were unable to recover it. 00:29:09.223 [2024-07-25 15:25:01.203616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.223 [2024-07-25 15:25:01.203624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.223 qpair failed and we were unable to recover it. 00:29:09.223 [2024-07-25 15:25:01.204061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.223 [2024-07-25 15:25:01.204069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.223 qpair failed and we were unable to recover it. 00:29:09.223 [2024-07-25 15:25:01.204532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.223 [2024-07-25 15:25:01.204561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.223 qpair failed and we were unable to recover it. 00:29:09.223 [2024-07-25 15:25:01.205023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.223 [2024-07-25 15:25:01.205034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.223 qpair failed and we were unable to recover it. 
00:29:09.223 [2024-07-25 15:25:01.205586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.223 [2024-07-25 15:25:01.205620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.223 qpair failed and we were unable to recover it. 00:29:09.224 [2024-07-25 15:25:01.206073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.224 [2024-07-25 15:25:01.206082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.224 qpair failed and we were unable to recover it. 00:29:09.224 [2024-07-25 15:25:01.206633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.224 [2024-07-25 15:25:01.206661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.224 qpair failed and we were unable to recover it. 00:29:09.224 [2024-07-25 15:25:01.207129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.224 [2024-07-25 15:25:01.207139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.224 qpair failed and we were unable to recover it. 00:29:09.224 [2024-07-25 15:25:01.207697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.224 [2024-07-25 15:25:01.207726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.224 qpair failed and we were unable to recover it. 
00:29:09.224 [2024-07-25 15:25:01.208173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.224 [2024-07-25 15:25:01.208183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.224 qpair failed and we were unable to recover it. 00:29:09.224 [2024-07-25 15:25:01.208645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.224 [2024-07-25 15:25:01.208674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.224 qpair failed and we were unable to recover it. 00:29:09.224 [2024-07-25 15:25:01.209140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.224 [2024-07-25 15:25:01.209150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.224 qpair failed and we were unable to recover it. 00:29:09.224 [2024-07-25 15:25:01.209653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.224 [2024-07-25 15:25:01.209662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.224 qpair failed and we were unable to recover it. 00:29:09.224 [2024-07-25 15:25:01.210133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.224 [2024-07-25 15:25:01.210142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.224 qpair failed and we were unable to recover it. 
00:29:09.224 [2024-07-25 15:25:01.210484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.224 [2024-07-25 15:25:01.210514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.224 qpair failed and we were unable to recover it. 00:29:09.224 [2024-07-25 15:25:01.210974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.224 [2024-07-25 15:25:01.210983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.224 qpair failed and we were unable to recover it. 00:29:09.224 [2024-07-25 15:25:01.211467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.224 [2024-07-25 15:25:01.211496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.224 qpair failed and we were unable to recover it. 00:29:09.224 [2024-07-25 15:25:01.211945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.224 [2024-07-25 15:25:01.211955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.224 qpair failed and we were unable to recover it. 00:29:09.224 [2024-07-25 15:25:01.212549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.224 [2024-07-25 15:25:01.212578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.224 qpair failed and we were unable to recover it. 
00:29:09.224 [2024-07-25 15:25:01.213088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.224 [2024-07-25 15:25:01.213099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.224 qpair failed and we were unable to recover it. 00:29:09.224 [2024-07-25 15:25:01.213529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.224 [2024-07-25 15:25:01.213537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.224 qpair failed and we were unable to recover it. 00:29:09.224 [2024-07-25 15:25:01.213982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.224 [2024-07-25 15:25:01.213992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.224 qpair failed and we were unable to recover it. 00:29:09.224 [2024-07-25 15:25:01.214535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.224 [2024-07-25 15:25:01.214564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.224 qpair failed and we were unable to recover it. 00:29:09.224 [2024-07-25 15:25:01.215027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.224 [2024-07-25 15:25:01.215036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.224 qpair failed and we were unable to recover it. 
00:29:09.224 [2024-07-25 15:25:01.215639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.224 [2024-07-25 15:25:01.215668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.224 qpair failed and we were unable to recover it. 00:29:09.224 [2024-07-25 15:25:01.216142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.224 [2024-07-25 15:25:01.216152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.224 qpair failed and we were unable to recover it. 00:29:09.224 [2024-07-25 15:25:01.216545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.224 [2024-07-25 15:25:01.216574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.224 qpair failed and we were unable to recover it. 00:29:09.224 [2024-07-25 15:25:01.216987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.224 [2024-07-25 15:25:01.216997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.224 qpair failed and we were unable to recover it. 00:29:09.224 [2024-07-25 15:25:01.217445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.224 [2024-07-25 15:25:01.217474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.224 qpair failed and we were unable to recover it. 
00:29:09.224 [2024-07-25 15:25:01.217953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.224 [2024-07-25 15:25:01.217963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.224 qpair failed and we were unable to recover it. 00:29:09.224 [2024-07-25 15:25:01.218515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.224 [2024-07-25 15:25:01.218545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.224 qpair failed and we were unable to recover it. 00:29:09.224 [2024-07-25 15:25:01.219011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.224 [2024-07-25 15:25:01.219021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.224 qpair failed and we were unable to recover it. 00:29:09.224 [2024-07-25 15:25:01.219632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.224 [2024-07-25 15:25:01.219660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.224 qpair failed and we were unable to recover it. 00:29:09.224 [2024-07-25 15:25:01.220142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.224 [2024-07-25 15:25:01.220152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.224 qpair failed and we were unable to recover it. 
00:29:09.224 [2024-07-25 15:25:01.220749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.224 [2024-07-25 15:25:01.220778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.224 qpair failed and we were unable to recover it. 00:29:09.225 [2024-07-25 15:25:01.221390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.225 [2024-07-25 15:25:01.221419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.225 qpair failed and we were unable to recover it. 00:29:09.225 [2024-07-25 15:25:01.221781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.225 [2024-07-25 15:25:01.221792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.225 qpair failed and we were unable to recover it. 00:29:09.225 [2024-07-25 15:25:01.222262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.225 [2024-07-25 15:25:01.222270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.225 qpair failed and we were unable to recover it. 00:29:09.225 [2024-07-25 15:25:01.222602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.225 [2024-07-25 15:25:01.222611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.225 qpair failed and we were unable to recover it. 
00:29:09.225 [2024-07-25 15:25:01.222973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.225 [2024-07-25 15:25:01.222981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.225 qpair failed and we were unable to recover it. 00:29:09.225 [2024-07-25 15:25:01.223482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.225 [2024-07-25 15:25:01.223491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.225 qpair failed and we were unable to recover it. 00:29:09.225 [2024-07-25 15:25:01.223966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.225 [2024-07-25 15:25:01.223974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.225 qpair failed and we were unable to recover it. 00:29:09.225 [2024-07-25 15:25:01.224565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.225 [2024-07-25 15:25:01.224594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.225 qpair failed and we were unable to recover it. 00:29:09.225 [2024-07-25 15:25:01.225079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.225 [2024-07-25 15:25:01.225089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.225 qpair failed and we were unable to recover it. 
00:29:09.225 [2024-07-25 15:25:01.225560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.225 [2024-07-25 15:25:01.225572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.225 qpair failed and we were unable to recover it. 00:29:09.225 [2024-07-25 15:25:01.226045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.225 [2024-07-25 15:25:01.226054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.225 qpair failed and we were unable to recover it. 00:29:09.225 [2024-07-25 15:25:01.226530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.225 [2024-07-25 15:25:01.226558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.225 qpair failed and we were unable to recover it. 00:29:09.225 [2024-07-25 15:25:01.227068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.225 [2024-07-25 15:25:01.227078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.225 qpair failed and we were unable to recover it. 00:29:09.225 [2024-07-25 15:25:01.227570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.225 [2024-07-25 15:25:01.227599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.225 qpair failed and we were unable to recover it. 
00:29:09.225 [2024-07-25 15:25:01.228074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.225 [2024-07-25 15:25:01.228084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.225 qpair failed and we were unable to recover it. 00:29:09.225 [2024-07-25 15:25:01.228571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.225 [2024-07-25 15:25:01.228580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.225 qpair failed and we were unable to recover it. 00:29:09.225 [2024-07-25 15:25:01.228937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.225 [2024-07-25 15:25:01.228945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.225 qpair failed and we were unable to recover it. 00:29:09.225 [2024-07-25 15:25:01.229437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.225 [2024-07-25 15:25:01.229467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.225 qpair failed and we were unable to recover it. 00:29:09.225 [2024-07-25 15:25:01.229946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.225 [2024-07-25 15:25:01.229955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.225 qpair failed and we were unable to recover it. 
00:29:09.225 [2024-07-25 15:25:01.230495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.225 [2024-07-25 15:25:01.230524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.225 qpair failed and we were unable to recover it. 00:29:09.225 [2024-07-25 15:25:01.230985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.225 [2024-07-25 15:25:01.230996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.225 qpair failed and we were unable to recover it. 00:29:09.225 [2024-07-25 15:25:01.231563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.225 [2024-07-25 15:25:01.231593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.225 qpair failed and we were unable to recover it. 00:29:09.225 [2024-07-25 15:25:01.232065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.225 [2024-07-25 15:25:01.232075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.225 qpair failed and we were unable to recover it. 00:29:09.225 [2024-07-25 15:25:01.232545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.225 [2024-07-25 15:25:01.232573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.225 qpair failed and we were unable to recover it. 
00:29:09.225 [2024-07-25 15:25:01.233024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.225 [2024-07-25 15:25:01.233033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.225 qpair failed and we were unable to recover it. 00:29:09.225 [2024-07-25 15:25:01.233486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.225 [2024-07-25 15:25:01.233515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.225 qpair failed and we were unable to recover it. 00:29:09.225 [2024-07-25 15:25:01.233990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.225 [2024-07-25 15:25:01.234000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.225 qpair failed and we were unable to recover it. 00:29:09.225 [2024-07-25 15:25:01.234798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.225 [2024-07-25 15:25:01.234817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.225 qpair failed and we were unable to recover it. 00:29:09.225 [2024-07-25 15:25:01.235414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.225 [2024-07-25 15:25:01.235443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.225 qpair failed and we were unable to recover it. 
00:29:09.225 [2024-07-25 15:25:01.235902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.225 [2024-07-25 15:25:01.235911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.225 qpair failed and we were unable to recover it. 00:29:09.225 [2024-07-25 15:25:01.236488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.225 [2024-07-25 15:25:01.236517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.225 qpair failed and we were unable to recover it. 00:29:09.225 [2024-07-25 15:25:01.236977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.225 [2024-07-25 15:25:01.236987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.225 qpair failed and we were unable to recover it. 00:29:09.225 [2024-07-25 15:25:01.237379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.225 [2024-07-25 15:25:01.237388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.225 qpair failed and we were unable to recover it. 00:29:09.225 [2024-07-25 15:25:01.237850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.225 [2024-07-25 15:25:01.237859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.225 qpair failed and we were unable to recover it. 
00:29:09.225 [2024-07-25 15:25:01.238446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.225 [2024-07-25 15:25:01.238475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.225 qpair failed and we were unable to recover it. 00:29:09.225 [2024-07-25 15:25:01.238934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.225 [2024-07-25 15:25:01.238944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.225 qpair failed and we were unable to recover it. 00:29:09.225 [2024-07-25 15:25:01.239499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.226 [2024-07-25 15:25:01.239528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.226 qpair failed and we were unable to recover it. 00:29:09.226 [2024-07-25 15:25:01.240046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.226 [2024-07-25 15:25:01.240056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.226 qpair failed and we were unable to recover it. 00:29:09.226 [2024-07-25 15:25:01.240612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.226 [2024-07-25 15:25:01.240640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.226 qpair failed and we were unable to recover it. 
00:29:09.226 [2024-07-25 15:25:01.241097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.226 [2024-07-25 15:25:01.241107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.226 qpair failed and we were unable to recover it.
00:29:09.226 [2024-07-25 15:25:01.241654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.226 [2024-07-25 15:25:01.241684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.226 qpair failed and we were unable to recover it.
00:29:09.226 [2024-07-25 15:25:01.242187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.226 [2024-07-25 15:25:01.242197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.226 qpair failed and we were unable to recover it.
00:29:09.226 [2024-07-25 15:25:01.242670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.226 [2024-07-25 15:25:01.242679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.226 qpair failed and we were unable to recover it.
00:29:09.226 [2024-07-25 15:25:01.243193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.226 [2024-07-25 15:25:01.243209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.226 qpair failed and we were unable to recover it.
00:29:09.226 [2024-07-25 15:25:01.243732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.226 [2024-07-25 15:25:01.243761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.226 qpair failed and we were unable to recover it.
00:29:09.226 [2024-07-25 15:25:01.244417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.226 [2024-07-25 15:25:01.244446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.226 qpair failed and we were unable to recover it.
00:29:09.226 [2024-07-25 15:25:01.244979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.226 [2024-07-25 15:25:01.244989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.226 qpair failed and we were unable to recover it.
00:29:09.226 [2024-07-25 15:25:01.245543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.226 [2024-07-25 15:25:01.245572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.226 qpair failed and we were unable to recover it.
00:29:09.226 [2024-07-25 15:25:01.245931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.226 [2024-07-25 15:25:01.245941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.226 qpair failed and we were unable to recover it.
00:29:09.226 [2024-07-25 15:25:01.246515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.226 [2024-07-25 15:25:01.246547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.226 qpair failed and we were unable to recover it.
00:29:09.226 [2024-07-25 15:25:01.246768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.226 [2024-07-25 15:25:01.246779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.226 qpair failed and we were unable to recover it.
00:29:09.226 [2024-07-25 15:25:01.247229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.226 [2024-07-25 15:25:01.247238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.226 qpair failed and we were unable to recover it.
00:29:09.226 [2024-07-25 15:25:01.247703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.226 [2024-07-25 15:25:01.247711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.226 qpair failed and we were unable to recover it.
00:29:09.226 [2024-07-25 15:25:01.247904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.226 [2024-07-25 15:25:01.247913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.226 qpair failed and we were unable to recover it.
00:29:09.226 [2024-07-25 15:25:01.248365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.226 [2024-07-25 15:25:01.248374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.226 qpair failed and we were unable to recover it.
00:29:09.226 [2024-07-25 15:25:01.248829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.226 [2024-07-25 15:25:01.248837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.226 qpair failed and we were unable to recover it.
00:29:09.226 [2024-07-25 15:25:01.249335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.226 [2024-07-25 15:25:01.249343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.226 qpair failed and we were unable to recover it.
00:29:09.226 [2024-07-25 15:25:01.249823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.226 [2024-07-25 15:25:01.249831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.226 qpair failed and we were unable to recover it.
00:29:09.226 [2024-07-25 15:25:01.250185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.226 [2024-07-25 15:25:01.250192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.226 qpair failed and we were unable to recover it.
00:29:09.226 [2024-07-25 15:25:01.250640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.226 [2024-07-25 15:25:01.250648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.226 qpair failed and we were unable to recover it.
00:29:09.226 [2024-07-25 15:25:01.250979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.226 [2024-07-25 15:25:01.250987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.226 qpair failed and we were unable to recover it.
00:29:09.226 [2024-07-25 15:25:01.251418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.226 [2024-07-25 15:25:01.251427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.226 qpair failed and we were unable to recover it.
00:29:09.226 [2024-07-25 15:25:01.251743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.226 [2024-07-25 15:25:01.251752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.226 qpair failed and we were unable to recover it.
00:29:09.226 [2024-07-25 15:25:01.252207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.226 [2024-07-25 15:25:01.252215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.226 qpair failed and we were unable to recover it.
00:29:09.226 [2024-07-25 15:25:01.252677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.226 [2024-07-25 15:25:01.252685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.226 qpair failed and we were unable to recover it.
00:29:09.226 [2024-07-25 15:25:01.252907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.226 [2024-07-25 15:25:01.252917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.226 qpair failed and we were unable to recover it.
00:29:09.226 [2024-07-25 15:25:01.253381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.226 [2024-07-25 15:25:01.253390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.226 qpair failed and we were unable to recover it.
00:29:09.226 [2024-07-25 15:25:01.253612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.226 [2024-07-25 15:25:01.253623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.226 qpair failed and we were unable to recover it.
00:29:09.226 [2024-07-25 15:25:01.254080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.226 [2024-07-25 15:25:01.254088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.226 qpair failed and we were unable to recover it.
00:29:09.226 [2024-07-25 15:25:01.254301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.226 [2024-07-25 15:25:01.254311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.226 qpair failed and we were unable to recover it.
00:29:09.226 [2024-07-25 15:25:01.254795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.226 [2024-07-25 15:25:01.254803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.226 qpair failed and we were unable to recover it.
00:29:09.226 [2024-07-25 15:25:01.255017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.226 [2024-07-25 15:25:01.255026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.226 qpair failed and we were unable to recover it.
00:29:09.226 [2024-07-25 15:25:01.255396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.227 [2024-07-25 15:25:01.255411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.227 qpair failed and we were unable to recover it.
00:29:09.227 [2024-07-25 15:25:01.255856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.227 [2024-07-25 15:25:01.255864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.227 qpair failed and we were unable to recover it.
00:29:09.227 [2024-07-25 15:25:01.256305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.227 [2024-07-25 15:25:01.256314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.227 qpair failed and we were unable to recover it.
00:29:09.227 [2024-07-25 15:25:01.256770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.227 [2024-07-25 15:25:01.256778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.227 qpair failed and we were unable to recover it.
00:29:09.227 [2024-07-25 15:25:01.257227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.227 [2024-07-25 15:25:01.257236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.227 qpair failed and we were unable to recover it.
00:29:09.227 [2024-07-25 15:25:01.257703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.227 [2024-07-25 15:25:01.257711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.227 qpair failed and we were unable to recover it.
00:29:09.227 [2024-07-25 15:25:01.258174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.227 [2024-07-25 15:25:01.258182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.227 qpair failed and we were unable to recover it.
00:29:09.227 [2024-07-25 15:25:01.258650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.227 [2024-07-25 15:25:01.258659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.227 qpair failed and we were unable to recover it.
00:29:09.227 [2024-07-25 15:25:01.259018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.227 [2024-07-25 15:25:01.259026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.227 qpair failed and we were unable to recover it.
00:29:09.227 [2024-07-25 15:25:01.259478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.227 [2024-07-25 15:25:01.259487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.227 qpair failed and we were unable to recover it.
00:29:09.227 [2024-07-25 15:25:01.259963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.227 [2024-07-25 15:25:01.259971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.227 qpair failed and we were unable to recover it.
00:29:09.227 [2024-07-25 15:25:01.260434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.227 [2024-07-25 15:25:01.260463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.227 qpair failed and we were unable to recover it.
00:29:09.227 [2024-07-25 15:25:01.260849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.227 [2024-07-25 15:25:01.260859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.227 qpair failed and we were unable to recover it.
00:29:09.227 [2024-07-25 15:25:01.261360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.227 [2024-07-25 15:25:01.261368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.227 qpair failed and we were unable to recover it.
00:29:09.227 [2024-07-25 15:25:01.261837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.227 [2024-07-25 15:25:01.261845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.227 qpair failed and we were unable to recover it.
00:29:09.227 [2024-07-25 15:25:01.262413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.227 [2024-07-25 15:25:01.262442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.227 qpair failed and we were unable to recover it.
00:29:09.227 [2024-07-25 15:25:01.262908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.227 [2024-07-25 15:25:01.262919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.227 qpair failed and we were unable to recover it.
00:29:09.227 [2024-07-25 15:25:01.263377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.227 [2024-07-25 15:25:01.263389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.227 qpair failed and we were unable to recover it.
00:29:09.227 [2024-07-25 15:25:01.263860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.227 [2024-07-25 15:25:01.263868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.227 qpair failed and we were unable to recover it.
00:29:09.227 [2024-07-25 15:25:01.264433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.227 [2024-07-25 15:25:01.264462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.227 qpair failed and we were unable to recover it.
00:29:09.227 [2024-07-25 15:25:01.264930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.227 [2024-07-25 15:25:01.264940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.227 qpair failed and we were unable to recover it.
00:29:09.227 [2024-07-25 15:25:01.265396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.227 [2024-07-25 15:25:01.265406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.227 qpair failed and we were unable to recover it.
00:29:09.227 [2024-07-25 15:25:01.265871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.227 [2024-07-25 15:25:01.265880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.227 qpair failed and we were unable to recover it.
00:29:09.227 [2024-07-25 15:25:01.266432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.227 [2024-07-25 15:25:01.266461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.227 qpair failed and we were unable to recover it.
00:29:09.227 [2024-07-25 15:25:01.266924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.227 [2024-07-25 15:25:01.266933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.227 qpair failed and we were unable to recover it.
00:29:09.227 [2024-07-25 15:25:01.267508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.227 [2024-07-25 15:25:01.267537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.227 qpair failed and we were unable to recover it.
00:29:09.227 [2024-07-25 15:25:01.267994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.227 [2024-07-25 15:25:01.268003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.227 qpair failed and we were unable to recover it.
00:29:09.227 [2024-07-25 15:25:01.268620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.227 [2024-07-25 15:25:01.268650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.227 qpair failed and we were unable to recover it.
00:29:09.227 [2024-07-25 15:25:01.269106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.227 [2024-07-25 15:25:01.269116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.227 qpair failed and we were unable to recover it.
00:29:09.227 [2024-07-25 15:25:01.269572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.227 [2024-07-25 15:25:01.269580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.227 qpair failed and we were unable to recover it.
00:29:09.227 [2024-07-25 15:25:01.270027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.227 [2024-07-25 15:25:01.270034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.227 qpair failed and we were unable to recover it.
00:29:09.227 [2024-07-25 15:25:01.270609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.227 [2024-07-25 15:25:01.270637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.227 qpair failed and we were unable to recover it.
00:29:09.227 [2024-07-25 15:25:01.271089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.227 [2024-07-25 15:25:01.271099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.227 qpair failed and we were unable to recover it.
00:29:09.227 [2024-07-25 15:25:01.271677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.227 [2024-07-25 15:25:01.271706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.227 qpair failed and we were unable to recover it.
00:29:09.227 [2024-07-25 15:25:01.272150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.227 [2024-07-25 15:25:01.272160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.227 qpair failed and we were unable to recover it.
00:29:09.227 [2024-07-25 15:25:01.272605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.227 [2024-07-25 15:25:01.272614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.228 qpair failed and we were unable to recover it.
00:29:09.228 [2024-07-25 15:25:01.273118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.228 [2024-07-25 15:25:01.273126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.228 qpair failed and we were unable to recover it.
00:29:09.228 [2024-07-25 15:25:01.273618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.228 [2024-07-25 15:25:01.273648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.228 qpair failed and we were unable to recover it.
00:29:09.228 [2024-07-25 15:25:01.274132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.228 [2024-07-25 15:25:01.274142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.228 qpair failed and we were unable to recover it.
00:29:09.228 [2024-07-25 15:25:01.274597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.228 [2024-07-25 15:25:01.274605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.228 qpair failed and we were unable to recover it.
00:29:09.228 [2024-07-25 15:25:01.275108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.228 [2024-07-25 15:25:01.275117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.228 qpair failed and we were unable to recover it.
00:29:09.228 [2024-07-25 15:25:01.275666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.228 [2024-07-25 15:25:01.275695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.228 qpair failed and we were unable to recover it.
00:29:09.228 [2024-07-25 15:25:01.276125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.228 [2024-07-25 15:25:01.276136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.228 qpair failed and we were unable to recover it.
00:29:09.228 [2024-07-25 15:25:01.276606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.228 [2024-07-25 15:25:01.276616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.228 qpair failed and we were unable to recover it.
00:29:09.228 [2024-07-25 15:25:01.277075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.228 [2024-07-25 15:25:01.277083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.228 qpair failed and we were unable to recover it.
00:29:09.228 [2024-07-25 15:25:01.277640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.228 [2024-07-25 15:25:01.277669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.228 qpair failed and we were unable to recover it.
00:29:09.228 [2024-07-25 15:25:01.278146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.228 [2024-07-25 15:25:01.278156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.228 qpair failed and we were unable to recover it.
00:29:09.228 [2024-07-25 15:25:01.278682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.228 [2024-07-25 15:25:01.278711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.228 qpair failed and we were unable to recover it.
00:29:09.228 [2024-07-25 15:25:01.279183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.228 [2024-07-25 15:25:01.279194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.228 qpair failed and we were unable to recover it.
00:29:09.228 [2024-07-25 15:25:01.279755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.228 [2024-07-25 15:25:01.279784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.228 qpair failed and we were unable to recover it.
00:29:09.228 [2024-07-25 15:25:01.280180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.228 [2024-07-25 15:25:01.280190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.228 qpair failed and we were unable to recover it.
00:29:09.228 [2024-07-25 15:25:01.280694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.228 [2024-07-25 15:25:01.280723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.228 qpair failed and we were unable to recover it.
00:29:09.228 [2024-07-25 15:25:01.281123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.228 [2024-07-25 15:25:01.281133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.228 qpair failed and we were unable to recover it.
00:29:09.228 [2024-07-25 15:25:01.281691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.228 [2024-07-25 15:25:01.281720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.228 qpair failed and we were unable to recover it.
00:29:09.228 [2024-07-25 15:25:01.282165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.228 [2024-07-25 15:25:01.282175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.228 qpair failed and we were unable to recover it.
00:29:09.228 [2024-07-25 15:25:01.282731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.228 [2024-07-25 15:25:01.282760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.228 qpair failed and we were unable to recover it.
00:29:09.228 [2024-07-25 15:25:01.283435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.228 [2024-07-25 15:25:01.283463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.228 qpair failed and we were unable to recover it.
00:29:09.228 [2024-07-25 15:25:01.283933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.228 [2024-07-25 15:25:01.283946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.228 qpair failed and we were unable to recover it.
00:29:09.228 [2024-07-25 15:25:01.284519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.228 [2024-07-25 15:25:01.284548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.228 qpair failed and we were unable to recover it.
00:29:09.228 [2024-07-25 15:25:01.285097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.228 [2024-07-25 15:25:01.285108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.228 qpair failed and we were unable to recover it.
00:29:09.228 [2024-07-25 15:25:01.285421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.228 [2024-07-25 15:25:01.285429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.228 qpair failed and we were unable to recover it.
00:29:09.228 [2024-07-25 15:25:01.285777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.228 [2024-07-25 15:25:01.285786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.228 qpair failed and we were unable to recover it.
00:29:09.228 [2024-07-25 15:25:01.286265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.228 [2024-07-25 15:25:01.286273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.228 qpair failed and we were unable to recover it. 00:29:09.228 [2024-07-25 15:25:01.286619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.228 [2024-07-25 15:25:01.286627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.228 qpair failed and we were unable to recover it. 00:29:09.228 [2024-07-25 15:25:01.287089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.228 [2024-07-25 15:25:01.287097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.228 qpair failed and we were unable to recover it. 00:29:09.228 [2024-07-25 15:25:01.287566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.228 [2024-07-25 15:25:01.287574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.228 qpair failed and we were unable to recover it. 00:29:09.228 [2024-07-25 15:25:01.288051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.228 [2024-07-25 15:25:01.288059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.229 qpair failed and we were unable to recover it. 
00:29:09.229 [2024-07-25 15:25:01.288415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.229 [2024-07-25 15:25:01.288423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.229 qpair failed and we were unable to recover it. 00:29:09.229 [2024-07-25 15:25:01.288880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.229 [2024-07-25 15:25:01.288888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.229 qpair failed and we were unable to recover it. 00:29:09.229 [2024-07-25 15:25:01.289456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.229 [2024-07-25 15:25:01.289484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.229 qpair failed and we were unable to recover it. 00:29:09.229 [2024-07-25 15:25:01.289964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.229 [2024-07-25 15:25:01.289975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.229 qpair failed and we were unable to recover it. 00:29:09.229 [2024-07-25 15:25:01.290502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.229 [2024-07-25 15:25:01.290531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.229 qpair failed and we were unable to recover it. 
00:29:09.229 [2024-07-25 15:25:01.290983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.229 [2024-07-25 15:25:01.290993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.229 qpair failed and we were unable to recover it. 00:29:09.229 [2024-07-25 15:25:01.291580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.229 [2024-07-25 15:25:01.291609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.229 qpair failed and we were unable to recover it. 00:29:09.229 [2024-07-25 15:25:01.292109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.229 [2024-07-25 15:25:01.292119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.229 qpair failed and we were unable to recover it. 00:29:09.229 [2024-07-25 15:25:01.292421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.229 [2024-07-25 15:25:01.292429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.229 qpair failed and we were unable to recover it. 00:29:09.229 [2024-07-25 15:25:01.292884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.229 [2024-07-25 15:25:01.292892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.229 qpair failed and we were unable to recover it. 
00:29:09.229 [2024-07-25 15:25:01.293432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.229 [2024-07-25 15:25:01.293461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.229 qpair failed and we were unable to recover it. 00:29:09.229 [2024-07-25 15:25:01.293904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.229 [2024-07-25 15:25:01.293913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.229 qpair failed and we were unable to recover it. 00:29:09.229 [2024-07-25 15:25:01.294364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.229 [2024-07-25 15:25:01.294373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.229 qpair failed and we were unable to recover it. 00:29:09.229 [2024-07-25 15:25:01.294821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.229 [2024-07-25 15:25:01.294829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.229 qpair failed and we were unable to recover it. 00:29:09.229 [2024-07-25 15:25:01.295174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.229 [2024-07-25 15:25:01.295182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.229 qpair failed and we were unable to recover it. 
00:29:09.229 [2024-07-25 15:25:01.295676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.229 [2024-07-25 15:25:01.295687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.229 qpair failed and we were unable to recover it. 00:29:09.229 [2024-07-25 15:25:01.296088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.229 [2024-07-25 15:25:01.296096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.229 qpair failed and we were unable to recover it. 00:29:09.229 [2024-07-25 15:25:01.296555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.229 [2024-07-25 15:25:01.296563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.229 qpair failed and we were unable to recover it. 00:29:09.229 [2024-07-25 15:25:01.297028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.229 [2024-07-25 15:25:01.297036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.229 qpair failed and we were unable to recover it. 00:29:09.229 [2024-07-25 15:25:01.297567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.229 [2024-07-25 15:25:01.297597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.229 qpair failed and we were unable to recover it. 
00:29:09.229 [2024-07-25 15:25:01.298051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.229 [2024-07-25 15:25:01.298061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.229 qpair failed and we were unable to recover it. 00:29:09.229 [2024-07-25 15:25:01.298622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.229 [2024-07-25 15:25:01.298650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.229 qpair failed and we were unable to recover it. 00:29:09.229 [2024-07-25 15:25:01.299196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.229 [2024-07-25 15:25:01.299212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.229 qpair failed and we were unable to recover it. 00:29:09.229 [2024-07-25 15:25:01.299762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.229 [2024-07-25 15:25:01.299791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.229 qpair failed and we were unable to recover it. 00:29:09.229 [2024-07-25 15:25:01.300424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.229 [2024-07-25 15:25:01.300453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.229 qpair failed and we were unable to recover it. 
00:29:09.229 [2024-07-25 15:25:01.300809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.229 [2024-07-25 15:25:01.300819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.229 qpair failed and we were unable to recover it. 00:29:09.229 [2024-07-25 15:25:01.301415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.229 [2024-07-25 15:25:01.301444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.229 qpair failed and we were unable to recover it. 00:29:09.229 [2024-07-25 15:25:01.301929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.229 [2024-07-25 15:25:01.301939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.229 qpair failed and we were unable to recover it. 00:29:09.229 [2024-07-25 15:25:01.302162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.230 [2024-07-25 15:25:01.302174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.230 qpair failed and we were unable to recover it. 00:29:09.230 [2024-07-25 15:25:01.302671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.230 [2024-07-25 15:25:01.302680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.230 qpair failed and we were unable to recover it. 
00:29:09.230 [2024-07-25 15:25:01.303152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.230 [2024-07-25 15:25:01.303163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.230 qpair failed and we were unable to recover it. 00:29:09.230 [2024-07-25 15:25:01.303532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.230 [2024-07-25 15:25:01.303540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.230 qpair failed and we were unable to recover it. 00:29:09.230 [2024-07-25 15:25:01.303992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.230 [2024-07-25 15:25:01.304000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.230 qpair failed and we were unable to recover it. 00:29:09.230 [2024-07-25 15:25:01.304564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.230 [2024-07-25 15:25:01.304593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.230 qpair failed and we were unable to recover it. 00:29:09.230 [2024-07-25 15:25:01.305072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.230 [2024-07-25 15:25:01.305082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.230 qpair failed and we were unable to recover it. 
00:29:09.230 [2024-07-25 15:25:01.305659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.230 [2024-07-25 15:25:01.305688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.230 qpair failed and we were unable to recover it. 00:29:09.230 [2024-07-25 15:25:01.306151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.230 [2024-07-25 15:25:01.306161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.230 qpair failed and we were unable to recover it. 00:29:09.230 [2024-07-25 15:25:01.306702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.230 [2024-07-25 15:25:01.306731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.230 qpair failed and we were unable to recover it. 00:29:09.230 [2024-07-25 15:25:01.306979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.230 [2024-07-25 15:25:01.306992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.230 qpair failed and we were unable to recover it. 00:29:09.230 [2024-07-25 15:25:01.307557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.230 [2024-07-25 15:25:01.307586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.230 qpair failed and we were unable to recover it. 
00:29:09.230 [2024-07-25 15:25:01.308072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.230 [2024-07-25 15:25:01.308083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.230 qpair failed and we were unable to recover it. 00:29:09.230 [2024-07-25 15:25:01.308553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.230 [2024-07-25 15:25:01.308583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.230 qpair failed and we were unable to recover it. 00:29:09.230 [2024-07-25 15:25:01.309081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.230 [2024-07-25 15:25:01.309091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.230 qpair failed and we were unable to recover it. 00:29:09.230 [2024-07-25 15:25:01.309557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.230 [2024-07-25 15:25:01.309566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.230 qpair failed and we were unable to recover it. 00:29:09.230 [2024-07-25 15:25:01.310021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.230 [2024-07-25 15:25:01.310029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.230 qpair failed and we were unable to recover it. 
00:29:09.230 [2024-07-25 15:25:01.310569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.230 [2024-07-25 15:25:01.310598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.230 qpair failed and we were unable to recover it. 00:29:09.230 [2024-07-25 15:25:01.310807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.230 [2024-07-25 15:25:01.310819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.230 qpair failed and we were unable to recover it. 00:29:09.230 [2024-07-25 15:25:01.311287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.230 [2024-07-25 15:25:01.311296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.230 qpair failed and we were unable to recover it. 00:29:09.230 [2024-07-25 15:25:01.311666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.230 [2024-07-25 15:25:01.311675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.230 qpair failed and we were unable to recover it. 00:29:09.230 [2024-07-25 15:25:01.312132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.230 [2024-07-25 15:25:01.312140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.230 qpair failed and we were unable to recover it. 
00:29:09.230 [2024-07-25 15:25:01.312329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.230 [2024-07-25 15:25:01.312339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.230 qpair failed and we were unable to recover it. 00:29:09.230 [2024-07-25 15:25:01.312716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.230 [2024-07-25 15:25:01.312724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.230 qpair failed and we were unable to recover it. 00:29:09.230 [2024-07-25 15:25:01.312952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.230 [2024-07-25 15:25:01.312962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.230 qpair failed and we were unable to recover it. 00:29:09.230 [2024-07-25 15:25:01.313447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.230 [2024-07-25 15:25:01.313456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.230 qpair failed and we were unable to recover it. 00:29:09.230 [2024-07-25 15:25:01.313876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.230 [2024-07-25 15:25:01.313884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.230 qpair failed and we were unable to recover it. 
00:29:09.230 [2024-07-25 15:25:01.314362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.230 [2024-07-25 15:25:01.314371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.230 qpair failed and we were unable to recover it. 00:29:09.230 [2024-07-25 15:25:01.314798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.230 [2024-07-25 15:25:01.314807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.230 qpair failed and we were unable to recover it. 00:29:09.230 [2024-07-25 15:25:01.315251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.230 [2024-07-25 15:25:01.315259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.230 qpair failed and we were unable to recover it. 00:29:09.230 [2024-07-25 15:25:01.315727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.230 [2024-07-25 15:25:01.315735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.230 qpair failed and we were unable to recover it. 00:29:09.230 [2024-07-25 15:25:01.316187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.230 [2024-07-25 15:25:01.316195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.230 qpair failed and we were unable to recover it. 
00:29:09.230 [2024-07-25 15:25:01.316651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.230 [2024-07-25 15:25:01.316659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.230 qpair failed and we were unable to recover it. 00:29:09.230 [2024-07-25 15:25:01.317115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.230 [2024-07-25 15:25:01.317123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.230 qpair failed and we were unable to recover it. 00:29:09.230 [2024-07-25 15:25:01.317529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.230 [2024-07-25 15:25:01.317538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.230 qpair failed and we were unable to recover it. 00:29:09.231 [2024-07-25 15:25:01.318015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.231 [2024-07-25 15:25:01.318023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.231 qpair failed and we were unable to recover it. 00:29:09.231 [2024-07-25 15:25:01.318475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.231 [2024-07-25 15:25:01.318505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.231 qpair failed and we were unable to recover it. 
00:29:09.231 [2024-07-25 15:25:01.319018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.231 [2024-07-25 15:25:01.319028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.231 qpair failed and we were unable to recover it. 00:29:09.231 [2024-07-25 15:25:01.319529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.231 [2024-07-25 15:25:01.319558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.231 qpair failed and we were unable to recover it. 00:29:09.231 [2024-07-25 15:25:01.320033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.231 [2024-07-25 15:25:01.320043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.231 qpair failed and we were unable to recover it. 00:29:09.231 [2024-07-25 15:25:01.320626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.231 [2024-07-25 15:25:01.320655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.231 qpair failed and we were unable to recover it. 00:29:09.231 [2024-07-25 15:25:01.321085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.231 [2024-07-25 15:25:01.321095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.231 qpair failed and we were unable to recover it. 
00:29:09.231 [2024-07-25 15:25:01.321665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.231 [2024-07-25 15:25:01.321697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.231 qpair failed and we were unable to recover it. 00:29:09.231 [2024-07-25 15:25:01.322094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.231 [2024-07-25 15:25:01.322104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.231 qpair failed and we were unable to recover it. 00:29:09.231 [2024-07-25 15:25:01.322566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.231 [2024-07-25 15:25:01.322574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.231 qpair failed and we were unable to recover it. 00:29:09.231 [2024-07-25 15:25:01.323033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.231 [2024-07-25 15:25:01.323041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.231 qpair failed and we were unable to recover it. 00:29:09.231 [2024-07-25 15:25:01.323526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.231 [2024-07-25 15:25:01.323556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.231 qpair failed and we were unable to recover it. 
00:29:09.231 [2024-07-25 15:25:01.323909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.231 [2024-07-25 15:25:01.323918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.231 qpair failed and we were unable to recover it. 00:29:09.231 [2024-07-25 15:25:01.324391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.231 [2024-07-25 15:25:01.324420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.231 qpair failed and we were unable to recover it. 00:29:09.231 [2024-07-25 15:25:01.324897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.231 [2024-07-25 15:25:01.324908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.231 qpair failed and we were unable to recover it. 00:29:09.231 [2024-07-25 15:25:01.325474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.231 [2024-07-25 15:25:01.325503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.231 qpair failed and we were unable to recover it. 00:29:09.231 [2024-07-25 15:25:01.325858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.231 [2024-07-25 15:25:01.325868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.231 qpair failed and we were unable to recover it. 
00:29:09.231 [2024-07-25 15:25:01.326316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.231 [2024-07-25 15:25:01.326325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.231 qpair failed and we were unable to recover it. 00:29:09.231 [2024-07-25 15:25:01.326783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.231 [2024-07-25 15:25:01.326791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.231 qpair failed and we were unable to recover it. 00:29:09.231 [2024-07-25 15:25:01.327262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.231 [2024-07-25 15:25:01.327270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.231 qpair failed and we were unable to recover it. 00:29:09.231 [2024-07-25 15:25:01.327741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.231 [2024-07-25 15:25:01.327749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.231 qpair failed and we were unable to recover it. 00:29:09.231 [2024-07-25 15:25:01.328113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.231 [2024-07-25 15:25:01.328121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.231 qpair failed and we were unable to recover it. 
00:29:09.231 [2024-07-25 15:25:01.328529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.231 [2024-07-25 15:25:01.328538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.231 qpair failed and we were unable to recover it. 00:29:09.231 [2024-07-25 15:25:01.329016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.231 [2024-07-25 15:25:01.329025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.231 qpair failed and we were unable to recover it. 00:29:09.231 [2024-07-25 15:25:01.329490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.231 [2024-07-25 15:25:01.329498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.231 qpair failed and we were unable to recover it. 00:29:09.231 [2024-07-25 15:25:01.329950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.231 [2024-07-25 15:25:01.329958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.231 qpair failed and we were unable to recover it. 00:29:09.231 [2024-07-25 15:25:01.330511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.231 [2024-07-25 15:25:01.330540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.231 qpair failed and we were unable to recover it. 
00:29:09.231 [2024-07-25 15:25:01.331012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.231 [2024-07-25 15:25:01.331022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.231 qpair failed and we were unable to recover it. 00:29:09.231 [2024-07-25 15:25:01.331502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.231 [2024-07-25 15:25:01.331531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.231 qpair failed and we were unable to recover it. 00:29:09.231 [2024-07-25 15:25:01.332083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.231 [2024-07-25 15:25:01.332092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.231 qpair failed and we were unable to recover it. 00:29:09.231 [2024-07-25 15:25:01.332565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.231 [2024-07-25 15:25:01.332573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.231 qpair failed and we were unable to recover it. 00:29:09.231 [2024-07-25 15:25:01.333031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.231 [2024-07-25 15:25:01.333041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.231 qpair failed and we were unable to recover it. 
00:29:09.231 [2024-07-25 15:25:01.333535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.231 [2024-07-25 15:25:01.333564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.231 qpair failed and we were unable to recover it. 00:29:09.231 [2024-07-25 15:25:01.333909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.231 [2024-07-25 15:25:01.333919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.231 qpair failed and we were unable to recover it. 00:29:09.231 [2024-07-25 15:25:01.334541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.232 [2024-07-25 15:25:01.334572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.232 qpair failed and we were unable to recover it. 00:29:09.232 [2024-07-25 15:25:01.335051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.232 [2024-07-25 15:25:01.335061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.232 qpair failed and we were unable to recover it. 00:29:09.232 [2024-07-25 15:25:01.335622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.232 [2024-07-25 15:25:01.335651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.232 qpair failed and we were unable to recover it. 
00:29:09.232 [2024-07-25 15:25:01.335997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.232 [2024-07-25 15:25:01.336007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.232 qpair failed and we were unable to recover it. 00:29:09.232 [2024-07-25 15:25:01.336487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.232 [2024-07-25 15:25:01.336515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.232 qpair failed and we were unable to recover it. 00:29:09.232 [2024-07-25 15:25:01.336982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.232 [2024-07-25 15:25:01.336991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.232 qpair failed and we were unable to recover it. 00:29:09.232 [2024-07-25 15:25:01.337483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.232 [2024-07-25 15:25:01.337512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.232 qpair failed and we were unable to recover it. 00:29:09.232 [2024-07-25 15:25:01.337873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.232 [2024-07-25 15:25:01.337884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.232 qpair failed and we were unable to recover it. 
00:29:09.232 [2024-07-25 15:25:01.338475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.232 [2024-07-25 15:25:01.338504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.232 qpair failed and we were unable to recover it. 00:29:09.232 [2024-07-25 15:25:01.338833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.232 [2024-07-25 15:25:01.338843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.232 qpair failed and we were unable to recover it. 00:29:09.232 [2024-07-25 15:25:01.339314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.232 [2024-07-25 15:25:01.339322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.232 qpair failed and we were unable to recover it. 00:29:09.232 [2024-07-25 15:25:01.339796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.232 [2024-07-25 15:25:01.339804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.232 qpair failed and we were unable to recover it. 00:29:09.232 [2024-07-25 15:25:01.340175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.232 [2024-07-25 15:25:01.340182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.232 qpair failed and we were unable to recover it. 
00:29:09.232 [2024-07-25 15:25:01.340529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.232 [2024-07-25 15:25:01.340541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.232 qpair failed and we were unable to recover it. 00:29:09.232 [2024-07-25 15:25:01.340989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.232 [2024-07-25 15:25:01.340997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.232 qpair failed and we were unable to recover it. 00:29:09.232 [2024-07-25 15:25:01.341481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.232 [2024-07-25 15:25:01.341510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.232 qpair failed and we were unable to recover it. 00:29:09.232 [2024-07-25 15:25:01.341977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.232 [2024-07-25 15:25:01.341986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.232 qpair failed and we were unable to recover it. 00:29:09.232 [2024-07-25 15:25:01.342472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.232 [2024-07-25 15:25:01.342501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.232 qpair failed and we were unable to recover it. 
00:29:09.232 [2024-07-25 15:25:01.342962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.232 [2024-07-25 15:25:01.342971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.232 qpair failed and we were unable to recover it. 00:29:09.232 [2024-07-25 15:25:01.343449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.232 [2024-07-25 15:25:01.343478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.232 qpair failed and we were unable to recover it. 00:29:09.232 [2024-07-25 15:25:01.343951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.232 [2024-07-25 15:25:01.343961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.232 qpair failed and we were unable to recover it. 00:29:09.232 [2024-07-25 15:25:01.344518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.232 [2024-07-25 15:25:01.344546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.232 qpair failed and we were unable to recover it. 00:29:09.232 [2024-07-25 15:25:01.344890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.232 [2024-07-25 15:25:01.344901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.232 qpair failed and we were unable to recover it. 
00:29:09.232 [2024-07-25 15:25:01.345330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.232 [2024-07-25 15:25:01.345339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.232 qpair failed and we were unable to recover it. 00:29:09.232 [2024-07-25 15:25:01.345819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.232 [2024-07-25 15:25:01.345828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.232 qpair failed and we were unable to recover it. 00:29:09.232 [2024-07-25 15:25:01.346315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.232 [2024-07-25 15:25:01.346323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.232 qpair failed and we were unable to recover it. 00:29:09.232 [2024-07-25 15:25:01.346789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.232 [2024-07-25 15:25:01.346798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.232 qpair failed and we were unable to recover it. 00:29:09.232 [2024-07-25 15:25:01.347249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.232 [2024-07-25 15:25:01.347258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.232 qpair failed and we were unable to recover it. 
00:29:09.232 [2024-07-25 15:25:01.347597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.232 [2024-07-25 15:25:01.347605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.232 qpair failed and we were unable to recover it. 00:29:09.232 [2024-07-25 15:25:01.347931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.232 [2024-07-25 15:25:01.347941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.232 qpair failed and we were unable to recover it. 00:29:09.232 [2024-07-25 15:25:01.348419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.232 [2024-07-25 15:25:01.348428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.232 qpair failed and we were unable to recover it. 00:29:09.232 [2024-07-25 15:25:01.348885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.232 [2024-07-25 15:25:01.348894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.232 qpair failed and we were unable to recover it. 00:29:09.232 [2024-07-25 15:25:01.349362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.232 [2024-07-25 15:25:01.349371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.232 qpair failed and we were unable to recover it. 
00:29:09.232 [2024-07-25 15:25:01.349840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.232 [2024-07-25 15:25:01.349848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.232 qpair failed and we were unable to recover it. 00:29:09.232 [2024-07-25 15:25:01.350206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.232 [2024-07-25 15:25:01.350215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.232 qpair failed and we were unable to recover it. 00:29:09.232 [2024-07-25 15:25:01.350564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.232 [2024-07-25 15:25:01.350573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.232 qpair failed and we were unable to recover it. 00:29:09.233 [2024-07-25 15:25:01.351030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.233 [2024-07-25 15:25:01.351038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.233 qpair failed and we were unable to recover it. 00:29:09.233 [2024-07-25 15:25:01.351595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.233 [2024-07-25 15:25:01.351624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.233 qpair failed and we were unable to recover it. 
00:29:09.233 [2024-07-25 15:25:01.352074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.233 [2024-07-25 15:25:01.352084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.233 qpair failed and we were unable to recover it. 00:29:09.233 [2024-07-25 15:25:01.352562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.233 [2024-07-25 15:25:01.352571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.233 qpair failed and we were unable to recover it. 00:29:09.233 [2024-07-25 15:25:01.352895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.233 [2024-07-25 15:25:01.352904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.233 qpair failed and we were unable to recover it. 00:29:09.233 [2024-07-25 15:25:01.353404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.233 [2024-07-25 15:25:01.353433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.233 qpair failed and we were unable to recover it. 00:29:09.233 [2024-07-25 15:25:01.353872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.233 [2024-07-25 15:25:01.353881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.233 qpair failed and we were unable to recover it. 
00:29:09.233 [2024-07-25 15:25:01.354208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.233 [2024-07-25 15:25:01.354218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.233 qpair failed and we were unable to recover it. 00:29:09.233 [2024-07-25 15:25:01.354445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.233 [2024-07-25 15:25:01.354457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.233 qpair failed and we were unable to recover it. 00:29:09.233 [2024-07-25 15:25:01.354941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.233 [2024-07-25 15:25:01.354950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.233 qpair failed and we were unable to recover it. 00:29:09.233 [2024-07-25 15:25:01.355426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.233 [2024-07-25 15:25:01.355456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.233 qpair failed and we were unable to recover it. 00:29:09.233 [2024-07-25 15:25:01.355916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.233 [2024-07-25 15:25:01.355925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.233 qpair failed and we were unable to recover it. 
00:29:09.233 [2024-07-25 15:25:01.356496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.233 [2024-07-25 15:25:01.356524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.233 qpair failed and we were unable to recover it. 00:29:09.233 [2024-07-25 15:25:01.357049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.233 [2024-07-25 15:25:01.357059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.233 qpair failed and we were unable to recover it. 00:29:09.233 [2024-07-25 15:25:01.357604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.233 [2024-07-25 15:25:01.357633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.233 qpair failed and we were unable to recover it. 00:29:09.233 [2024-07-25 15:25:01.358002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.233 [2024-07-25 15:25:01.358012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.233 qpair failed and we were unable to recover it. 00:29:09.233 [2024-07-25 15:25:01.358564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.233 [2024-07-25 15:25:01.358593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.233 qpair failed and we were unable to recover it. 
00:29:09.233 [2024-07-25 15:25:01.359101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.233 [2024-07-25 15:25:01.359114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.233 qpair failed and we were unable to recover it. 00:29:09.233 [2024-07-25 15:25:01.359529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.233 [2024-07-25 15:25:01.359538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.233 qpair failed and we were unable to recover it. 00:29:09.233 [2024-07-25 15:25:01.359995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.233 [2024-07-25 15:25:01.360003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.233 qpair failed and we were unable to recover it. 00:29:09.233 [2024-07-25 15:25:01.360431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.233 [2024-07-25 15:25:01.360460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.233 qpair failed and we were unable to recover it. 00:29:09.233 [2024-07-25 15:25:01.360925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.233 [2024-07-25 15:25:01.360935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.233 qpair failed and we were unable to recover it. 
00:29:09.233 [2024-07-25 15:25:01.361507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.233 [2024-07-25 15:25:01.361536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.233 qpair failed and we were unable to recover it. 00:29:09.233 [2024-07-25 15:25:01.361879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.233 [2024-07-25 15:25:01.361890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.233 qpair failed and we were unable to recover it. 00:29:09.233 [2024-07-25 15:25:01.362471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.233 [2024-07-25 15:25:01.362500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.233 qpair failed and we were unable to recover it. 00:29:09.233 [2024-07-25 15:25:01.363047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.233 [2024-07-25 15:25:01.363057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.233 qpair failed and we were unable to recover it. 00:29:09.233 [2024-07-25 15:25:01.363583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.233 [2024-07-25 15:25:01.363612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.233 qpair failed and we were unable to recover it. 
00:29:09.233 [2024-07-25 15:25:01.364115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.233 [2024-07-25 15:25:01.364126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.233 qpair failed and we were unable to recover it. 00:29:09.233 [2024-07-25 15:25:01.364564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.233 [2024-07-25 15:25:01.364573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.233 qpair failed and we were unable to recover it. 00:29:09.233 [2024-07-25 15:25:01.364916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.233 [2024-07-25 15:25:01.364925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.233 qpair failed and we were unable to recover it. 00:29:09.233 [2024-07-25 15:25:01.365438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.233 [2024-07-25 15:25:01.365468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.233 qpair failed and we were unable to recover it. 00:29:09.233 [2024-07-25 15:25:01.365983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.233 [2024-07-25 15:25:01.365993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.233 qpair failed and we were unable to recover it. 
00:29:09.233 [2024-07-25 15:25:01.366544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.233 [2024-07-25 15:25:01.366573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.233 qpair failed and we were unable to recover it. 00:29:09.233 [2024-07-25 15:25:01.367039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.233 [2024-07-25 15:25:01.367048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.233 qpair failed and we were unable to recover it. 00:29:09.233 [2024-07-25 15:25:01.367534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.233 [2024-07-25 15:25:01.367563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.233 qpair failed and we were unable to recover it. 00:29:09.233 [2024-07-25 15:25:01.368030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.234 [2024-07-25 15:25:01.368040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.234 qpair failed and we were unable to recover it. 00:29:09.234 [2024-07-25 15:25:01.368426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.234 [2024-07-25 15:25:01.368455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.234 qpair failed and we were unable to recover it. 
00:29:09.234 [2024-07-25 15:25:01.368935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.234 [2024-07-25 15:25:01.368945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.234 qpair failed and we were unable to recover it. 00:29:09.234 [2024-07-25 15:25:01.369487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.234 [2024-07-25 15:25:01.369515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.234 qpair failed and we were unable to recover it. 00:29:09.234 [2024-07-25 15:25:01.369852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.234 [2024-07-25 15:25:01.369862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.234 qpair failed and we were unable to recover it. 00:29:09.234 [2024-07-25 15:25:01.370410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.234 [2024-07-25 15:25:01.370439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.234 qpair failed and we were unable to recover it. 00:29:09.234 [2024-07-25 15:25:01.370809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.234 [2024-07-25 15:25:01.370819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.234 qpair failed and we were unable to recover it. 
00:29:09.234 [2024-07-25 15:25:01.371307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.234 [2024-07-25 15:25:01.371316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.234 qpair failed and we were unable to recover it. 00:29:09.234 [2024-07-25 15:25:01.371790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.234 [2024-07-25 15:25:01.371798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.234 qpair failed and we were unable to recover it. 00:29:09.234 [2024-07-25 15:25:01.372269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.234 [2024-07-25 15:25:01.372278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.234 qpair failed and we were unable to recover it. 00:29:09.234 [2024-07-25 15:25:01.372726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.234 [2024-07-25 15:25:01.372735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.234 qpair failed and we were unable to recover it. 00:29:09.234 [2024-07-25 15:25:01.373176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.234 [2024-07-25 15:25:01.373184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.234 qpair failed and we were unable to recover it. 
00:29:09.234 [2024-07-25 15:25:01.373719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.234 [2024-07-25 15:25:01.373728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.234 qpair failed and we were unable to recover it. 00:29:09.234 [2024-07-25 15:25:01.373948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.234 [2024-07-25 15:25:01.373962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.234 qpair failed and we were unable to recover it. 00:29:09.234 [2024-07-25 15:25:01.374318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.234 [2024-07-25 15:25:01.374326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.234 qpair failed and we were unable to recover it. 00:29:09.234 [2024-07-25 15:25:01.374805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.234 [2024-07-25 15:25:01.374813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.234 qpair failed and we were unable to recover it. 00:29:09.234 [2024-07-25 15:25:01.375257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.234 [2024-07-25 15:25:01.375266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.234 qpair failed and we were unable to recover it. 
00:29:09.234 [2024-07-25 15:25:01.375604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.234 [2024-07-25 15:25:01.375611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.234 qpair failed and we were unable to recover it.
00:29:09.234 [2024-07-25 15:25:01.376060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.234 [2024-07-25 15:25:01.376068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.234 qpair failed and we were unable to recover it.
00:29:09.234 [2024-07-25 15:25:01.376436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.234 [2024-07-25 15:25:01.376445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.234 qpair failed and we were unable to recover it.
00:29:09.234 [2024-07-25 15:25:01.376907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.234 [2024-07-25 15:25:01.376915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.234 qpair failed and we were unable to recover it.
00:29:09.234 [2024-07-25 15:25:01.377382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.234 [2024-07-25 15:25:01.377391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.234 qpair failed and we were unable to recover it.
00:29:09.234 [2024-07-25 15:25:01.377765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.234 [2024-07-25 15:25:01.377776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.234 qpair failed and we were unable to recover it.
00:29:09.234 [2024-07-25 15:25:01.378264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.234 [2024-07-25 15:25:01.378272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.234 qpair failed and we were unable to recover it.
00:29:09.234 [2024-07-25 15:25:01.378715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.234 [2024-07-25 15:25:01.378724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.234 qpair failed and we were unable to recover it.
00:29:09.234 [2024-07-25 15:25:01.379179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.234 [2024-07-25 15:25:01.379187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.234 qpair failed and we were unable to recover it.
00:29:09.234 [2024-07-25 15:25:01.379574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.234 [2024-07-25 15:25:01.379583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.234 qpair failed and we were unable to recover it.
00:29:09.234 [2024-07-25 15:25:01.380051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.234 [2024-07-25 15:25:01.380059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.235 qpair failed and we were unable to recover it.
00:29:09.235 [2024-07-25 15:25:01.380608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.235 [2024-07-25 15:25:01.380637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.235 qpair failed and we were unable to recover it.
00:29:09.235 [2024-07-25 15:25:01.381089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.235 [2024-07-25 15:25:01.381099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.235 qpair failed and we were unable to recover it.
00:29:09.235 [2024-07-25 15:25:01.381521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.235 [2024-07-25 15:25:01.381529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.235 qpair failed and we were unable to recover it.
00:29:09.235 [2024-07-25 15:25:01.382018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.235 [2024-07-25 15:25:01.382027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.235 qpair failed and we were unable to recover it.
00:29:09.235 [2024-07-25 15:25:01.382567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.235 [2024-07-25 15:25:01.382596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.235 qpair failed and we were unable to recover it.
00:29:09.235 [2024-07-25 15:25:01.383089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.235 [2024-07-25 15:25:01.383099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.235 qpair failed and we were unable to recover it.
00:29:09.235 [2024-07-25 15:25:01.383576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.235 [2024-07-25 15:25:01.383584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.235 qpair failed and we were unable to recover it.
00:29:09.235 [2024-07-25 15:25:01.383947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.235 [2024-07-25 15:25:01.383956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.235 qpair failed and we were unable to recover it.
00:29:09.235 [2024-07-25 15:25:01.384513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.235 [2024-07-25 15:25:01.384542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.235 qpair failed and we were unable to recover it.
00:29:09.235 [2024-07-25 15:25:01.385001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.235 [2024-07-25 15:25:01.385010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.235 qpair failed and we were unable to recover it.
00:29:09.235 [2024-07-25 15:25:01.385563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.235 [2024-07-25 15:25:01.385591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.235 qpair failed and we were unable to recover it.
00:29:09.235 [2024-07-25 15:25:01.386068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.235 [2024-07-25 15:25:01.386077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.235 qpair failed and we were unable to recover it.
00:29:09.235 [2024-07-25 15:25:01.386622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.235 [2024-07-25 15:25:01.386651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.235 qpair failed and we were unable to recover it.
00:29:09.235 [2024-07-25 15:25:01.387109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.235 [2024-07-25 15:25:01.387119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.235 qpair failed and we were unable to recover it.
00:29:09.235 [2024-07-25 15:25:01.387654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.235 [2024-07-25 15:25:01.387682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.235 qpair failed and we were unable to recover it.
00:29:09.235 [2024-07-25 15:25:01.388171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.235 [2024-07-25 15:25:01.388181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.235 qpair failed and we were unable to recover it.
00:29:09.235 [2024-07-25 15:25:01.388723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.235 [2024-07-25 15:25:01.388753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.235 qpair failed and we were unable to recover it.
00:29:09.235 [2024-07-25 15:25:01.389215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.235 [2024-07-25 15:25:01.389227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.235 qpair failed and we were unable to recover it.
00:29:09.235 [2024-07-25 15:25:01.389695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.235 [2024-07-25 15:25:01.389704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.235 qpair failed and we were unable to recover it.
00:29:09.235 [2024-07-25 15:25:01.390191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.235 [2024-07-25 15:25:01.390199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.235 qpair failed and we were unable to recover it.
00:29:09.235 [2024-07-25 15:25:01.390657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.235 [2024-07-25 15:25:01.390665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.235 qpair failed and we were unable to recover it.
00:29:09.235 [2024-07-25 15:25:01.391119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.235 [2024-07-25 15:25:01.391127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.235 qpair failed and we were unable to recover it.
00:29:09.235 [2024-07-25 15:25:01.391599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.235 [2024-07-25 15:25:01.391627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.235 qpair failed and we were unable to recover it.
00:29:09.235 [2024-07-25 15:25:01.392106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.235 [2024-07-25 15:25:01.392116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.235 qpair failed and we were unable to recover it.
00:29:09.235 [2024-07-25 15:25:01.392588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.235 [2024-07-25 15:25:01.392597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.235 qpair failed and we were unable to recover it.
00:29:09.235 [2024-07-25 15:25:01.393048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.235 [2024-07-25 15:25:01.393056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.235 qpair failed and we were unable to recover it.
00:29:09.235 [2024-07-25 15:25:01.393603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.235 [2024-07-25 15:25:01.393632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.235 qpair failed and we were unable to recover it.
00:29:09.235 [2024-07-25 15:25:01.394112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.235 [2024-07-25 15:25:01.394122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.235 qpair failed and we were unable to recover it.
00:29:09.235 [2024-07-25 15:25:01.394611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.235 [2024-07-25 15:25:01.394640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.235 qpair failed and we were unable to recover it.
00:29:09.235 [2024-07-25 15:25:01.395099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.235 [2024-07-25 15:25:01.395108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.235 qpair failed and we were unable to recover it.
00:29:09.235 [2024-07-25 15:25:01.395565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.235 [2024-07-25 15:25:01.395574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.235 qpair failed and we were unable to recover it.
00:29:09.235 [2024-07-25 15:25:01.396054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.235 [2024-07-25 15:25:01.396063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.235 qpair failed and we were unable to recover it.
00:29:09.235 [2024-07-25 15:25:01.396608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.235 [2024-07-25 15:25:01.396637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.235 qpair failed and we were unable to recover it.
00:29:09.235 [2024-07-25 15:25:01.397099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.235 [2024-07-25 15:25:01.397109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.235 qpair failed and we were unable to recover it.
00:29:09.236 [2024-07-25 15:25:01.397664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.236 [2024-07-25 15:25:01.397693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.236 qpair failed and we were unable to recover it.
00:29:09.236 [2024-07-25 15:25:01.398189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.236 [2024-07-25 15:25:01.398205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.236 qpair failed and we were unable to recover it.
00:29:09.236 [2024-07-25 15:25:01.398702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.236 [2024-07-25 15:25:01.398710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.236 qpair failed and we were unable to recover it.
00:29:09.236 [2024-07-25 15:25:01.399164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.236 [2024-07-25 15:25:01.399173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.236 qpair failed and we were unable to recover it.
00:29:09.236 [2024-07-25 15:25:01.399718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.236 [2024-07-25 15:25:01.399747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.236 qpair failed and we were unable to recover it.
00:29:09.236 [2024-07-25 15:25:01.400413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.236 [2024-07-25 15:25:01.400441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.236 qpair failed and we were unable to recover it.
00:29:09.236 [2024-07-25 15:25:01.400893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.236 [2024-07-25 15:25:01.400903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.236 qpair failed and we were unable to recover it.
00:29:09.505 [2024-07-25 15:25:01.401425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.505 [2024-07-25 15:25:01.401454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.505 qpair failed and we were unable to recover it.
00:29:09.505 [2024-07-25 15:25:01.401909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.506 [2024-07-25 15:25:01.401919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.506 qpair failed and we were unable to recover it.
00:29:09.506 [2024-07-25 15:25:01.402417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.506 [2024-07-25 15:25:01.402446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.506 qpair failed and we were unable to recover it.
00:29:09.506 [2024-07-25 15:25:01.402944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.506 [2024-07-25 15:25:01.402954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.506 qpair failed and we were unable to recover it.
00:29:09.506 [2024-07-25 15:25:01.403539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.506 [2024-07-25 15:25:01.403568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.506 qpair failed and we were unable to recover it.
00:29:09.506 [2024-07-25 15:25:01.404032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.506 [2024-07-25 15:25:01.404042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.506 qpair failed and we were unable to recover it.
00:29:09.506 [2024-07-25 15:25:01.404613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.506 [2024-07-25 15:25:01.404642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.506 qpair failed and we were unable to recover it.
00:29:09.506 [2024-07-25 15:25:01.404988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.506 [2024-07-25 15:25:01.404998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.506 qpair failed and we were unable to recover it.
00:29:09.506 [2024-07-25 15:25:01.405395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.506 [2024-07-25 15:25:01.405423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.506 qpair failed and we were unable to recover it.
00:29:09.506 [2024-07-25 15:25:01.405869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.506 [2024-07-25 15:25:01.405879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.506 qpair failed and we were unable to recover it.
00:29:09.506 [2024-07-25 15:25:01.406417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.506 [2024-07-25 15:25:01.406446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.506 qpair failed and we were unable to recover it.
00:29:09.506 [2024-07-25 15:25:01.406890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.506 [2024-07-25 15:25:01.406899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.506 qpair failed and we were unable to recover it.
00:29:09.506 [2024-07-25 15:25:01.407359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.506 [2024-07-25 15:25:01.407367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.506 qpair failed and we were unable to recover it.
00:29:09.506 [2024-07-25 15:25:01.407833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.506 [2024-07-25 15:25:01.407842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.506 qpair failed and we were unable to recover it.
00:29:09.506 [2024-07-25 15:25:01.408328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.506 [2024-07-25 15:25:01.408336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.506 qpair failed and we were unable to recover it.
00:29:09.506 [2024-07-25 15:25:01.408790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.506 [2024-07-25 15:25:01.408799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.506 qpair failed and we were unable to recover it.
00:29:09.506 [2024-07-25 15:25:01.409189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.506 [2024-07-25 15:25:01.409197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.506 qpair failed and we were unable to recover it.
00:29:09.506 [2024-07-25 15:25:01.409537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.506 [2024-07-25 15:25:01.409545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.506 qpair failed and we were unable to recover it.
00:29:09.506 [2024-07-25 15:25:01.409733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.506 [2024-07-25 15:25:01.409746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.506 qpair failed and we were unable to recover it.
00:29:09.506 [2024-07-25 15:25:01.410186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.506 [2024-07-25 15:25:01.410195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.506 qpair failed and we were unable to recover it.
00:29:09.506 [2024-07-25 15:25:01.410414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.506 [2024-07-25 15:25:01.410428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.506 qpair failed and we were unable to recover it.
00:29:09.506 [2024-07-25 15:25:01.410893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.506 [2024-07-25 15:25:01.410901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.506 qpair failed and we were unable to recover it.
00:29:09.506 [2024-07-25 15:25:01.411368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.506 [2024-07-25 15:25:01.411376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.506 qpair failed and we were unable to recover it.
00:29:09.506 [2024-07-25 15:25:01.411826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.506 [2024-07-25 15:25:01.411834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.506 qpair failed and we were unable to recover it.
00:29:09.506 [2024-07-25 15:25:01.412282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.506 [2024-07-25 15:25:01.412290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.506 qpair failed and we were unable to recover it.
00:29:09.506 [2024-07-25 15:25:01.412509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.506 [2024-07-25 15:25:01.412518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.506 qpair failed and we were unable to recover it.
00:29:09.506 [2024-07-25 15:25:01.412983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.506 [2024-07-25 15:25:01.412991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.506 qpair failed and we were unable to recover it.
00:29:09.506 [2024-07-25 15:25:01.413464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.506 [2024-07-25 15:25:01.413472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.506 qpair failed and we were unable to recover it.
00:29:09.506 [2024-07-25 15:25:01.413913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.506 [2024-07-25 15:25:01.413921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.506 qpair failed and we were unable to recover it.
00:29:09.506 [2024-07-25 15:25:01.414365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.506 [2024-07-25 15:25:01.414373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.506 qpair failed and we were unable to recover it.
00:29:09.506 [2024-07-25 15:25:01.414841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.506 [2024-07-25 15:25:01.414848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.506 qpair failed and we were unable to recover it.
00:29:09.506 [2024-07-25 15:25:01.415297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.506 [2024-07-25 15:25:01.415307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.506 qpair failed and we were unable to recover it.
00:29:09.506 [2024-07-25 15:25:01.415766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.506 [2024-07-25 15:25:01.415775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.506 qpair failed and we were unable to recover it.
00:29:09.506 [2024-07-25 15:25:01.416267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.506 [2024-07-25 15:25:01.416275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.506 qpair failed and we were unable to recover it.
00:29:09.506 [2024-07-25 15:25:01.416716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.506 [2024-07-25 15:25:01.416724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.506 qpair failed and we were unable to recover it.
00:29:09.506 [2024-07-25 15:25:01.417253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.506 [2024-07-25 15:25:01.417261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.506 qpair failed and we were unable to recover it.
00:29:09.506 [2024-07-25 15:25:01.417723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.506 [2024-07-25 15:25:01.417731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.506 qpair failed and we were unable to recover it.
00:29:09.507 [2024-07-25 15:25:01.418184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.507 [2024-07-25 15:25:01.418191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.507 qpair failed and we were unable to recover it.
00:29:09.507 [2024-07-25 15:25:01.418660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.507 [2024-07-25 15:25:01.418668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.507 qpair failed and we were unable to recover it.
00:29:09.507 [2024-07-25 15:25:01.419113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.507 [2024-07-25 15:25:01.419121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.507 qpair failed and we were unable to recover it.
00:29:09.507 [2024-07-25 15:25:01.419582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.507 [2024-07-25 15:25:01.419592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.507 qpair failed and we were unable to recover it.
00:29:09.507 [2024-07-25 15:25:01.420035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.507 [2024-07-25 15:25:01.420043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.507 qpair failed and we were unable to recover it.
00:29:09.507 [2024-07-25 15:25:01.420613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.507 [2024-07-25 15:25:01.420643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.507 qpair failed and we were unable to recover it.
00:29:09.507 [2024-07-25 15:25:01.421098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.507 [2024-07-25 15:25:01.421107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.507 qpair failed and we were unable to recover it. 00:29:09.507 [2024-07-25 15:25:01.421552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.507 [2024-07-25 15:25:01.421561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.507 qpair failed and we were unable to recover it. 00:29:09.507 [2024-07-25 15:25:01.422009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.507 [2024-07-25 15:25:01.422017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.507 qpair failed and we were unable to recover it. 00:29:09.507 [2024-07-25 15:25:01.422585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.507 [2024-07-25 15:25:01.422614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.507 qpair failed and we were unable to recover it. 00:29:09.507 [2024-07-25 15:25:01.423079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.507 [2024-07-25 15:25:01.423088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.507 qpair failed and we were unable to recover it. 
00:29:09.507 [2024-07-25 15:25:01.423535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.507 [2024-07-25 15:25:01.423544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.507 qpair failed and we were unable to recover it. 00:29:09.507 [2024-07-25 15:25:01.423980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.507 [2024-07-25 15:25:01.423987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.507 qpair failed and we were unable to recover it. 00:29:09.507 [2024-07-25 15:25:01.424552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.507 [2024-07-25 15:25:01.424581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.507 qpair failed and we were unable to recover it. 00:29:09.507 [2024-07-25 15:25:01.425044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.507 [2024-07-25 15:25:01.425053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.507 qpair failed and we were unable to recover it. 00:29:09.507 [2024-07-25 15:25:01.425597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.507 [2024-07-25 15:25:01.425627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.507 qpair failed and we were unable to recover it. 
00:29:09.507 [2024-07-25 15:25:01.426087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.507 [2024-07-25 15:25:01.426096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.507 qpair failed and we were unable to recover it. 00:29:09.507 [2024-07-25 15:25:01.426648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.507 [2024-07-25 15:25:01.426677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.507 qpair failed and we were unable to recover it. 00:29:09.507 [2024-07-25 15:25:01.427137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.507 [2024-07-25 15:25:01.427147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.507 qpair failed and we were unable to recover it. 00:29:09.507 [2024-07-25 15:25:01.427461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.507 [2024-07-25 15:25:01.427471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.507 qpair failed and we were unable to recover it. 00:29:09.507 [2024-07-25 15:25:01.427931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.507 [2024-07-25 15:25:01.427939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.507 qpair failed and we were unable to recover it. 
00:29:09.507 [2024-07-25 15:25:01.428506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.507 [2024-07-25 15:25:01.428535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.507 qpair failed and we were unable to recover it. 00:29:09.507 [2024-07-25 15:25:01.428996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.507 [2024-07-25 15:25:01.429006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.507 qpair failed and we were unable to recover it. 00:29:09.507 [2024-07-25 15:25:01.429553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.507 [2024-07-25 15:25:01.429585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.507 qpair failed and we were unable to recover it. 00:29:09.507 [2024-07-25 15:25:01.430043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.507 [2024-07-25 15:25:01.430053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.507 qpair failed and we were unable to recover it. 00:29:09.507 [2024-07-25 15:25:01.430489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.507 [2024-07-25 15:25:01.430518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.507 qpair failed and we were unable to recover it. 
00:29:09.507 [2024-07-25 15:25:01.430980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.507 [2024-07-25 15:25:01.430990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.507 qpair failed and we were unable to recover it. 00:29:09.507 [2024-07-25 15:25:01.431219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.507 [2024-07-25 15:25:01.431232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.507 qpair failed and we were unable to recover it. 00:29:09.507 [2024-07-25 15:25:01.431429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.507 [2024-07-25 15:25:01.431440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.507 qpair failed and we were unable to recover it. 00:29:09.507 [2024-07-25 15:25:01.431906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.507 [2024-07-25 15:25:01.431914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.507 qpair failed and we were unable to recover it. 00:29:09.507 [2024-07-25 15:25:01.432454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.507 [2024-07-25 15:25:01.432483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.507 qpair failed and we were unable to recover it. 
00:29:09.507 [2024-07-25 15:25:01.432696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.507 [2024-07-25 15:25:01.432708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.507 qpair failed and we were unable to recover it. 00:29:09.507 [2024-07-25 15:25:01.433183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.507 [2024-07-25 15:25:01.433191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.507 qpair failed and we were unable to recover it. 00:29:09.507 [2024-07-25 15:25:01.433543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.507 [2024-07-25 15:25:01.433552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.507 qpair failed and we were unable to recover it. 00:29:09.507 [2024-07-25 15:25:01.433998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.507 [2024-07-25 15:25:01.434006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.507 qpair failed and we were unable to recover it. 00:29:09.507 [2024-07-25 15:25:01.434450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.507 [2024-07-25 15:25:01.434479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.507 qpair failed and we were unable to recover it. 
00:29:09.507 [2024-07-25 15:25:01.434835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.508 [2024-07-25 15:25:01.434844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.508 qpair failed and we were unable to recover it. 00:29:09.508 [2024-07-25 15:25:01.435323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.508 [2024-07-25 15:25:01.435332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.508 qpair failed and we were unable to recover it. 00:29:09.508 [2024-07-25 15:25:01.435781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.508 [2024-07-25 15:25:01.435789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.508 qpair failed and we were unable to recover it. 00:29:09.508 [2024-07-25 15:25:01.436130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.508 [2024-07-25 15:25:01.436138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.508 qpair failed and we were unable to recover it. 00:29:09.508 [2024-07-25 15:25:01.436606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.508 [2024-07-25 15:25:01.436615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.508 qpair failed and we were unable to recover it. 
00:29:09.508 [2024-07-25 15:25:01.437085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.508 [2024-07-25 15:25:01.437094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.508 qpair failed and we were unable to recover it. 00:29:09.508 [2024-07-25 15:25:01.437561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.508 [2024-07-25 15:25:01.437569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.508 qpair failed and we were unable to recover it. 00:29:09.508 [2024-07-25 15:25:01.437909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.508 [2024-07-25 15:25:01.437918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.508 qpair failed and we were unable to recover it. 00:29:09.508 [2024-07-25 15:25:01.438376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.508 [2024-07-25 15:25:01.438384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.508 qpair failed and we were unable to recover it. 00:29:09.508 [2024-07-25 15:25:01.438861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.508 [2024-07-25 15:25:01.438869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.508 qpair failed and we were unable to recover it. 
00:29:09.508 [2024-07-25 15:25:01.439314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.508 [2024-07-25 15:25:01.439322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.508 qpair failed and we were unable to recover it. 00:29:09.508 [2024-07-25 15:25:01.439636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.508 [2024-07-25 15:25:01.439654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.508 qpair failed and we were unable to recover it. 00:29:09.508 [2024-07-25 15:25:01.440104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.508 [2024-07-25 15:25:01.440112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.508 qpair failed and we were unable to recover it. 00:29:09.508 [2024-07-25 15:25:01.440556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.508 [2024-07-25 15:25:01.440564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.508 qpair failed and we were unable to recover it. 00:29:09.508 [2024-07-25 15:25:01.441012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.508 [2024-07-25 15:25:01.441021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.508 qpair failed and we were unable to recover it. 
00:29:09.508 [2024-07-25 15:25:01.441480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.508 [2024-07-25 15:25:01.441488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.508 qpair failed and we were unable to recover it. 00:29:09.508 [2024-07-25 15:25:01.441937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.508 [2024-07-25 15:25:01.441945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.508 qpair failed and we were unable to recover it. 00:29:09.508 [2024-07-25 15:25:01.442504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.508 [2024-07-25 15:25:01.442533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.508 qpair failed and we were unable to recover it. 00:29:09.508 [2024-07-25 15:25:01.442998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.508 [2024-07-25 15:25:01.443008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.508 qpair failed and we were unable to recover it. 00:29:09.508 [2024-07-25 15:25:01.443572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.508 [2024-07-25 15:25:01.443601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.508 qpair failed and we were unable to recover it. 
00:29:09.508 [2024-07-25 15:25:01.444063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.508 [2024-07-25 15:25:01.444073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.508 qpair failed and we were unable to recover it. 00:29:09.508 [2024-07-25 15:25:01.444607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.508 [2024-07-25 15:25:01.444636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.508 qpair failed and we were unable to recover it. 00:29:09.508 [2024-07-25 15:25:01.445094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.508 [2024-07-25 15:25:01.445104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.508 qpair failed and we were unable to recover it. 00:29:09.508 [2024-07-25 15:25:01.445676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.508 [2024-07-25 15:25:01.445705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.508 qpair failed and we were unable to recover it. 00:29:09.508 [2024-07-25 15:25:01.446164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.508 [2024-07-25 15:25:01.446175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.508 qpair failed and we were unable to recover it. 
00:29:09.508 [2024-07-25 15:25:01.446739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.508 [2024-07-25 15:25:01.446768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.508 qpair failed and we were unable to recover it. 00:29:09.508 [2024-07-25 15:25:01.447406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.508 [2024-07-25 15:25:01.447435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.508 qpair failed and we were unable to recover it. 00:29:09.508 [2024-07-25 15:25:01.447895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.508 [2024-07-25 15:25:01.447908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.508 qpair failed and we were unable to recover it. 00:29:09.508 [2024-07-25 15:25:01.448453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.508 [2024-07-25 15:25:01.448482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.508 qpair failed and we were unable to recover it. 00:29:09.508 [2024-07-25 15:25:01.448958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.508 [2024-07-25 15:25:01.448968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.508 qpair failed and we were unable to recover it. 
00:29:09.508 [2024-07-25 15:25:01.449537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.508 [2024-07-25 15:25:01.449567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.508 qpair failed and we were unable to recover it. 00:29:09.508 [2024-07-25 15:25:01.450026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.508 [2024-07-25 15:25:01.450035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.508 qpair failed and we were unable to recover it. 00:29:09.508 [2024-07-25 15:25:01.450591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.508 [2024-07-25 15:25:01.450620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.508 qpair failed and we were unable to recover it. 00:29:09.508 [2024-07-25 15:25:01.451097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.508 [2024-07-25 15:25:01.451107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.508 qpair failed and we were unable to recover it. 00:29:09.508 [2024-07-25 15:25:01.451579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.508 [2024-07-25 15:25:01.451589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.508 qpair failed and we were unable to recover it. 
00:29:09.508 [2024-07-25 15:25:01.451952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.508 [2024-07-25 15:25:01.451961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.508 qpair failed and we were unable to recover it. 00:29:09.508 [2024-07-25 15:25:01.452495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.509 [2024-07-25 15:25:01.452524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.509 qpair failed and we were unable to recover it. 00:29:09.509 [2024-07-25 15:25:01.453004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.509 [2024-07-25 15:25:01.453014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.509 qpair failed and we were unable to recover it. 00:29:09.509 [2024-07-25 15:25:01.453559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.509 [2024-07-25 15:25:01.453588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.509 qpair failed and we were unable to recover it. 00:29:09.509 [2024-07-25 15:25:01.454057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.509 [2024-07-25 15:25:01.454067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.509 qpair failed and we were unable to recover it. 
00:29:09.509 [2024-07-25 15:25:01.454603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.509 [2024-07-25 15:25:01.454633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.509 qpair failed and we were unable to recover it. 00:29:09.509 [2024-07-25 15:25:01.455063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.509 [2024-07-25 15:25:01.455073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.509 qpair failed and we were unable to recover it. 00:29:09.509 [2024-07-25 15:25:01.455607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.509 [2024-07-25 15:25:01.455635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.509 qpair failed and we were unable to recover it. 00:29:09.509 [2024-07-25 15:25:01.456099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.509 [2024-07-25 15:25:01.456109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.509 qpair failed and we were unable to recover it. 00:29:09.509 [2024-07-25 15:25:01.456513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.509 [2024-07-25 15:25:01.456542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.509 qpair failed and we were unable to recover it. 
00:29:09.509 [2024-07-25 15:25:01.457009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.509 [2024-07-25 15:25:01.457019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.509 qpair failed and we were unable to recover it. 00:29:09.509 [2024-07-25 15:25:01.457558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.509 [2024-07-25 15:25:01.457587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.509 qpair failed and we were unable to recover it. 00:29:09.509 [2024-07-25 15:25:01.458048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.509 [2024-07-25 15:25:01.458058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.509 qpair failed and we were unable to recover it. 00:29:09.509 [2024-07-25 15:25:01.458612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.509 [2024-07-25 15:25:01.458641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.509 qpair failed and we were unable to recover it. 00:29:09.509 [2024-07-25 15:25:01.459122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.509 [2024-07-25 15:25:01.459132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.509 qpair failed and we were unable to recover it. 
00:29:09.509 [2024-07-25 15:25:01.459671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.509 [2024-07-25 15:25:01.459701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.509 qpair failed and we were unable to recover it. 00:29:09.509 [2024-07-25 15:25:01.460165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.509 [2024-07-25 15:25:01.460175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.509 qpair failed and we were unable to recover it. 00:29:09.509 [2024-07-25 15:25:01.460629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.509 [2024-07-25 15:25:01.460658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.509 qpair failed and we were unable to recover it. 00:29:09.509 [2024-07-25 15:25:01.461130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.509 [2024-07-25 15:25:01.461139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.509 qpair failed and we were unable to recover it. 00:29:09.509 [2024-07-25 15:25:01.461605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.509 [2024-07-25 15:25:01.461614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.509 qpair failed and we were unable to recover it. 
00:29:09.509 [2024-07-25 15:25:01.462060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.509 [2024-07-25 15:25:01.462068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.509 qpair failed and we were unable to recover it. 00:29:09.509 [2024-07-25 15:25:01.462599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.509 [2024-07-25 15:25:01.462628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.509 qpair failed and we were unable to recover it. 00:29:09.509 [2024-07-25 15:25:01.462850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.509 [2024-07-25 15:25:01.462862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.509 qpair failed and we were unable to recover it. 00:29:09.509 [2024-07-25 15:25:01.463224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.509 [2024-07-25 15:25:01.463241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.509 qpair failed and we were unable to recover it. 00:29:09.509 [2024-07-25 15:25:01.463723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.509 [2024-07-25 15:25:01.463732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.509 qpair failed and we were unable to recover it. 
00:29:09.509 [2024-07-25 15:25:01.464182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.509 [2024-07-25 15:25:01.464190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.509 qpair failed and we were unable to recover it. 00:29:09.509 [2024-07-25 15:25:01.464661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.509 [2024-07-25 15:25:01.464669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.509 qpair failed and we were unable to recover it. 00:29:09.509 [2024-07-25 15:25:01.465190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.509 [2024-07-25 15:25:01.465197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.509 qpair failed and we were unable to recover it. 00:29:09.509 [2024-07-25 15:25:01.465484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.509 [2024-07-25 15:25:01.465513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.509 qpair failed and we were unable to recover it. 00:29:09.509 [2024-07-25 15:25:01.465741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.509 [2024-07-25 15:25:01.465752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.509 qpair failed and we were unable to recover it. 
00:29:09.509 [2024-07-25 15:25:01.466217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.509 [2024-07-25 15:25:01.466226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.509 qpair failed and we were unable to recover it. 00:29:09.509 [2024-07-25 15:25:01.466438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.509 [2024-07-25 15:25:01.466447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.509 qpair failed and we were unable to recover it. 00:29:09.509 [2024-07-25 15:25:01.466951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.509 [2024-07-25 15:25:01.466963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.509 qpair failed and we were unable to recover it. 00:29:09.509 [2024-07-25 15:25:01.467419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.509 [2024-07-25 15:25:01.467428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.509 qpair failed and we were unable to recover it. 00:29:09.509 [2024-07-25 15:25:01.467857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.509 [2024-07-25 15:25:01.467865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.509 qpair failed and we were unable to recover it. 
00:29:09.509 [2024-07-25 15:25:01.468340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.509 [2024-07-25 15:25:01.468348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.509 qpair failed and we were unable to recover it. 00:29:09.509 [2024-07-25 15:25:01.468800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.509 [2024-07-25 15:25:01.468808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.509 qpair failed and we were unable to recover it. 00:29:09.509 [2024-07-25 15:25:01.469258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.510 [2024-07-25 15:25:01.469266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.510 qpair failed and we were unable to recover it. 00:29:09.510 [2024-07-25 15:25:01.469656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.510 [2024-07-25 15:25:01.469665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.510 qpair failed and we were unable to recover it. 00:29:09.510 [2024-07-25 15:25:01.470118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.510 [2024-07-25 15:25:01.470126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.510 qpair failed and we were unable to recover it. 
00:29:09.510 [2024-07-25 15:25:01.470602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.510 [2024-07-25 15:25:01.470610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.510 qpair failed and we were unable to recover it. 00:29:09.510 [2024-07-25 15:25:01.471047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.510 [2024-07-25 15:25:01.471055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.510 qpair failed and we were unable to recover it. 00:29:09.510 [2024-07-25 15:25:01.471551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.510 [2024-07-25 15:25:01.471559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.510 qpair failed and we were unable to recover it. 00:29:09.510 [2024-07-25 15:25:01.472019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.510 [2024-07-25 15:25:01.472027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.510 qpair failed and we were unable to recover it. 00:29:09.510 [2024-07-25 15:25:01.472596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.510 [2024-07-25 15:25:01.472625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.510 qpair failed and we were unable to recover it. 
00:29:09.510 [2024-07-25 15:25:01.473091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.510 [2024-07-25 15:25:01.473101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.510 qpair failed and we were unable to recover it. 00:29:09.510 [2024-07-25 15:25:01.473552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.510 [2024-07-25 15:25:01.473561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.510 qpair failed and we were unable to recover it. 00:29:09.510 [2024-07-25 15:25:01.474009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.510 [2024-07-25 15:25:01.474017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.510 qpair failed and we were unable to recover it. 00:29:09.510 [2024-07-25 15:25:01.474555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.510 [2024-07-25 15:25:01.474584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.510 qpair failed and we were unable to recover it. 00:29:09.510 [2024-07-25 15:25:01.475034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.510 [2024-07-25 15:25:01.475043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.510 qpair failed and we were unable to recover it. 
00:29:09.510 [2024-07-25 15:25:01.475618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.510 [2024-07-25 15:25:01.475647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.510 qpair failed and we were unable to recover it. 00:29:09.510 [2024-07-25 15:25:01.476110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.510 [2024-07-25 15:25:01.476120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.510 qpair failed and we were unable to recover it. 00:29:09.510 [2024-07-25 15:25:01.476669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.510 [2024-07-25 15:25:01.476697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.510 qpair failed and we were unable to recover it. 00:29:09.510 [2024-07-25 15:25:01.477159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.510 [2024-07-25 15:25:01.477170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.510 qpair failed and we were unable to recover it. 00:29:09.510 [2024-07-25 15:25:01.477736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.510 [2024-07-25 15:25:01.477765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.510 qpair failed and we were unable to recover it. 
00:29:09.510 [2024-07-25 15:25:01.478416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.510 [2024-07-25 15:25:01.478445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.510 qpair failed and we were unable to recover it. 00:29:09.510 [2024-07-25 15:25:01.478906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.510 [2024-07-25 15:25:01.478916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.510 qpair failed and we were unable to recover it. 00:29:09.510 [2024-07-25 15:25:01.479475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.510 [2024-07-25 15:25:01.479504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.510 qpair failed and we were unable to recover it. 00:29:09.510 [2024-07-25 15:25:01.479862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.510 [2024-07-25 15:25:01.479872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.510 qpair failed and we were unable to recover it. 00:29:09.510 [2024-07-25 15:25:01.480101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.510 [2024-07-25 15:25:01.480115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.510 qpair failed and we were unable to recover it. 
00:29:09.510 [2024-07-25 15:25:01.480546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.510 [2024-07-25 15:25:01.480554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.510 qpair failed and we were unable to recover it. 00:29:09.510 [2024-07-25 15:25:01.480873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.510 [2024-07-25 15:25:01.480883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.510 qpair failed and we were unable to recover it. 00:29:09.510 [2024-07-25 15:25:01.481249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.510 [2024-07-25 15:25:01.481257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.510 qpair failed and we were unable to recover it. 00:29:09.510 [2024-07-25 15:25:01.481736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.510 [2024-07-25 15:25:01.481744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.510 qpair failed and we were unable to recover it. 00:29:09.510 [2024-07-25 15:25:01.482190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.510 [2024-07-25 15:25:01.482198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.510 qpair failed and we were unable to recover it. 
00:29:09.510 [2024-07-25 15:25:01.482673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.510 [2024-07-25 15:25:01.482681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.510 qpair failed and we were unable to recover it. 00:29:09.510 [2024-07-25 15:25:01.483117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.510 [2024-07-25 15:25:01.483125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.511 qpair failed and we were unable to recover it. 00:29:09.511 [2024-07-25 15:25:01.483580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.511 [2024-07-25 15:25:01.483589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.511 qpair failed and we were unable to recover it. 00:29:09.511 [2024-07-25 15:25:01.483805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.511 [2024-07-25 15:25:01.483815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.511 qpair failed and we were unable to recover it. 00:29:09.511 [2024-07-25 15:25:01.484270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.511 [2024-07-25 15:25:01.484278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.511 qpair failed and we were unable to recover it. 
00:29:09.511 [2024-07-25 15:25:01.484742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.511 [2024-07-25 15:25:01.484750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.511 qpair failed and we were unable to recover it. 00:29:09.511 [2024-07-25 15:25:01.485192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.511 [2024-07-25 15:25:01.485204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.511 qpair failed and we were unable to recover it. 00:29:09.511 [2024-07-25 15:25:01.485620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.511 [2024-07-25 15:25:01.485631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.511 qpair failed and we were unable to recover it. 00:29:09.511 [2024-07-25 15:25:01.486083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.511 [2024-07-25 15:25:01.486091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.511 qpair failed and we were unable to recover it. 00:29:09.511 [2024-07-25 15:25:01.486531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.511 [2024-07-25 15:25:01.486540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.511 qpair failed and we were unable to recover it. 
00:29:09.511 [2024-07-25 15:25:01.487000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.511 [2024-07-25 15:25:01.487008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.511 qpair failed and we were unable to recover it. 00:29:09.511 [2024-07-25 15:25:01.487542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.511 [2024-07-25 15:25:01.487571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.511 qpair failed and we were unable to recover it. 00:29:09.511 [2024-07-25 15:25:01.488033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.511 [2024-07-25 15:25:01.488042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.511 qpair failed and we were unable to recover it. 00:29:09.511 [2024-07-25 15:25:01.488519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.511 [2024-07-25 15:25:01.488548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.511 qpair failed and we were unable to recover it. 00:29:09.511 [2024-07-25 15:25:01.488906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.511 [2024-07-25 15:25:01.488917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.511 qpair failed and we were unable to recover it. 
00:29:09.511 [2024-07-25 15:25:01.489465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.511 [2024-07-25 15:25:01.489494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.511 qpair failed and we were unable to recover it. 00:29:09.511 [2024-07-25 15:25:01.489954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.511 [2024-07-25 15:25:01.489964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.511 qpair failed and we were unable to recover it. 00:29:09.511 [2024-07-25 15:25:01.490534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.511 [2024-07-25 15:25:01.490563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.511 qpair failed and we were unable to recover it. 00:29:09.511 [2024-07-25 15:25:01.491032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.511 [2024-07-25 15:25:01.491042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.511 qpair failed and we were unable to recover it. 00:29:09.511 [2024-07-25 15:25:01.491592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.511 [2024-07-25 15:25:01.491621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.511 qpair failed and we were unable to recover it. 
00:29:09.511 [2024-07-25 15:25:01.492090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.511 [2024-07-25 15:25:01.492100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.511 qpair failed and we were unable to recover it. 00:29:09.511 [2024-07-25 15:25:01.492567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.511 [2024-07-25 15:25:01.492576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.511 qpair failed and we were unable to recover it. 00:29:09.511 [2024-07-25 15:25:01.493030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.511 [2024-07-25 15:25:01.493038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.511 qpair failed and we were unable to recover it. 00:29:09.511 [2024-07-25 15:25:01.493589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.511 [2024-07-25 15:25:01.493617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.511 qpair failed and we were unable to recover it. 00:29:09.511 [2024-07-25 15:25:01.494063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.511 [2024-07-25 15:25:01.494074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.511 qpair failed and we were unable to recover it. 
00:29:09.511 [2024-07-25 15:25:01.494621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.511 [2024-07-25 15:25:01.494651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.511 qpair failed and we were unable to recover it. 00:29:09.511 [2024-07-25 15:25:01.495097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.511 [2024-07-25 15:25:01.495106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.511 qpair failed and we were unable to recover it. 00:29:09.511 [2024-07-25 15:25:01.495630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.511 [2024-07-25 15:25:01.495659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.511 qpair failed and we were unable to recover it. 00:29:09.511 [2024-07-25 15:25:01.496123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.511 [2024-07-25 15:25:01.496133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.511 qpair failed and we were unable to recover it. 00:29:09.511 [2024-07-25 15:25:01.496591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.511 [2024-07-25 15:25:01.496600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.511 qpair failed and we were unable to recover it. 
00:29:09.511 [2024-07-25 15:25:01.497051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.511 [2024-07-25 15:25:01.497059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.511 qpair failed and we were unable to recover it. 00:29:09.511 [2024-07-25 15:25:01.497607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.511 [2024-07-25 15:25:01.497636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.511 qpair failed and we were unable to recover it. 00:29:09.511 [2024-07-25 15:25:01.498098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.511 [2024-07-25 15:25:01.498108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.511 qpair failed and we were unable to recover it. 00:29:09.511 [2024-07-25 15:25:01.498644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.511 [2024-07-25 15:25:01.498672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.511 qpair failed and we were unable to recover it. 00:29:09.511 [2024-07-25 15:25:01.499135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.511 [2024-07-25 15:25:01.499145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.511 qpair failed and we were unable to recover it. 
00:29:09.511 [2024-07-25 15:25:01.499599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.511 [2024-07-25 15:25:01.499609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.511 qpair failed and we were unable to recover it. 00:29:09.511 [2024-07-25 15:25:01.500066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.511 [2024-07-25 15:25:01.500074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.511 qpair failed and we were unable to recover it. 00:29:09.511 [2024-07-25 15:25:01.500623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.511 [2024-07-25 15:25:01.500652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.511 qpair failed and we were unable to recover it. 00:29:09.512 [2024-07-25 15:25:01.501109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.512 [2024-07-25 15:25:01.501119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.512 qpair failed and we were unable to recover it. 00:29:09.512 [2024-07-25 15:25:01.501653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.512 [2024-07-25 15:25:01.501683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.512 qpair failed and we were unable to recover it. 
00:29:09.512 [2024-07-25 15:25:01.502143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.512 [2024-07-25 15:25:01.502152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.512 qpair failed and we were unable to recover it. 00:29:09.512 [2024-07-25 15:25:01.502685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.512 [2024-07-25 15:25:01.502714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.512 qpair failed and we were unable to recover it. 00:29:09.512 [2024-07-25 15:25:01.503172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.512 [2024-07-25 15:25:01.503182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.512 qpair failed and we were unable to recover it. 00:29:09.512 [2024-07-25 15:25:01.503727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.512 [2024-07-25 15:25:01.503756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.512 qpair failed and we were unable to recover it. 00:29:09.512 [2024-07-25 15:25:01.504226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.512 [2024-07-25 15:25:01.504245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.512 qpair failed and we were unable to recover it. 
00:29:09.512 [2024-07-25 15:25:01.504684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.512 [2024-07-25 15:25:01.504693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.512 qpair failed and we were unable to recover it. 00:29:09.512 [2024-07-25 15:25:01.505138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.512 [2024-07-25 15:25:01.505146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.512 qpair failed and we were unable to recover it. 00:29:09.512 [2024-07-25 15:25:01.505607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.512 [2024-07-25 15:25:01.505619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.512 qpair failed and we were unable to recover it. 00:29:09.512 [2024-07-25 15:25:01.506067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.512 [2024-07-25 15:25:01.506075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.512 qpair failed and we were unable to recover it. 00:29:09.512 [2024-07-25 15:25:01.506638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.512 [2024-07-25 15:25:01.506666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.512 qpair failed and we were unable to recover it. 
00:29:09.512 [2024-07-25 15:25:01.507015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.512 [2024-07-25 15:25:01.507025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.512 qpair failed and we were unable to recover it. 00:29:09.512 [2024-07-25 15:25:01.507563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.512 [2024-07-25 15:25:01.507592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.512 qpair failed and we were unable to recover it. 00:29:09.512 [2024-07-25 15:25:01.508053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.512 [2024-07-25 15:25:01.508063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.512 qpair failed and we were unable to recover it. 00:29:09.512 [2024-07-25 15:25:01.508639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.512 [2024-07-25 15:25:01.508668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.512 qpair failed and we were unable to recover it. 00:29:09.512 [2024-07-25 15:25:01.509132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.512 [2024-07-25 15:25:01.509141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.512 qpair failed and we were unable to recover it. 
00:29:09.512 [2024-07-25 15:25:01.509694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.512 [2024-07-25 15:25:01.509723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.512 qpair failed and we were unable to recover it. 00:29:09.512 [2024-07-25 15:25:01.510185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.512 [2024-07-25 15:25:01.510195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.512 qpair failed and we were unable to recover it. 00:29:09.512 [2024-07-25 15:25:01.510776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.512 [2024-07-25 15:25:01.510805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.512 qpair failed and we were unable to recover it. 00:29:09.512 [2024-07-25 15:25:01.511405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.512 [2024-07-25 15:25:01.511434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.512 qpair failed and we were unable to recover it. 00:29:09.512 [2024-07-25 15:25:01.511795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.512 [2024-07-25 15:25:01.511806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.512 qpair failed and we were unable to recover it. 
00:29:09.512 [2024-07-25 15:25:01.512407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.512 [2024-07-25 15:25:01.512435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.512 qpair failed and we were unable to recover it.
00:29:09.512 [2024-07-25 15:25:01.512912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.512 [2024-07-25 15:25:01.512922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.512 qpair failed and we were unable to recover it.
00:29:09.512 [2024-07-25 15:25:01.513469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.512 [2024-07-25 15:25:01.513498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.512 qpair failed and we were unable to recover it.
00:29:09.512 [2024-07-25 15:25:01.513963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.512 [2024-07-25 15:25:01.513973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.512 qpair failed and we were unable to recover it.
00:29:09.512 [2024-07-25 15:25:01.514520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.512 [2024-07-25 15:25:01.514549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.512 qpair failed and we were unable to recover it.
00:29:09.512 [2024-07-25 15:25:01.515028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.512 [2024-07-25 15:25:01.515038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.512 qpair failed and we were unable to recover it.
00:29:09.512 [2024-07-25 15:25:01.515457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.512 [2024-07-25 15:25:01.515486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.512 qpair failed and we were unable to recover it.
00:29:09.512 [2024-07-25 15:25:01.515952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.512 [2024-07-25 15:25:01.515963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.512 qpair failed and we were unable to recover it.
00:29:09.512 [2024-07-25 15:25:01.516508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.512 [2024-07-25 15:25:01.516537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.512 qpair failed and we were unable to recover it.
00:29:09.512 [2024-07-25 15:25:01.516981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.512 [2024-07-25 15:25:01.516991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.512 qpair failed and we were unable to recover it.
00:29:09.512 [2024-07-25 15:25:01.517611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.512 [2024-07-25 15:25:01.517640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.512 qpair failed and we were unable to recover it.
00:29:09.512 [2024-07-25 15:25:01.517861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.512 [2024-07-25 15:25:01.517874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.512 qpair failed and we were unable to recover it.
00:29:09.512 [2024-07-25 15:25:01.518310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.512 [2024-07-25 15:25:01.518319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.512 qpair failed and we were unable to recover it.
00:29:09.512 [2024-07-25 15:25:01.518811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.512 [2024-07-25 15:25:01.518819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.512 qpair failed and we were unable to recover it.
00:29:09.513 [2024-07-25 15:25:01.519261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.513 [2024-07-25 15:25:01.519269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.513 qpair failed and we were unable to recover it.
00:29:09.513 [2024-07-25 15:25:01.519718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.513 [2024-07-25 15:25:01.519726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.513 qpair failed and we were unable to recover it.
00:29:09.513 [2024-07-25 15:25:01.519944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.513 [2024-07-25 15:25:01.519954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.513 qpair failed and we were unable to recover it.
00:29:09.513 [2024-07-25 15:25:01.520416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.513 [2024-07-25 15:25:01.520425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.513 qpair failed and we were unable to recover it.
00:29:09.513 [2024-07-25 15:25:01.520882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.513 [2024-07-25 15:25:01.520890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.513 qpair failed and we were unable to recover it.
00:29:09.513 [2024-07-25 15:25:01.521339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.513 [2024-07-25 15:25:01.521348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.513 qpair failed and we were unable to recover it.
00:29:09.513 [2024-07-25 15:25:01.521871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.513 [2024-07-25 15:25:01.521879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.513 qpair failed and we were unable to recover it.
00:29:09.513 [2024-07-25 15:25:01.522307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.513 [2024-07-25 15:25:01.522316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.513 qpair failed and we were unable to recover it.
00:29:09.513 [2024-07-25 15:25:01.522765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.513 [2024-07-25 15:25:01.522774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.513 qpair failed and we were unable to recover it.
00:29:09.513 [2024-07-25 15:25:01.523236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.513 [2024-07-25 15:25:01.523244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.513 qpair failed and we were unable to recover it.
00:29:09.513 [2024-07-25 15:25:01.523687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.513 [2024-07-25 15:25:01.523695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.513 qpair failed and we were unable to recover it.
00:29:09.513 [2024-07-25 15:25:01.524158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.513 [2024-07-25 15:25:01.524166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.513 qpair failed and we were unable to recover it.
00:29:09.513 [2024-07-25 15:25:01.524614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.513 [2024-07-25 15:25:01.524622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.513 qpair failed and we were unable to recover it.
00:29:09.513 [2024-07-25 15:25:01.525070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.513 [2024-07-25 15:25:01.525081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.513 qpair failed and we were unable to recover it.
00:29:09.513 [2024-07-25 15:25:01.525644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.513 [2024-07-25 15:25:01.525673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.513 qpair failed and we were unable to recover it.
00:29:09.513 [2024-07-25 15:25:01.526147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.513 [2024-07-25 15:25:01.526156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.513 qpair failed and we were unable to recover it.
00:29:09.513 [2024-07-25 15:25:01.526692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.513 [2024-07-25 15:25:01.526721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.513 qpair failed and we were unable to recover it.
00:29:09.513 [2024-07-25 15:25:01.527360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.513 [2024-07-25 15:25:01.527390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.513 qpair failed and we were unable to recover it.
00:29:09.513 [2024-07-25 15:25:01.527849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.513 [2024-07-25 15:25:01.527859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.513 qpair failed and we were unable to recover it.
00:29:09.513 [2024-07-25 15:25:01.528429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.513 [2024-07-25 15:25:01.528458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.513 qpair failed and we were unable to recover it.
00:29:09.513 [2024-07-25 15:25:01.528917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.513 [2024-07-25 15:25:01.528927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.513 qpair failed and we were unable to recover it.
00:29:09.513 [2024-07-25 15:25:01.529384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.513 [2024-07-25 15:25:01.529393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.513 qpair failed and we were unable to recover it.
00:29:09.513 [2024-07-25 15:25:01.529708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.513 [2024-07-25 15:25:01.529717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.513 qpair failed and we were unable to recover it.
00:29:09.513 [2024-07-25 15:25:01.530186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.513 [2024-07-25 15:25:01.530194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.513 qpair failed and we were unable to recover it.
00:29:09.513 [2024-07-25 15:25:01.530699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.513 [2024-07-25 15:25:01.530708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.513 qpair failed and we were unable to recover it.
00:29:09.513 [2024-07-25 15:25:01.531038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.513 [2024-07-25 15:25:01.531047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.513 qpair failed and we were unable to recover it.
00:29:09.513 [2024-07-25 15:25:01.531588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.513 [2024-07-25 15:25:01.531617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.513 qpair failed and we were unable to recover it.
00:29:09.513 [2024-07-25 15:25:01.532093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.513 [2024-07-25 15:25:01.532102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.513 qpair failed and we were unable to recover it.
00:29:09.513 [2024-07-25 15:25:01.532574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.513 [2024-07-25 15:25:01.532583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.513 qpair failed and we were unable to recover it.
00:29:09.513 [2024-07-25 15:25:01.533041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.513 [2024-07-25 15:25:01.533049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.513 qpair failed and we were unable to recover it.
00:29:09.513 [2024-07-25 15:25:01.533668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.513 [2024-07-25 15:25:01.533697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.513 qpair failed and we were unable to recover it.
00:29:09.513 [2024-07-25 15:25:01.534168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.513 [2024-07-25 15:25:01.534178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.513 qpair failed and we were unable to recover it.
00:29:09.513 [2024-07-25 15:25:01.534713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.513 [2024-07-25 15:25:01.534741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.513 qpair failed and we were unable to recover it.
00:29:09.513 [2024-07-25 15:25:01.535206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.513 [2024-07-25 15:25:01.535217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.513 qpair failed and we were unable to recover it.
00:29:09.513 [2024-07-25 15:25:01.535762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.513 [2024-07-25 15:25:01.535790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.513 qpair failed and we were unable to recover it.
00:29:09.513 [2024-07-25 15:25:01.536412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.513 [2024-07-25 15:25:01.536441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.513 qpair failed and we were unable to recover it.
00:29:09.514 [2024-07-25 15:25:01.536893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.514 [2024-07-25 15:25:01.536903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.514 qpair failed and we were unable to recover it.
00:29:09.514 [2024-07-25 15:25:01.537452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.514 [2024-07-25 15:25:01.537481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.514 qpair failed and we were unable to recover it.
00:29:09.514 [2024-07-25 15:25:01.537584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.514 [2024-07-25 15:25:01.537596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.514 qpair failed and we were unable to recover it.
00:29:09.514 [2024-07-25 15:25:01.538056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.514 [2024-07-25 15:25:01.538065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.514 qpair failed and we were unable to recover it.
00:29:09.514 [2024-07-25 15:25:01.538518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.514 [2024-07-25 15:25:01.538527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.514 qpair failed and we were unable to recover it.
00:29:09.514 [2024-07-25 15:25:01.538983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.514 [2024-07-25 15:25:01.538991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.514 qpair failed and we were unable to recover it.
00:29:09.514 [2024-07-25 15:25:01.539533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.514 [2024-07-25 15:25:01.539562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.514 qpair failed and we were unable to recover it.
00:29:09.514 [2024-07-25 15:25:01.539789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.514 [2024-07-25 15:25:01.539801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.514 qpair failed and we were unable to recover it.
00:29:09.514 [2024-07-25 15:25:01.540273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.514 [2024-07-25 15:25:01.540282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.514 qpair failed and we were unable to recover it.
00:29:09.514 [2024-07-25 15:25:01.540745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.514 [2024-07-25 15:25:01.540753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.514 qpair failed and we were unable to recover it.
00:29:09.514 [2024-07-25 15:25:01.541194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.514 [2024-07-25 15:25:01.541205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.514 qpair failed and we were unable to recover it.
00:29:09.514 [2024-07-25 15:25:01.541673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.514 [2024-07-25 15:25:01.541682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.514 qpair failed and we were unable to recover it.
00:29:09.514 [2024-07-25 15:25:01.542131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.514 [2024-07-25 15:25:01.542140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.514 qpair failed and we were unable to recover it.
00:29:09.514 [2024-07-25 15:25:01.542590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.514 [2024-07-25 15:25:01.542598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.514 qpair failed and we were unable to recover it.
00:29:09.514 [2024-07-25 15:25:01.543048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.514 [2024-07-25 15:25:01.543057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.514 qpair failed and we were unable to recover it.
00:29:09.514 [2024-07-25 15:25:01.543613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.514 [2024-07-25 15:25:01.543642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.514 qpair failed and we were unable to recover it.
00:29:09.514 [2024-07-25 15:25:01.544100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.514 [2024-07-25 15:25:01.544109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.514 qpair failed and we were unable to recover it.
00:29:09.514 [2024-07-25 15:25:01.544466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.514 [2024-07-25 15:25:01.544498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.514 qpair failed and we were unable to recover it.
00:29:09.514 [2024-07-25 15:25:01.544959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.514 [2024-07-25 15:25:01.544969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.514 qpair failed and we were unable to recover it.
00:29:09.514 [2024-07-25 15:25:01.545601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.514 [2024-07-25 15:25:01.545629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.514 qpair failed and we were unable to recover it.
00:29:09.514 [2024-07-25 15:25:01.546089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.514 [2024-07-25 15:25:01.546099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.514 qpair failed and we were unable to recover it.
00:29:09.514 [2024-07-25 15:25:01.546572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.514 [2024-07-25 15:25:01.546580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.514 qpair failed and we were unable to recover it.
00:29:09.514 [2024-07-25 15:25:01.547033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.514 [2024-07-25 15:25:01.547041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.514 qpair failed and we were unable to recover it.
00:29:09.514 [2024-07-25 15:25:01.547606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.514 [2024-07-25 15:25:01.547636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.514 qpair failed and we were unable to recover it.
00:29:09.514 [2024-07-25 15:25:01.548097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.514 [2024-07-25 15:25:01.548107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.514 qpair failed and we were unable to recover it.
00:29:09.514 [2024-07-25 15:25:01.548580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.514 [2024-07-25 15:25:01.548590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.514 qpair failed and we were unable to recover it.
00:29:09.514 [2024-07-25 15:25:01.549047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.514 [2024-07-25 15:25:01.549055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.514 qpair failed and we were unable to recover it.
00:29:09.514 [2024-07-25 15:25:01.549613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.514 [2024-07-25 15:25:01.549641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.514 qpair failed and we were unable to recover it.
00:29:09.514 [2024-07-25 15:25:01.550108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.514 [2024-07-25 15:25:01.550118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.514 qpair failed and we were unable to recover it.
00:29:09.514 [2024-07-25 15:25:01.550653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.514 [2024-07-25 15:25:01.550682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.514 qpair failed and we were unable to recover it.
00:29:09.514 [2024-07-25 15:25:01.551140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.514 [2024-07-25 15:25:01.551151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.514 qpair failed and we were unable to recover it.
00:29:09.514 [2024-07-25 15:25:01.551716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.514 [2024-07-25 15:25:01.551744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.514 qpair failed and we were unable to recover it.
00:29:09.514 [2024-07-25 15:25:01.552196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.514 [2024-07-25 15:25:01.552213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.514 qpair failed and we were unable to recover it.
00:29:09.514 [2024-07-25 15:25:01.552663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.514 [2024-07-25 15:25:01.552691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.514 qpair failed and we were unable to recover it.
00:29:09.514 [2024-07-25 15:25:01.553045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.514 [2024-07-25 15:25:01.553055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.514 qpair failed and we were unable to recover it.
00:29:09.514 [2024-07-25 15:25:01.553489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.514 [2024-07-25 15:25:01.553517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.514 qpair failed and we were unable to recover it.
00:29:09.514 [2024-07-25 15:25:01.554005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.515 [2024-07-25 15:25:01.554016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.515 qpair failed and we were unable to recover it.
00:29:09.515 [2024-07-25 15:25:01.554548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.515 [2024-07-25 15:25:01.554577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.515 qpair failed and we were unable to recover it.
00:29:09.515 [2024-07-25 15:25:01.555074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.515 [2024-07-25 15:25:01.555084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.515 qpair failed and we were unable to recover it.
00:29:09.515 [2024-07-25 15:25:01.555502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.515 [2024-07-25 15:25:01.555511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.515 qpair failed and we were unable to recover it.
00:29:09.515 [2024-07-25 15:25:01.555969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.515 [2024-07-25 15:25:01.555977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.515 qpair failed and we were unable to recover it.
00:29:09.515 [2024-07-25 15:25:01.556529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.515 [2024-07-25 15:25:01.556558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.515 qpair failed and we were unable to recover it.
00:29:09.515 [2024-07-25 15:25:01.557030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.515 [2024-07-25 15:25:01.557040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.515 qpair failed and we were unable to recover it.
00:29:09.515 [2024-07-25 15:25:01.557489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.515 [2024-07-25 15:25:01.557518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.515 qpair failed and we were unable to recover it.
00:29:09.515 [2024-07-25 15:25:01.557981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.515 [2024-07-25 15:25:01.557992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.515 qpair failed and we were unable to recover it.
00:29:09.515 [2024-07-25 15:25:01.558577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.515 [2024-07-25 15:25:01.558605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.515 qpair failed and we were unable to recover it. 00:29:09.515 [2024-07-25 15:25:01.559061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.515 [2024-07-25 15:25:01.559071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.515 qpair failed and we were unable to recover it. 00:29:09.515 [2024-07-25 15:25:01.559501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.515 [2024-07-25 15:25:01.559530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.515 qpair failed and we were unable to recover it. 00:29:09.515 [2024-07-25 15:25:01.559927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.515 [2024-07-25 15:25:01.559938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.515 qpair failed and we were unable to recover it. 00:29:09.515 [2024-07-25 15:25:01.560488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.515 [2024-07-25 15:25:01.560517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.515 qpair failed and we were unable to recover it. 
00:29:09.515 [2024-07-25 15:25:01.561015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.515 [2024-07-25 15:25:01.561025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.515 qpair failed and we were unable to recover it. 00:29:09.515 [2024-07-25 15:25:01.561605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.515 [2024-07-25 15:25:01.561634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.515 qpair failed and we were unable to recover it. 00:29:09.515 [2024-07-25 15:25:01.562099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.515 [2024-07-25 15:25:01.562108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.515 qpair failed and we were unable to recover it. 00:29:09.515 [2024-07-25 15:25:01.562652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.515 [2024-07-25 15:25:01.562681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.515 qpair failed and we were unable to recover it. 00:29:09.515 [2024-07-25 15:25:01.563036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.515 [2024-07-25 15:25:01.563047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.515 qpair failed and we were unable to recover it. 
00:29:09.515 [2024-07-25 15:25:01.563617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.515 [2024-07-25 15:25:01.563646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.515 qpair failed and we were unable to recover it. 00:29:09.515 [2024-07-25 15:25:01.564101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.515 [2024-07-25 15:25:01.564111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.515 qpair failed and we were unable to recover it. 00:29:09.515 [2024-07-25 15:25:01.564460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.515 [2024-07-25 15:25:01.564473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.515 qpair failed and we were unable to recover it. 00:29:09.515 [2024-07-25 15:25:01.564909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.515 [2024-07-25 15:25:01.564917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.515 qpair failed and we were unable to recover it. 00:29:09.515 [2024-07-25 15:25:01.565477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.515 [2024-07-25 15:25:01.565506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.515 qpair failed and we were unable to recover it. 
00:29:09.515 [2024-07-25 15:25:01.565970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.515 [2024-07-25 15:25:01.565980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.515 qpair failed and we were unable to recover it. 00:29:09.515 [2024-07-25 15:25:01.566562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.515 [2024-07-25 15:25:01.566592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.515 qpair failed and we were unable to recover it. 00:29:09.515 [2024-07-25 15:25:01.567066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.515 [2024-07-25 15:25:01.567076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.515 qpair failed and we were unable to recover it. 00:29:09.515 [2024-07-25 15:25:01.567662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.515 [2024-07-25 15:25:01.567691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.515 qpair failed and we were unable to recover it. 00:29:09.515 [2024-07-25 15:25:01.568195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.515 [2024-07-25 15:25:01.568211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.515 qpair failed and we were unable to recover it. 
00:29:09.515 [2024-07-25 15:25:01.568737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.515 [2024-07-25 15:25:01.568765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.515 qpair failed and we were unable to recover it. 00:29:09.515 [2024-07-25 15:25:01.569368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.515 [2024-07-25 15:25:01.569396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.515 qpair failed and we were unable to recover it. 00:29:09.516 [2024-07-25 15:25:01.569848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.516 [2024-07-25 15:25:01.569858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.516 qpair failed and we were unable to recover it. 00:29:09.516 [2024-07-25 15:25:01.570423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.516 [2024-07-25 15:25:01.570452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.516 qpair failed and we were unable to recover it. 00:29:09.516 [2024-07-25 15:25:01.570909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.516 [2024-07-25 15:25:01.570919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.516 qpair failed and we were unable to recover it. 
00:29:09.516 [2024-07-25 15:25:01.571448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.516 [2024-07-25 15:25:01.571477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.516 qpair failed and we were unable to recover it. 00:29:09.516 [2024-07-25 15:25:01.571971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.516 [2024-07-25 15:25:01.571980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.516 qpair failed and we were unable to recover it. 00:29:09.516 [2024-07-25 15:25:01.572528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.516 [2024-07-25 15:25:01.572557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.516 qpair failed and we were unable to recover it. 00:29:09.516 [2024-07-25 15:25:01.572914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.516 [2024-07-25 15:25:01.572925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.516 qpair failed and we were unable to recover it. 00:29:09.516 [2024-07-25 15:25:01.573369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.516 [2024-07-25 15:25:01.573398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.516 qpair failed and we were unable to recover it. 
00:29:09.516 [2024-07-25 15:25:01.573846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.516 [2024-07-25 15:25:01.573856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.516 qpair failed and we were unable to recover it. 00:29:09.516 [2024-07-25 15:25:01.574344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.516 [2024-07-25 15:25:01.574353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.516 qpair failed and we were unable to recover it. 00:29:09.516 [2024-07-25 15:25:01.574703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.516 [2024-07-25 15:25:01.574711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.516 qpair failed and we were unable to recover it. 00:29:09.516 [2024-07-25 15:25:01.574915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.516 [2024-07-25 15:25:01.574927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.516 qpair failed and we were unable to recover it. 00:29:09.516 [2024-07-25 15:25:01.575075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.516 [2024-07-25 15:25:01.575086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.516 qpair failed and we were unable to recover it. 
00:29:09.516 [2024-07-25 15:25:01.575520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.516 [2024-07-25 15:25:01.575529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.516 qpair failed and we were unable to recover it. 00:29:09.516 [2024-07-25 15:25:01.575974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.516 [2024-07-25 15:25:01.575982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.516 qpair failed and we were unable to recover it. 00:29:09.516 [2024-07-25 15:25:01.576206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.516 [2024-07-25 15:25:01.576216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.516 qpair failed and we were unable to recover it. 00:29:09.516 [2024-07-25 15:25:01.576672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.516 [2024-07-25 15:25:01.576680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.516 qpair failed and we were unable to recover it. 00:29:09.516 [2024-07-25 15:25:01.577218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.516 [2024-07-25 15:25:01.577227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.516 qpair failed and we were unable to recover it. 
00:29:09.516 [2024-07-25 15:25:01.577683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.516 [2024-07-25 15:25:01.577692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.516 qpair failed and we were unable to recover it. 00:29:09.516 [2024-07-25 15:25:01.578174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.516 [2024-07-25 15:25:01.578182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.516 qpair failed and we were unable to recover it. 00:29:09.516 [2024-07-25 15:25:01.578656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.516 [2024-07-25 15:25:01.578665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.516 qpair failed and we were unable to recover it. 00:29:09.516 [2024-07-25 15:25:01.579116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.516 [2024-07-25 15:25:01.579125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.516 qpair failed and we were unable to recover it. 00:29:09.516 [2024-07-25 15:25:01.579601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.516 [2024-07-25 15:25:01.579609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.516 qpair failed and we were unable to recover it. 
00:29:09.516 [2024-07-25 15:25:01.580054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.516 [2024-07-25 15:25:01.580063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.516 qpair failed and we were unable to recover it. 00:29:09.516 [2024-07-25 15:25:01.580439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.516 [2024-07-25 15:25:01.580468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.516 qpair failed and we were unable to recover it. 00:29:09.516 [2024-07-25 15:25:01.580956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.516 [2024-07-25 15:25:01.580966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.516 qpair failed and we were unable to recover it. 00:29:09.516 [2024-07-25 15:25:01.581513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.516 [2024-07-25 15:25:01.581542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.516 qpair failed and we were unable to recover it. 00:29:09.516 [2024-07-25 15:25:01.582042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.516 [2024-07-25 15:25:01.582052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.516 qpair failed and we were unable to recover it. 
00:29:09.516 [2024-07-25 15:25:01.582596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.516 [2024-07-25 15:25:01.582625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.516 qpair failed and we were unable to recover it. 00:29:09.516 [2024-07-25 15:25:01.583097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.516 [2024-07-25 15:25:01.583108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.516 qpair failed and we were unable to recover it. 00:29:09.516 [2024-07-25 15:25:01.583652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.516 [2024-07-25 15:25:01.583685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.516 qpair failed and we were unable to recover it. 00:29:09.516 [2024-07-25 15:25:01.584130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.516 [2024-07-25 15:25:01.584140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.516 qpair failed and we were unable to recover it. 00:29:09.516 [2024-07-25 15:25:01.584589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.516 [2024-07-25 15:25:01.584599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.516 qpair failed and we were unable to recover it. 
00:29:09.516 [2024-07-25 15:25:01.584849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.516 [2024-07-25 15:25:01.584858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.516 qpair failed and we were unable to recover it. 00:29:09.516 [2024-07-25 15:25:01.585307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.516 [2024-07-25 15:25:01.585316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.516 qpair failed and we were unable to recover it. 00:29:09.516 [2024-07-25 15:25:01.585800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.516 [2024-07-25 15:25:01.585810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.516 qpair failed and we were unable to recover it. 00:29:09.516 [2024-07-25 15:25:01.586280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.517 [2024-07-25 15:25:01.586288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.517 qpair failed and we were unable to recover it. 00:29:09.517 [2024-07-25 15:25:01.586646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.517 [2024-07-25 15:25:01.586654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.517 qpair failed and we were unable to recover it. 
00:29:09.517 [2024-07-25 15:25:01.587110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.517 [2024-07-25 15:25:01.587118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.517 qpair failed and we were unable to recover it. 00:29:09.517 [2024-07-25 15:25:01.587561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.517 [2024-07-25 15:25:01.587570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.517 qpair failed and we were unable to recover it. 00:29:09.517 [2024-07-25 15:25:01.588042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.517 [2024-07-25 15:25:01.588050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.517 qpair failed and we were unable to recover it. 00:29:09.517 [2024-07-25 15:25:01.588507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.517 [2024-07-25 15:25:01.588516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.517 qpair failed and we were unable to recover it. 00:29:09.517 [2024-07-25 15:25:01.589028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.517 [2024-07-25 15:25:01.589036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.517 qpair failed and we were unable to recover it. 
00:29:09.517 [2024-07-25 15:25:01.589585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.517 [2024-07-25 15:25:01.589614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.517 qpair failed and we were unable to recover it. 00:29:09.517 [2024-07-25 15:25:01.590101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.517 [2024-07-25 15:25:01.590111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.517 qpair failed and we were unable to recover it. 00:29:09.517 [2024-07-25 15:25:01.590657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.517 [2024-07-25 15:25:01.590686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.517 qpair failed and we were unable to recover it. 00:29:09.517 [2024-07-25 15:25:01.591156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.517 [2024-07-25 15:25:01.591166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.517 qpair failed and we were unable to recover it. 00:29:09.517 [2024-07-25 15:25:01.591734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.517 [2024-07-25 15:25:01.591763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.517 qpair failed and we were unable to recover it. 
00:29:09.517 [2024-07-25 15:25:01.592120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.517 [2024-07-25 15:25:01.592130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.517 qpair failed and we were unable to recover it. 00:29:09.517 [2024-07-25 15:25:01.592587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.517 [2024-07-25 15:25:01.592596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.517 qpair failed and we were unable to recover it. 00:29:09.517 [2024-07-25 15:25:01.592946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.517 [2024-07-25 15:25:01.592954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.517 qpair failed and we were unable to recover it. 00:29:09.517 [2024-07-25 15:25:01.593490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.517 [2024-07-25 15:25:01.593519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.517 qpair failed and we were unable to recover it. 00:29:09.517 [2024-07-25 15:25:01.593968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.517 [2024-07-25 15:25:01.593978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.517 qpair failed and we were unable to recover it. 
00:29:09.517 [2024-07-25 15:25:01.594555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.517 [2024-07-25 15:25:01.594584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.517 qpair failed and we were unable to recover it. 00:29:09.517 [2024-07-25 15:25:01.595085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.517 [2024-07-25 15:25:01.595095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.517 qpair failed and we were unable to recover it. 00:29:09.517 [2024-07-25 15:25:01.595535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.517 [2024-07-25 15:25:01.595544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.517 qpair failed and we were unable to recover it. 00:29:09.517 [2024-07-25 15:25:01.595975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.517 [2024-07-25 15:25:01.595984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.517 qpair failed and we were unable to recover it. 00:29:09.517 [2024-07-25 15:25:01.596532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.517 [2024-07-25 15:25:01.596561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.517 qpair failed and we were unable to recover it. 
00:29:09.517 [2024-07-25 15:25:01.596986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.517 [2024-07-25 15:25:01.596996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.517 qpair failed and we were unable to recover it. 00:29:09.517 [2024-07-25 15:25:01.597588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.517 [2024-07-25 15:25:01.597617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.517 qpair failed and we were unable to recover it. 00:29:09.517 [2024-07-25 15:25:01.598077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.517 [2024-07-25 15:25:01.598087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.517 qpair failed and we were unable to recover it. 00:29:09.517 [2024-07-25 15:25:01.598566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.517 [2024-07-25 15:25:01.598575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.517 qpair failed and we were unable to recover it. 00:29:09.517 [2024-07-25 15:25:01.598796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.517 [2024-07-25 15:25:01.598808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.517 qpair failed and we were unable to recover it. 
00:29:09.517 [2024-07-25 15:25:01.599267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.517 [2024-07-25 15:25:01.599275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.517 qpair failed and we were unable to recover it. 00:29:09.517 [2024-07-25 15:25:01.599717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.517 [2024-07-25 15:25:01.599725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.517 qpair failed and we were unable to recover it. 00:29:09.517 [2024-07-25 15:25:01.600227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.517 [2024-07-25 15:25:01.600236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.517 qpair failed and we were unable to recover it. 00:29:09.517 [2024-07-25 15:25:01.600686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.517 [2024-07-25 15:25:01.600694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.517 qpair failed and we were unable to recover it. 00:29:09.517 [2024-07-25 15:25:01.601145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.517 [2024-07-25 15:25:01.601154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.517 qpair failed and we were unable to recover it. 
00:29:09.517 [2024-07-25 15:25:01.601670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.517 [2024-07-25 15:25:01.601677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.517 qpair failed and we were unable to recover it. 00:29:09.517 [2024-07-25 15:25:01.602171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.517 [2024-07-25 15:25:01.602178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.517 qpair failed and we were unable to recover it. 00:29:09.517 [2024-07-25 15:25:01.602629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.517 [2024-07-25 15:25:01.602640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.517 qpair failed and we were unable to recover it. 00:29:09.517 [2024-07-25 15:25:01.603082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.517 [2024-07-25 15:25:01.603090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.517 qpair failed and we were unable to recover it. 00:29:09.517 [2024-07-25 15:25:01.603440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.518 [2024-07-25 15:25:01.603449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.518 qpair failed and we were unable to recover it. 
00:29:09.518 [2024-07-25 15:25:01.603894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.518 [2024-07-25 15:25:01.603903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.518 qpair failed and we were unable to recover it. 00:29:09.518 [2024-07-25 15:25:01.604243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.518 [2024-07-25 15:25:01.604250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.518 qpair failed and we were unable to recover it. 00:29:09.518 [2024-07-25 15:25:01.604580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.518 [2024-07-25 15:25:01.604589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.518 qpair failed and we were unable to recover it. 00:29:09.518 [2024-07-25 15:25:01.605059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.518 [2024-07-25 15:25:01.605066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.518 qpair failed and we were unable to recover it. 00:29:09.518 [2024-07-25 15:25:01.605425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.518 [2024-07-25 15:25:01.605434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.518 qpair failed and we were unable to recover it. 
00:29:09.518 [2024-07-25 15:25:01.605883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.518 [2024-07-25 15:25:01.605891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.518 qpair failed and we were unable to recover it. 00:29:09.518 [2024-07-25 15:25:01.606471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.518 [2024-07-25 15:25:01.606499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.518 qpair failed and we were unable to recover it. 00:29:09.518 [2024-07-25 15:25:01.606971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.518 [2024-07-25 15:25:01.606980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.518 qpair failed and we were unable to recover it. 00:29:09.518 [2024-07-25 15:25:01.607579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.518 [2024-07-25 15:25:01.607609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.518 qpair failed and we were unable to recover it. 00:29:09.518 [2024-07-25 15:25:01.608069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.518 [2024-07-25 15:25:01.608079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.518 qpair failed and we were unable to recover it. 
00:29:09.518 [2024-07-25 15:25:01.608610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.518 [2024-07-25 15:25:01.608639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.518 qpair failed and we were unable to recover it. 00:29:09.518 [2024-07-25 15:25:01.609124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.518 [2024-07-25 15:25:01.609134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.518 qpair failed and we were unable to recover it. 00:29:09.518 [2024-07-25 15:25:01.609671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.518 [2024-07-25 15:25:01.609700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.518 qpair failed and we were unable to recover it. 00:29:09.518 [2024-07-25 15:25:01.610150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.518 [2024-07-25 15:25:01.610160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.518 qpair failed and we were unable to recover it. 00:29:09.518 [2024-07-25 15:25:01.610703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.518 [2024-07-25 15:25:01.610733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.518 qpair failed and we were unable to recover it. 
00:29:09.518 [2024-07-25 15:25:01.611210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.518 [2024-07-25 15:25:01.611220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.518 qpair failed and we were unable to recover it. 00:29:09.518 [2024-07-25 15:25:01.611667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.518 [2024-07-25 15:25:01.611675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.518 qpair failed and we were unable to recover it. 00:29:09.518 [2024-07-25 15:25:01.612123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.518 [2024-07-25 15:25:01.612131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.518 qpair failed and we were unable to recover it. 00:29:09.518 [2024-07-25 15:25:01.612686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.518 [2024-07-25 15:25:01.612715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.518 qpair failed and we were unable to recover it. 00:29:09.518 [2024-07-25 15:25:01.613198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.518 [2024-07-25 15:25:01.613215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.518 qpair failed and we were unable to recover it. 
00:29:09.518 [2024-07-25 15:25:01.613760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.518 [2024-07-25 15:25:01.613789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.518 qpair failed and we were unable to recover it. 00:29:09.518 [2024-07-25 15:25:01.614361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.518 [2024-07-25 15:25:01.614390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.518 qpair failed and we were unable to recover it. 00:29:09.518 [2024-07-25 15:25:01.614853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.518 [2024-07-25 15:25:01.614862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.518 qpair failed and we were unable to recover it. 00:29:09.518 [2024-07-25 15:25:01.615439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.518 [2024-07-25 15:25:01.615468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.518 qpair failed and we were unable to recover it. 00:29:09.518 [2024-07-25 15:25:01.615933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.518 [2024-07-25 15:25:01.615943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.518 qpair failed and we were unable to recover it. 
00:29:09.518 [2024-07-25 15:25:01.616481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.518 [2024-07-25 15:25:01.616510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.518 qpair failed and we were unable to recover it. 00:29:09.518 [2024-07-25 15:25:01.616971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.518 [2024-07-25 15:25:01.616982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.518 qpair failed and we were unable to recover it. 00:29:09.518 [2024-07-25 15:25:01.617551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.518 [2024-07-25 15:25:01.617579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.518 qpair failed and we were unable to recover it. 00:29:09.518 [2024-07-25 15:25:01.617921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.518 [2024-07-25 15:25:01.617931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.518 qpair failed and we were unable to recover it. 00:29:09.518 [2024-07-25 15:25:01.618482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.518 [2024-07-25 15:25:01.618511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.518 qpair failed and we were unable to recover it. 
00:29:09.518 [2024-07-25 15:25:01.618972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.518 [2024-07-25 15:25:01.618981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.518 qpair failed and we were unable to recover it. 00:29:09.518 [2024-07-25 15:25:01.619556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.518 [2024-07-25 15:25:01.619585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.518 qpair failed and we were unable to recover it. 00:29:09.518 [2024-07-25 15:25:01.620087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.518 [2024-07-25 15:25:01.620097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.518 qpair failed and we were unable to recover it. 00:29:09.518 [2024-07-25 15:25:01.620577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.518 [2024-07-25 15:25:01.620586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.518 qpair failed and we were unable to recover it. 00:29:09.518 [2024-07-25 15:25:01.621032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.518 [2024-07-25 15:25:01.621040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.518 qpair failed and we were unable to recover it. 
00:29:09.518 [2024-07-25 15:25:01.621605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.519 [2024-07-25 15:25:01.621634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.519 qpair failed and we were unable to recover it. 00:29:09.519 [2024-07-25 15:25:01.622091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.519 [2024-07-25 15:25:01.622101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.519 qpair failed and we were unable to recover it. 00:29:09.519 [2024-07-25 15:25:01.622577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.519 [2024-07-25 15:25:01.622590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.519 qpair failed and we were unable to recover it. 00:29:09.519 [2024-07-25 15:25:01.623040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.519 [2024-07-25 15:25:01.623049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.519 qpair failed and we were unable to recover it. 00:29:09.519 [2024-07-25 15:25:01.623588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.519 [2024-07-25 15:25:01.623616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.519 qpair failed and we were unable to recover it. 
00:29:09.519 [2024-07-25 15:25:01.624076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.519 [2024-07-25 15:25:01.624085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.519 qpair failed and we were unable to recover it. 00:29:09.519 [2024-07-25 15:25:01.624728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.519 [2024-07-25 15:25:01.624756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.519 qpair failed and we were unable to recover it. 00:29:09.519 [2024-07-25 15:25:01.625160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.519 [2024-07-25 15:25:01.625170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.519 qpair failed and we were unable to recover it. 00:29:09.519 [2024-07-25 15:25:01.625717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.519 [2024-07-25 15:25:01.625746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.519 qpair failed and we were unable to recover it. 00:29:09.519 [2024-07-25 15:25:01.626214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.519 [2024-07-25 15:25:01.626225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.519 qpair failed and we were unable to recover it. 
00:29:09.519 [2024-07-25 15:25:01.626663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.519 [2024-07-25 15:25:01.626671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.519 qpair failed and we were unable to recover it. 00:29:09.519 [2024-07-25 15:25:01.627128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.519 [2024-07-25 15:25:01.627136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.519 qpair failed and we were unable to recover it. 00:29:09.519 [2024-07-25 15:25:01.627664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.519 [2024-07-25 15:25:01.627692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.519 qpair failed and we were unable to recover it. 00:29:09.519 [2024-07-25 15:25:01.628039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.519 [2024-07-25 15:25:01.628049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.519 qpair failed and we were unable to recover it. 00:29:09.519 [2024-07-25 15:25:01.628508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.519 [2024-07-25 15:25:01.628537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.519 qpair failed and we were unable to recover it. 
00:29:09.519 [2024-07-25 15:25:01.628999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.519 [2024-07-25 15:25:01.629009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.519 qpair failed and we were unable to recover it. 00:29:09.519 [2024-07-25 15:25:01.629581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.519 [2024-07-25 15:25:01.629610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.519 qpair failed and we were unable to recover it. 00:29:09.519 [2024-07-25 15:25:01.630073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.519 [2024-07-25 15:25:01.630083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.519 qpair failed and we were unable to recover it. 00:29:09.519 [2024-07-25 15:25:01.630451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.519 [2024-07-25 15:25:01.630478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.519 qpair failed and we were unable to recover it. 00:29:09.519 [2024-07-25 15:25:01.630953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.519 [2024-07-25 15:25:01.630964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.519 qpair failed and we were unable to recover it. 
00:29:09.519 [2024-07-25 15:25:01.631538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.519 [2024-07-25 15:25:01.631567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.519 qpair failed and we were unable to recover it. 00:29:09.519 [2024-07-25 15:25:01.632033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.519 [2024-07-25 15:25:01.632043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.519 qpair failed and we were unable to recover it. 00:29:09.519 [2024-07-25 15:25:01.632586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.519 [2024-07-25 15:25:01.632616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.519 qpair failed and we were unable to recover it. 00:29:09.519 [2024-07-25 15:25:01.633079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.519 [2024-07-25 15:25:01.633089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.519 qpair failed and we were unable to recover it. 00:29:09.519 [2024-07-25 15:25:01.633663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.519 [2024-07-25 15:25:01.633691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.519 qpair failed and we were unable to recover it. 
00:29:09.519 [2024-07-25 15:25:01.634143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.519 [2024-07-25 15:25:01.634154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.519 qpair failed and we were unable to recover it. 00:29:09.519 [2024-07-25 15:25:01.634611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.519 [2024-07-25 15:25:01.634619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.519 qpair failed and we were unable to recover it. 00:29:09.519 [2024-07-25 15:25:01.635118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.519 [2024-07-25 15:25:01.635127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.519 qpair failed and we were unable to recover it. 00:29:09.519 [2024-07-25 15:25:01.635661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.519 [2024-07-25 15:25:01.635690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.519 qpair failed and we were unable to recover it. 00:29:09.519 [2024-07-25 15:25:01.636148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.519 [2024-07-25 15:25:01.636159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.519 qpair failed and we were unable to recover it. 
00:29:09.519 [2024-07-25 15:25:01.636613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.519 [2024-07-25 15:25:01.636623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.519 qpair failed and we were unable to recover it. 00:29:09.519 [2024-07-25 15:25:01.637063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.519 [2024-07-25 15:25:01.637071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.519 qpair failed and we were unable to recover it. 00:29:09.519 [2024-07-25 15:25:01.637536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.519 [2024-07-25 15:25:01.637564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.519 qpair failed and we were unable to recover it. 00:29:09.519 [2024-07-25 15:25:01.638028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.519 [2024-07-25 15:25:01.638038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.519 qpair failed and we were unable to recover it. 00:29:09.519 [2024-07-25 15:25:01.638586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.519 [2024-07-25 15:25:01.638615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.519 qpair failed and we were unable to recover it. 
00:29:09.519 [2024-07-25 15:25:01.639119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.519 [2024-07-25 15:25:01.639129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.519 qpair failed and we were unable to recover it. 00:29:09.519 [2024-07-25 15:25:01.639667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.519 [2024-07-25 15:25:01.639696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.520 qpair failed and we were unable to recover it. 00:29:09.520 [2024-07-25 15:25:01.640037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.520 [2024-07-25 15:25:01.640047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.520 qpair failed and we were unable to recover it. 00:29:09.520 [2024-07-25 15:25:01.640664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.520 [2024-07-25 15:25:01.640693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.520 qpair failed and we were unable to recover it. 00:29:09.520 [2024-07-25 15:25:01.641207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.520 [2024-07-25 15:25:01.641217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.520 qpair failed and we were unable to recover it. 
00:29:09.520 [2024-07-25 15:25:01.641767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.520 [2024-07-25 15:25:01.641796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.520 qpair failed and we were unable to recover it. 00:29:09.520 [2024-07-25 15:25:01.642409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.520 [2024-07-25 15:25:01.642438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.520 qpair failed and we were unable to recover it. 00:29:09.520 [2024-07-25 15:25:01.642896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.520 [2024-07-25 15:25:01.642906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.520 qpair failed and we were unable to recover it. 00:29:09.520 [2024-07-25 15:25:01.643459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.520 [2024-07-25 15:25:01.643487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.520 qpair failed and we were unable to recover it. 00:29:09.520 [2024-07-25 15:25:01.643960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.520 [2024-07-25 15:25:01.643970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.520 qpair failed and we were unable to recover it. 
00:29:09.520 [2024-07-25 15:25:01.644451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.520 [2024-07-25 15:25:01.644479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.520 qpair failed and we were unable to recover it. 00:29:09.520 [2024-07-25 15:25:01.644951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.520 [2024-07-25 15:25:01.644961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.520 qpair failed and we were unable to recover it. 00:29:09.520 [2024-07-25 15:25:01.645513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.520 [2024-07-25 15:25:01.645542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.520 qpair failed and we were unable to recover it. 00:29:09.520 [2024-07-25 15:25:01.645902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.520 [2024-07-25 15:25:01.645912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.520 qpair failed and we were unable to recover it. 00:29:09.520 [2024-07-25 15:25:01.646444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.520 [2024-07-25 15:25:01.646473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.520 qpair failed and we were unable to recover it. 
00:29:09.520 [2024-07-25 15:25:01.646810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.520 [2024-07-25 15:25:01.646821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.520 qpair failed and we were unable to recover it. 00:29:09.520 [2024-07-25 15:25:01.647271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.520 [2024-07-25 15:25:01.647280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.520 qpair failed and we were unable to recover it. 00:29:09.520 [2024-07-25 15:25:01.647790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.520 [2024-07-25 15:25:01.647797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.520 qpair failed and we were unable to recover it. 00:29:09.520 [2024-07-25 15:25:01.648239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.520 [2024-07-25 15:25:01.648248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.520 qpair failed and we were unable to recover it. 00:29:09.520 [2024-07-25 15:25:01.648699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.520 [2024-07-25 15:25:01.648707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.520 qpair failed and we were unable to recover it. 
00:29:09.520 [... same error repeated: posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it. Retry loop continued from 15:25:01.649158 through 15:25:01.703572 ...]
00:29:09.792 [2024-07-25 15:25:01.704031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.792 [2024-07-25 15:25:01.704041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.792 qpair failed and we were unable to recover it. 00:29:09.792 [2024-07-25 15:25:01.704572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.792 [2024-07-25 15:25:01.704601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.792 qpair failed and we were unable to recover it. 00:29:09.792 [2024-07-25 15:25:01.705065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.792 [2024-07-25 15:25:01.705074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.792 qpair failed and we were unable to recover it. 00:29:09.792 [2024-07-25 15:25:01.705631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.792 [2024-07-25 15:25:01.705659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.792 qpair failed and we were unable to recover it. 00:29:09.792 [2024-07-25 15:25:01.706120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.792 [2024-07-25 15:25:01.706129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.792 qpair failed and we were unable to recover it. 
00:29:09.793 [2024-07-25 15:25:01.706581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.793 [2024-07-25 15:25:01.706611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.793 qpair failed and we were unable to recover it. 00:29:09.793 [2024-07-25 15:25:01.707072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.793 [2024-07-25 15:25:01.707082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.793 qpair failed and we were unable to recover it. 00:29:09.793 [2024-07-25 15:25:01.707532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.793 [2024-07-25 15:25:01.707562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.793 qpair failed and we were unable to recover it. 00:29:09.793 [2024-07-25 15:25:01.708022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.793 [2024-07-25 15:25:01.708031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.793 qpair failed and we were unable to recover it. 00:29:09.793 [2024-07-25 15:25:01.708577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.793 [2024-07-25 15:25:01.708606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.793 qpair failed and we were unable to recover it. 
00:29:09.793 [2024-07-25 15:25:01.709064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.793 [2024-07-25 15:25:01.709074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.793 qpair failed and we were unable to recover it. 00:29:09.793 [2024-07-25 15:25:01.709635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.793 [2024-07-25 15:25:01.709664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.793 qpair failed and we were unable to recover it. 00:29:09.793 [2024-07-25 15:25:01.710130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.793 [2024-07-25 15:25:01.710140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.793 qpair failed and we were unable to recover it. 00:29:09.793 [2024-07-25 15:25:01.710711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.793 [2024-07-25 15:25:01.710739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.793 qpair failed and we were unable to recover it. 00:29:09.793 [2024-07-25 15:25:01.711215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.793 [2024-07-25 15:25:01.711230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.793 qpair failed and we were unable to recover it. 
00:29:09.793 [2024-07-25 15:25:01.711725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.793 [2024-07-25 15:25:01.711734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.793 qpair failed and we were unable to recover it. 00:29:09.793 [2024-07-25 15:25:01.712187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.793 [2024-07-25 15:25:01.712196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.793 qpair failed and we were unable to recover it. 00:29:09.793 [2024-07-25 15:25:01.712667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.793 [2024-07-25 15:25:01.712676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.793 qpair failed and we were unable to recover it. 00:29:09.793 [2024-07-25 15:25:01.713124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.793 [2024-07-25 15:25:01.713133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.793 qpair failed and we were unable to recover it. 00:29:09.793 [2024-07-25 15:25:01.713670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.793 [2024-07-25 15:25:01.713698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.793 qpair failed and we were unable to recover it. 
00:29:09.793 [2024-07-25 15:25:01.714156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.793 [2024-07-25 15:25:01.714166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.793 qpair failed and we were unable to recover it. 00:29:09.793 [2024-07-25 15:25:01.714706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.793 [2024-07-25 15:25:01.714735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.793 qpair failed and we were unable to recover it. 00:29:09.793 [2024-07-25 15:25:01.715091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.793 [2024-07-25 15:25:01.715101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.793 qpair failed and we were unable to recover it. 00:29:09.793 [2024-07-25 15:25:01.715464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.793 [2024-07-25 15:25:01.715472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.793 qpair failed and we were unable to recover it. 00:29:09.793 [2024-07-25 15:25:01.715931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.793 [2024-07-25 15:25:01.715945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.793 qpair failed and we were unable to recover it. 
00:29:09.793 [2024-07-25 15:25:01.716527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.793 [2024-07-25 15:25:01.716556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.793 qpair failed and we were unable to recover it. 00:29:09.793 [2024-07-25 15:25:01.717017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.793 [2024-07-25 15:25:01.717026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.793 qpair failed and we were unable to recover it. 00:29:09.793 [2024-07-25 15:25:01.717598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.793 [2024-07-25 15:25:01.717627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.793 qpair failed and we were unable to recover it. 00:29:09.793 [2024-07-25 15:25:01.718091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.793 [2024-07-25 15:25:01.718100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.793 qpair failed and we were unable to recover it. 00:29:09.793 [2024-07-25 15:25:01.718583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.793 [2024-07-25 15:25:01.718591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.793 qpair failed and we were unable to recover it. 
00:29:09.793 [2024-07-25 15:25:01.719043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.793 [2024-07-25 15:25:01.719052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.793 qpair failed and we were unable to recover it. 00:29:09.793 [2024-07-25 15:25:01.719611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.793 [2024-07-25 15:25:01.719639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.793 qpair failed and we were unable to recover it. 00:29:09.793 [2024-07-25 15:25:01.720098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.793 [2024-07-25 15:25:01.720108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.793 qpair failed and we were unable to recover it. 00:29:09.793 [2024-07-25 15:25:01.720657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.793 [2024-07-25 15:25:01.720686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.793 qpair failed and we were unable to recover it. 00:29:09.793 [2024-07-25 15:25:01.721152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.793 [2024-07-25 15:25:01.721163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.793 qpair failed and we were unable to recover it. 
00:29:09.793 [2024-07-25 15:25:01.721611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.793 [2024-07-25 15:25:01.721620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.793 qpair failed and we were unable to recover it. 00:29:09.793 [2024-07-25 15:25:01.722075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.793 [2024-07-25 15:25:01.722083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.793 qpair failed and we were unable to recover it. 00:29:09.793 [2024-07-25 15:25:01.722643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.793 [2024-07-25 15:25:01.722672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.793 qpair failed and we were unable to recover it. 00:29:09.793 [2024-07-25 15:25:01.723141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.793 [2024-07-25 15:25:01.723150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.793 qpair failed and we were unable to recover it. 00:29:09.793 [2024-07-25 15:25:01.723675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.793 [2024-07-25 15:25:01.723704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.793 qpair failed and we were unable to recover it. 
00:29:09.793 [2024-07-25 15:25:01.724163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.793 [2024-07-25 15:25:01.724174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.794 qpair failed and we were unable to recover it. 00:29:09.794 [2024-07-25 15:25:01.724712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.794 [2024-07-25 15:25:01.724742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.794 qpair failed and we were unable to recover it. 00:29:09.794 [2024-07-25 15:25:01.725210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.794 [2024-07-25 15:25:01.725222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.794 qpair failed and we were unable to recover it. 00:29:09.794 [2024-07-25 15:25:01.725743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.794 [2024-07-25 15:25:01.725771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.794 qpair failed and we were unable to recover it. 00:29:09.794 [2024-07-25 15:25:01.726405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.794 [2024-07-25 15:25:01.726434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.794 qpair failed and we were unable to recover it. 
00:29:09.794 [2024-07-25 15:25:01.726896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.794 [2024-07-25 15:25:01.726906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.794 qpair failed and we were unable to recover it. 00:29:09.794 [2024-07-25 15:25:01.727461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.794 [2024-07-25 15:25:01.727490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.794 qpair failed and we were unable to recover it. 00:29:09.794 [2024-07-25 15:25:01.727965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.794 [2024-07-25 15:25:01.727975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.794 qpair failed and we were unable to recover it. 00:29:09.794 [2024-07-25 15:25:01.728540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.794 [2024-07-25 15:25:01.728569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.794 qpair failed and we were unable to recover it. 00:29:09.794 [2024-07-25 15:25:01.728923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.794 [2024-07-25 15:25:01.728932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.794 qpair failed and we were unable to recover it. 
00:29:09.794 [2024-07-25 15:25:01.729487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.794 [2024-07-25 15:25:01.729516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.794 qpair failed and we were unable to recover it. 00:29:09.794 [2024-07-25 15:25:01.729990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.794 [2024-07-25 15:25:01.730000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.794 qpair failed and we were unable to recover it. 00:29:09.794 [2024-07-25 15:25:01.730530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.794 [2024-07-25 15:25:01.730559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.794 qpair failed and we were unable to recover it. 00:29:09.794 [2024-07-25 15:25:01.731019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.794 [2024-07-25 15:25:01.731029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.794 qpair failed and we were unable to recover it. 00:29:09.794 [2024-07-25 15:25:01.731569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.794 [2024-07-25 15:25:01.731598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.794 qpair failed and we were unable to recover it. 
00:29:09.794 [2024-07-25 15:25:01.731954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.794 [2024-07-25 15:25:01.731963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.794 qpair failed and we were unable to recover it. 00:29:09.794 [2024-07-25 15:25:01.732413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.794 [2024-07-25 15:25:01.732442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.794 qpair failed and we were unable to recover it. 00:29:09.794 [2024-07-25 15:25:01.732902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.794 [2024-07-25 15:25:01.732913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.794 qpair failed and we were unable to recover it. 00:29:09.794 [2024-07-25 15:25:01.733457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.794 [2024-07-25 15:25:01.733486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.794 qpair failed and we were unable to recover it. 00:29:09.794 [2024-07-25 15:25:01.733964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.794 [2024-07-25 15:25:01.733973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.794 qpair failed and we were unable to recover it. 
00:29:09.794 [2024-07-25 15:25:01.734518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.794 [2024-07-25 15:25:01.734546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.794 qpair failed and we were unable to recover it. 00:29:09.794 [2024-07-25 15:25:01.735012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.794 [2024-07-25 15:25:01.735022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.794 qpair failed and we were unable to recover it. 00:29:09.794 [2024-07-25 15:25:01.735571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.794 [2024-07-25 15:25:01.735600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.794 qpair failed and we were unable to recover it. 00:29:09.794 [2024-07-25 15:25:01.736076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.794 [2024-07-25 15:25:01.736086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.794 qpair failed and we were unable to recover it. 00:29:09.794 [2024-07-25 15:25:01.736573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.794 [2024-07-25 15:25:01.736585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.794 qpair failed and we were unable to recover it. 
00:29:09.794 [2024-07-25 15:25:01.736945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.794 [2024-07-25 15:25:01.736954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.794 qpair failed and we were unable to recover it. 00:29:09.794 [2024-07-25 15:25:01.737492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.794 [2024-07-25 15:25:01.737521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.794 qpair failed and we were unable to recover it. 00:29:09.794 [2024-07-25 15:25:01.737996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.794 [2024-07-25 15:25:01.738006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.794 qpair failed and we were unable to recover it. 00:29:09.794 [2024-07-25 15:25:01.738559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.794 [2024-07-25 15:25:01.738589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.794 qpair failed and we were unable to recover it. 00:29:09.794 [2024-07-25 15:25:01.739050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.794 [2024-07-25 15:25:01.739059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.794 qpair failed and we were unable to recover it. 
00:29:09.794 [2024-07-25 15:25:01.739603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.794 [2024-07-25 15:25:01.739631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.794 qpair failed and we were unable to recover it. 00:29:09.794 [2024-07-25 15:25:01.740108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.794 [2024-07-25 15:25:01.740117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.794 qpair failed and we were unable to recover it. 00:29:09.794 [2024-07-25 15:25:01.740564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.794 [2024-07-25 15:25:01.740592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.794 qpair failed and we were unable to recover it. 00:29:09.794 [2024-07-25 15:25:01.741058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.794 [2024-07-25 15:25:01.741068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.794 qpair failed and we were unable to recover it. 00:29:09.794 [2024-07-25 15:25:01.741605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.794 [2024-07-25 15:25:01.741635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.794 qpair failed and we were unable to recover it. 
00:29:09.794 [2024-07-25 15:25:01.742117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.794 [2024-07-25 15:25:01.742127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.794 qpair failed and we were unable to recover it.
00:29:09.794 [2024-07-25 15:25:01.742670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.795 [2024-07-25 15:25:01.742699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.795 qpair failed and we were unable to recover it.
00:29:09.795 [2024-07-25 15:25:01.743161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.795 [2024-07-25 15:25:01.743170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.795 qpair failed and we were unable to recover it.
00:29:09.795 [2024-07-25 15:25:01.743709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.795 [2024-07-25 15:25:01.743738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.795 qpair failed and we were unable to recover it.
00:29:09.795 [2024-07-25 15:25:01.744214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.795 [2024-07-25 15:25:01.744225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.795 qpair failed and we were unable to recover it.
00:29:09.795 [2024-07-25 15:25:01.744688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.795 [2024-07-25 15:25:01.744697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.795 qpair failed and we were unable to recover it.
00:29:09.795 [2024-07-25 15:25:01.745159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.795 [2024-07-25 15:25:01.745167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.795 qpair failed and we were unable to recover it.
00:29:09.795 [2024-07-25 15:25:01.745725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.795 [2024-07-25 15:25:01.745755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.795 qpair failed and we were unable to recover it.
00:29:09.795 [2024-07-25 15:25:01.746413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.795 [2024-07-25 15:25:01.746443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.795 qpair failed and we were unable to recover it.
00:29:09.795 [2024-07-25 15:25:01.746905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.795 [2024-07-25 15:25:01.746915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.795 qpair failed and we were unable to recover it.
00:29:09.795 [2024-07-25 15:25:01.747468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.795 [2024-07-25 15:25:01.747497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.795 qpair failed and we were unable to recover it.
00:29:09.795 [2024-07-25 15:25:01.747950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.795 [2024-07-25 15:25:01.747960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.795 qpair failed and we were unable to recover it.
00:29:09.795 [2024-07-25 15:25:01.748502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.795 [2024-07-25 15:25:01.748531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.795 qpair failed and we were unable to recover it.
00:29:09.795 [2024-07-25 15:25:01.748989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.795 [2024-07-25 15:25:01.748999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.795 qpair failed and we were unable to recover it.
00:29:09.795 [2024-07-25 15:25:01.749552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.795 [2024-07-25 15:25:01.749582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.795 qpair failed and we were unable to recover it.
00:29:09.795 [2024-07-25 15:25:01.750044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.795 [2024-07-25 15:25:01.750054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.795 qpair failed and we were unable to recover it.
00:29:09.795 [2024-07-25 15:25:01.750657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.795 [2024-07-25 15:25:01.750687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.795 qpair failed and we were unable to recover it.
00:29:09.795 [2024-07-25 15:25:01.751145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.795 [2024-07-25 15:25:01.751155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.795 qpair failed and we were unable to recover it.
00:29:09.795 [2024-07-25 15:25:01.751700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.795 [2024-07-25 15:25:01.751730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.795 qpair failed and we were unable to recover it.
00:29:09.795 [2024-07-25 15:25:01.752195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.795 [2024-07-25 15:25:01.752214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.795 qpair failed and we were unable to recover it.
00:29:09.795 [2024-07-25 15:25:01.752772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.795 [2024-07-25 15:25:01.752802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.795 qpair failed and we were unable to recover it.
00:29:09.795 [2024-07-25 15:25:01.753406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.795 [2024-07-25 15:25:01.753435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.795 qpair failed and we were unable to recover it.
00:29:09.795 [2024-07-25 15:25:01.753897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.795 [2024-07-25 15:25:01.753907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.795 qpair failed and we were unable to recover it.
00:29:09.795 [2024-07-25 15:25:01.754496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.795 [2024-07-25 15:25:01.754526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.795 qpair failed and we were unable to recover it.
00:29:09.795 [2024-07-25 15:25:01.754745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.795 [2024-07-25 15:25:01.754758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.795 qpair failed and we were unable to recover it.
00:29:09.795 [2024-07-25 15:25:01.755213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.795 [2024-07-25 15:25:01.755223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.795 qpair failed and we were unable to recover it.
00:29:09.795 [2024-07-25 15:25:01.755675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.795 [2024-07-25 15:25:01.755685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.795 qpair failed and we were unable to recover it.
00:29:09.795 [2024-07-25 15:25:01.755903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.795 [2024-07-25 15:25:01.755913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.795 qpair failed and we were unable to recover it.
00:29:09.795 [2024-07-25 15:25:01.756376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.795 [2024-07-25 15:25:01.756385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.795 qpair failed and we were unable to recover it.
00:29:09.795 [2024-07-25 15:25:01.756836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.795 [2024-07-25 15:25:01.756848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.795 qpair failed and we were unable to recover it.
00:29:09.795 [2024-07-25 15:25:01.757302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.795 [2024-07-25 15:25:01.757311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.795 qpair failed and we were unable to recover it.
00:29:09.795 [2024-07-25 15:25:01.757766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.795 [2024-07-25 15:25:01.757774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.795 qpair failed and we were unable to recover it.
00:29:09.795 [2024-07-25 15:25:01.758247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.795 [2024-07-25 15:25:01.758256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.795 qpair failed and we were unable to recover it.
00:29:09.795 [2024-07-25 15:25:01.758706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.795 [2024-07-25 15:25:01.758715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.795 qpair failed and we were unable to recover it.
00:29:09.795 [2024-07-25 15:25:01.759165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.795 [2024-07-25 15:25:01.759174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.795 qpair failed and we were unable to recover it.
00:29:09.795 [2024-07-25 15:25:01.759621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.795 [2024-07-25 15:25:01.759630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.795 qpair failed and we were unable to recover it.
00:29:09.795 [2024-07-25 15:25:01.760060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.795 [2024-07-25 15:25:01.760069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.795 qpair failed and we were unable to recover it.
00:29:09.795 [2024-07-25 15:25:01.760603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.796 [2024-07-25 15:25:01.760633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.796 qpair failed and we were unable to recover it.
00:29:09.796 [2024-07-25 15:25:01.761089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.796 [2024-07-25 15:25:01.761099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.796 qpair failed and we were unable to recover it.
00:29:09.796 [2024-07-25 15:25:01.761577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.796 [2024-07-25 15:25:01.761587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.796 qpair failed and we were unable to recover it.
00:29:09.796 [2024-07-25 15:25:01.762044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.796 [2024-07-25 15:25:01.762053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.796 qpair failed and we were unable to recover it.
00:29:09.796 [2024-07-25 15:25:01.762633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.796 [2024-07-25 15:25:01.762662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.796 qpair failed and we were unable to recover it.
00:29:09.796 [2024-07-25 15:25:01.763121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.796 [2024-07-25 15:25:01.763132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.796 qpair failed and we were unable to recover it.
00:29:09.796 [2024-07-25 15:25:01.763683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.796 [2024-07-25 15:25:01.763712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.796 qpair failed and we were unable to recover it.
00:29:09.796 [2024-07-25 15:25:01.764159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.796 [2024-07-25 15:25:01.764169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.796 qpair failed and we were unable to recover it.
00:29:09.796 [2024-07-25 15:25:01.764777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.796 [2024-07-25 15:25:01.764806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.796 qpair failed and we were unable to recover it.
00:29:09.796 [2024-07-25 15:25:01.765394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.796 [2024-07-25 15:25:01.765422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.796 qpair failed and we were unable to recover it.
00:29:09.796 [2024-07-25 15:25:01.765885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.796 [2024-07-25 15:25:01.765896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.796 qpair failed and we were unable to recover it.
00:29:09.796 [2024-07-25 15:25:01.766464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.796 [2024-07-25 15:25:01.766493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.796 qpair failed and we were unable to recover it.
00:29:09.796 [2024-07-25 15:25:01.766954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.796 [2024-07-25 15:25:01.766964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.796 qpair failed and we were unable to recover it.
00:29:09.796 [2024-07-25 15:25:01.767495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.796 [2024-07-25 15:25:01.767524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.796 qpair failed and we were unable to recover it.
00:29:09.796 [2024-07-25 15:25:01.767875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.796 [2024-07-25 15:25:01.767886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.796 qpair failed and we were unable to recover it.
00:29:09.796 [2024-07-25 15:25:01.768350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.796 [2024-07-25 15:25:01.768359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.796 qpair failed and we were unable to recover it.
00:29:09.796 [2024-07-25 15:25:01.768814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.796 [2024-07-25 15:25:01.768822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.796 qpair failed and we were unable to recover it.
00:29:09.796 [2024-07-25 15:25:01.769273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.796 [2024-07-25 15:25:01.769282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.796 qpair failed and we were unable to recover it.
00:29:09.796 [2024-07-25 15:25:01.769731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.796 [2024-07-25 15:25:01.769739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.796 qpair failed and we were unable to recover it.
00:29:09.796 [2024-07-25 15:25:01.770184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.796 [2024-07-25 15:25:01.770192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.796 qpair failed and we were unable to recover it.
00:29:09.796 [2024-07-25 15:25:01.770640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.796 [2024-07-25 15:25:01.770648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.796 qpair failed and we were unable to recover it.
00:29:09.796 [2024-07-25 15:25:01.771113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.796 [2024-07-25 15:25:01.771121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.796 qpair failed and we were unable to recover it.
00:29:09.796 [2024-07-25 15:25:01.771585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.796 [2024-07-25 15:25:01.771595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.796 qpair failed and we were unable to recover it.
00:29:09.796 [2024-07-25 15:25:01.772030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.796 [2024-07-25 15:25:01.772039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.796 qpair failed and we were unable to recover it.
00:29:09.796 [2024-07-25 15:25:01.772592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.796 [2024-07-25 15:25:01.772622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.796 qpair failed and we were unable to recover it.
00:29:09.796 [2024-07-25 15:25:01.773080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.796 [2024-07-25 15:25:01.773090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.796 qpair failed and we were unable to recover it.
00:29:09.796 [2024-07-25 15:25:01.773558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.796 [2024-07-25 15:25:01.773567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.796 qpair failed and we were unable to recover it.
00:29:09.796 [2024-07-25 15:25:01.774034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.796 [2024-07-25 15:25:01.774042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.796 qpair failed and we were unable to recover it.
00:29:09.796 [2024-07-25 15:25:01.774587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.796 [2024-07-25 15:25:01.774616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.796 qpair failed and we were unable to recover it.
00:29:09.796 [2024-07-25 15:25:01.775087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.796 [2024-07-25 15:25:01.775097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.796 qpair failed and we were unable to recover it.
00:29:09.796 [2024-07-25 15:25:01.775574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.797 [2024-07-25 15:25:01.775584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.797 qpair failed and we were unable to recover it.
00:29:09.797 [2024-07-25 15:25:01.776059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.797 [2024-07-25 15:25:01.776067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.797 qpair failed and we were unable to recover it.
00:29:09.797 [2024-07-25 15:25:01.776610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.797 [2024-07-25 15:25:01.776643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.797 qpair failed and we were unable to recover it.
00:29:09.797 [2024-07-25 15:25:01.777160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.797 [2024-07-25 15:25:01.777170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.797 qpair failed and we were unable to recover it.
00:29:09.797 [2024-07-25 15:25:01.777705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.797 [2024-07-25 15:25:01.777735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.797 qpair failed and we were unable to recover it.
00:29:09.797 [2024-07-25 15:25:01.778224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.797 [2024-07-25 15:25:01.778242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.797 qpair failed and we were unable to recover it.
00:29:09.797 [2024-07-25 15:25:01.778650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.797 [2024-07-25 15:25:01.778659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.797 qpair failed and we were unable to recover it.
00:29:09.797 [2024-07-25 15:25:01.779153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.797 [2024-07-25 15:25:01.779161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.797 qpair failed and we were unable to recover it.
00:29:09.797 [2024-07-25 15:25:01.779601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.797 [2024-07-25 15:25:01.779610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.797 qpair failed and we were unable to recover it.
00:29:09.797 [2024-07-25 15:25:01.780083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.797 [2024-07-25 15:25:01.780091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.797 qpair failed and we were unable to recover it.
00:29:09.797 [2024-07-25 15:25:01.780564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.797 [2024-07-25 15:25:01.780572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.797 qpair failed and we were unable to recover it.
00:29:09.797 [2024-07-25 15:25:01.781025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.797 [2024-07-25 15:25:01.781034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.797 qpair failed and we were unable to recover it.
00:29:09.797 [2024-07-25 15:25:01.781574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.797 [2024-07-25 15:25:01.781603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.797 qpair failed and we were unable to recover it.
00:29:09.797 [2024-07-25 15:25:01.782075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.797 [2024-07-25 15:25:01.782086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.797 qpair failed and we were unable to recover it.
00:29:09.797 [2024-07-25 15:25:01.782554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.797 [2024-07-25 15:25:01.782563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.797 qpair failed and we were unable to recover it.
00:29:09.797 [2024-07-25 15:25:01.783021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.797 [2024-07-25 15:25:01.783028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.797 qpair failed and we were unable to recover it.
00:29:09.797 [2024-07-25 15:25:01.783570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.797 [2024-07-25 15:25:01.783599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.797 qpair failed and we were unable to recover it.
00:29:09.797 [2024-07-25 15:25:01.784080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.797 [2024-07-25 15:25:01.784089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.797 qpair failed and we were unable to recover it.
00:29:09.797 [2024-07-25 15:25:01.784562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.797 [2024-07-25 15:25:01.784571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.797 qpair failed and we were unable to recover it.
00:29:09.797 [2024-07-25 15:25:01.785024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.797 [2024-07-25 15:25:01.785032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.797 qpair failed and we were unable to recover it.
00:29:09.797 [2024-07-25 15:25:01.785569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.797 [2024-07-25 15:25:01.785597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.797 qpair failed and we were unable to recover it.
00:29:09.797 [2024-07-25 15:25:01.786080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.797 [2024-07-25 15:25:01.786091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.797 qpair failed and we were unable to recover it.
00:29:09.797 [2024-07-25 15:25:01.786657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.797 [2024-07-25 15:25:01.786666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.797 qpair failed and we were unable to recover it.
00:29:09.797 [2024-07-25 15:25:01.787113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.797 [2024-07-25 15:25:01.787121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.797 qpair failed and we were unable to recover it.
00:29:09.797 [2024-07-25 15:25:01.787680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.797 [2024-07-25 15:25:01.787708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.797 qpair failed and we were unable to recover it.
00:29:09.797 [2024-07-25 15:25:01.788191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.797 [2024-07-25 15:25:01.788205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.797 qpair failed and we were unable to recover it.
00:29:09.797 [2024-07-25 15:25:01.788655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.797 [2024-07-25 15:25:01.788684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.797 qpair failed and we were unable to recover it.
00:29:09.797 [2024-07-25 15:25:01.789145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.797 [2024-07-25 15:25:01.789156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.797 qpair failed and we were unable to recover it.
00:29:09.797 [2024-07-25 15:25:01.789741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.797 [2024-07-25 15:25:01.789770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.797 qpair failed and we were unable to recover it.
00:29:09.797 [2024-07-25 15:25:01.790409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.797 [2024-07-25 15:25:01.790438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.797 qpair failed and we were unable to recover it.
00:29:09.797 [2024-07-25 15:25:01.790884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.797 [2024-07-25 15:25:01.790894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.797 qpair failed and we were unable to recover it.
00:29:09.797 [2024-07-25 15:25:01.791453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.797 [2024-07-25 15:25:01.791481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.797 qpair failed and we were unable to recover it.
00:29:09.797 [2024-07-25 15:25:01.791948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.797 [2024-07-25 15:25:01.791957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.798 qpair failed and we were unable to recover it.
00:29:09.798 [2024-07-25 15:25:01.792527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.798 [2024-07-25 15:25:01.792556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.798 qpair failed and we were unable to recover it.
00:29:09.798 [2024-07-25 15:25:01.793015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.798 [2024-07-25 15:25:01.793025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.798 qpair failed and we were unable to recover it.
00:29:09.798 [2024-07-25 15:25:01.793577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.798 [2024-07-25 15:25:01.793608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.798 qpair failed and we were unable to recover it.
00:29:09.798 [2024-07-25 15:25:01.794073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.798 [2024-07-25 15:25:01.794083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.798 qpair failed and we were unable to recover it.
00:29:09.798 [2024-07-25 15:25:01.794512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.798 [2024-07-25 15:25:01.794521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.798 qpair failed and we were unable to recover it.
00:29:09.798 [2024-07-25 15:25:01.794971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.798 [2024-07-25 15:25:01.794980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.798 qpair failed and we were unable to recover it.
00:29:09.798 [2024-07-25 15:25:01.795394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.798 [2024-07-25 15:25:01.795424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.798 qpair failed and we were unable to recover it.
00:29:09.798 [2024-07-25 15:25:01.795877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.798 [2024-07-25 15:25:01.795888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.798 qpair failed and we were unable to recover it.
00:29:09.798 [2024-07-25 15:25:01.796465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.798 [2024-07-25 15:25:01.796493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.798 qpair failed and we were unable to recover it.
00:29:09.798 [2024-07-25 15:25:01.796849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.798 [2024-07-25 15:25:01.796862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.798 qpair failed and we were unable to recover it.
00:29:09.798 [2024-07-25 15:25:01.797312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.798 [2024-07-25 15:25:01.797321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.798 qpair failed and we were unable to recover it.
00:29:09.798 [2024-07-25 15:25:01.797638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.798 [2024-07-25 15:25:01.797648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.798 qpair failed and we were unable to recover it.
00:29:09.798 [2024-07-25 15:25:01.797871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.798 [2024-07-25 15:25:01.797884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.798 qpair failed and we were unable to recover it.
00:29:09.798 [2024-07-25 15:25:01.798390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.798 [2024-07-25 15:25:01.798399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.798 qpair failed and we were unable to recover it.
00:29:09.798 [2024-07-25 15:25:01.798854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.798 [2024-07-25 15:25:01.798862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.798 qpair failed and we were unable to recover it.
00:29:09.798 [2024-07-25 15:25:01.799319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.798 [2024-07-25 15:25:01.799328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.798 qpair failed and we were unable to recover it. 00:29:09.798 [2024-07-25 15:25:01.799799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.798 [2024-07-25 15:25:01.799807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.798 qpair failed and we were unable to recover it. 00:29:09.798 [2024-07-25 15:25:01.800335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.798 [2024-07-25 15:25:01.800345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.798 qpair failed and we were unable to recover it. 00:29:09.798 [2024-07-25 15:25:01.800775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.798 [2024-07-25 15:25:01.800784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.798 qpair failed and we were unable to recover it. 00:29:09.798 [2024-07-25 15:25:01.801239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.798 [2024-07-25 15:25:01.801247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.798 qpair failed and we were unable to recover it. 
00:29:09.798 [2024-07-25 15:25:01.801476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.798 [2024-07-25 15:25:01.801486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.798 qpair failed and we were unable to recover it. 00:29:09.798 [2024-07-25 15:25:01.801822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.798 [2024-07-25 15:25:01.801830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.798 qpair failed and we were unable to recover it. 00:29:09.798 [2024-07-25 15:25:01.802368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.798 [2024-07-25 15:25:01.802376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.798 qpair failed and we were unable to recover it. 00:29:09.798 [2024-07-25 15:25:01.802585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.798 [2024-07-25 15:25:01.802594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.798 qpair failed and we were unable to recover it. 00:29:09.798 [2024-07-25 15:25:01.803053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.798 [2024-07-25 15:25:01.803061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.798 qpair failed and we were unable to recover it. 
00:29:09.798 [2024-07-25 15:25:01.803515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.798 [2024-07-25 15:25:01.803524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.798 qpair failed and we were unable to recover it. 00:29:09.798 [2024-07-25 15:25:01.803965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.798 [2024-07-25 15:25:01.803973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.798 qpair failed and we were unable to recover it. 00:29:09.798 [2024-07-25 15:25:01.804425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.798 [2024-07-25 15:25:01.804434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.798 qpair failed and we were unable to recover it. 00:29:09.798 [2024-07-25 15:25:01.804863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.798 [2024-07-25 15:25:01.804871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.798 qpair failed and we were unable to recover it. 00:29:09.798 [2024-07-25 15:25:01.805227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.798 [2024-07-25 15:25:01.805235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.798 qpair failed and we were unable to recover it. 
00:29:09.798 [2024-07-25 15:25:01.805694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.798 [2024-07-25 15:25:01.805703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.799 qpair failed and we were unable to recover it. 00:29:09.799 [2024-07-25 15:25:01.806161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.799 [2024-07-25 15:25:01.806170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.799 qpair failed and we were unable to recover it. 00:29:09.799 [2024-07-25 15:25:01.806622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.799 [2024-07-25 15:25:01.806632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.799 qpair failed and we were unable to recover it. 00:29:09.799 [2024-07-25 15:25:01.806989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.799 [2024-07-25 15:25:01.806998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.799 qpair failed and we were unable to recover it. 00:29:09.799 [2024-07-25 15:25:01.807541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.799 [2024-07-25 15:25:01.807571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.799 qpair failed and we were unable to recover it. 
00:29:09.799 [2024-07-25 15:25:01.808043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.799 [2024-07-25 15:25:01.808054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.799 qpair failed and we were unable to recover it. 00:29:09.799 [2024-07-25 15:25:01.808572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.799 [2024-07-25 15:25:01.808602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.799 qpair failed and we were unable to recover it. 00:29:09.799 [2024-07-25 15:25:01.809073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.799 [2024-07-25 15:25:01.809083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.799 qpair failed and we were unable to recover it. 00:29:09.799 [2024-07-25 15:25:01.809542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.799 [2024-07-25 15:25:01.809552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.799 qpair failed and we were unable to recover it. 00:29:09.799 [2024-07-25 15:25:01.810000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.799 [2024-07-25 15:25:01.810009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.799 qpair failed and we were unable to recover it. 
00:29:09.799 [2024-07-25 15:25:01.810579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.799 [2024-07-25 15:25:01.810610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.799 qpair failed and we were unable to recover it. 00:29:09.799 [2024-07-25 15:25:01.811073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.799 [2024-07-25 15:25:01.811083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.799 qpair failed and we were unable to recover it. 00:29:09.799 [2024-07-25 15:25:01.811536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.799 [2024-07-25 15:25:01.811546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.799 qpair failed and we were unable to recover it. 00:29:09.799 [2024-07-25 15:25:01.811995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.799 [2024-07-25 15:25:01.812004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.799 qpair failed and we were unable to recover it. 00:29:09.799 [2024-07-25 15:25:01.812548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.799 [2024-07-25 15:25:01.812578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.799 qpair failed and we were unable to recover it. 
00:29:09.799 [2024-07-25 15:25:01.813139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.799 [2024-07-25 15:25:01.813150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.799 qpair failed and we were unable to recover it. 00:29:09.799 [2024-07-25 15:25:01.813698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.799 [2024-07-25 15:25:01.813728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.799 qpair failed and we were unable to recover it. 00:29:09.799 [2024-07-25 15:25:01.814420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.799 [2024-07-25 15:25:01.814450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.799 qpair failed and we were unable to recover it. 00:29:09.799 [2024-07-25 15:25:01.814975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.799 [2024-07-25 15:25:01.814986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.799 qpair failed and we were unable to recover it. 00:29:09.799 [2024-07-25 15:25:01.815526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.799 [2024-07-25 15:25:01.815561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.799 qpair failed and we were unable to recover it. 
00:29:09.799 [2024-07-25 15:25:01.816065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.799 [2024-07-25 15:25:01.816076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.799 qpair failed and we were unable to recover it. 00:29:09.799 [2024-07-25 15:25:01.816603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.799 [2024-07-25 15:25:01.816632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.799 qpair failed and we were unable to recover it. 00:29:09.799 [2024-07-25 15:25:01.817122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.799 [2024-07-25 15:25:01.817132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.799 qpair failed and we were unable to recover it. 00:29:09.799 [2024-07-25 15:25:01.817591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.799 [2024-07-25 15:25:01.817619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.799 qpair failed and we were unable to recover it. 00:29:09.799 [2024-07-25 15:25:01.818079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.799 [2024-07-25 15:25:01.818089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.799 qpair failed and we were unable to recover it. 
00:29:09.799 [2024-07-25 15:25:01.818552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.799 [2024-07-25 15:25:01.818560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.799 qpair failed and we were unable to recover it. 00:29:09.799 [2024-07-25 15:25:01.819039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.799 [2024-07-25 15:25:01.819047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.799 qpair failed and we were unable to recover it. 00:29:09.799 [2024-07-25 15:25:01.819592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.799 [2024-07-25 15:25:01.819621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.799 qpair failed and we were unable to recover it. 00:29:09.799 [2024-07-25 15:25:01.820081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.799 [2024-07-25 15:25:01.820091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.799 qpair failed and we were unable to recover it. 00:29:09.799 [2024-07-25 15:25:01.820537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.799 [2024-07-25 15:25:01.820546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.799 qpair failed and we were unable to recover it. 
00:29:09.799 [2024-07-25 15:25:01.821003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.799 [2024-07-25 15:25:01.821012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.799 qpair failed and we were unable to recover it. 00:29:09.799 [2024-07-25 15:25:01.821543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.799 [2024-07-25 15:25:01.821571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.799 qpair failed and we were unable to recover it. 00:29:09.799 [2024-07-25 15:25:01.822036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.799 [2024-07-25 15:25:01.822046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.799 qpair failed and we were unable to recover it. 00:29:09.799 [2024-07-25 15:25:01.822500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.799 [2024-07-25 15:25:01.822529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.799 qpair failed and we were unable to recover it. 00:29:09.799 [2024-07-25 15:25:01.823012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.799 [2024-07-25 15:25:01.823022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.799 qpair failed and we were unable to recover it. 
00:29:09.799 [2024-07-25 15:25:01.823441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.799 [2024-07-25 15:25:01.823470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.799 qpair failed and we were unable to recover it. 00:29:09.799 [2024-07-25 15:25:01.823944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.800 [2024-07-25 15:25:01.823954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.800 qpair failed and we were unable to recover it. 00:29:09.800 [2024-07-25 15:25:01.824516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.800 [2024-07-25 15:25:01.824544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.800 qpair failed and we were unable to recover it. 00:29:09.800 [2024-07-25 15:25:01.824899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.800 [2024-07-25 15:25:01.824910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.800 qpair failed and we were unable to recover it. 00:29:09.800 [2024-07-25 15:25:01.825508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.800 [2024-07-25 15:25:01.825536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.800 qpair failed and we were unable to recover it. 
00:29:09.800 [2024-07-25 15:25:01.825998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.800 [2024-07-25 15:25:01.826008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.800 qpair failed and we were unable to recover it. 00:29:09.800 [2024-07-25 15:25:01.826556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.800 [2024-07-25 15:25:01.826586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.800 qpair failed and we were unable to recover it. 00:29:09.800 [2024-07-25 15:25:01.827071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.800 [2024-07-25 15:25:01.827081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.800 qpair failed and we were unable to recover it. 00:29:09.800 [2024-07-25 15:25:01.827621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.800 [2024-07-25 15:25:01.827650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.800 qpair failed and we were unable to recover it. 00:29:09.800 [2024-07-25 15:25:01.828148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.800 [2024-07-25 15:25:01.828158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.800 qpair failed and we were unable to recover it. 
00:29:09.800 [2024-07-25 15:25:01.828699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.800 [2024-07-25 15:25:01.828728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.800 qpair failed and we were unable to recover it. 00:29:09.800 [2024-07-25 15:25:01.829209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.800 [2024-07-25 15:25:01.829219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.800 qpair failed and we were unable to recover it. 00:29:09.800 [2024-07-25 15:25:01.829789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.800 [2024-07-25 15:25:01.829818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.800 qpair failed and we were unable to recover it. 00:29:09.800 [2024-07-25 15:25:01.830110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.800 [2024-07-25 15:25:01.830120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.800 qpair failed and we were unable to recover it. 00:29:09.800 [2024-07-25 15:25:01.830666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.800 [2024-07-25 15:25:01.830695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.800 qpair failed and we were unable to recover it. 
00:29:09.800 [2024-07-25 15:25:01.831154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.800 [2024-07-25 15:25:01.831163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.800 qpair failed and we were unable to recover it. 00:29:09.800 [2024-07-25 15:25:01.831706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.800 [2024-07-25 15:25:01.831737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.800 qpair failed and we were unable to recover it. 00:29:09.800 [2024-07-25 15:25:01.832195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.800 [2024-07-25 15:25:01.832222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.800 qpair failed and we were unable to recover it. 00:29:09.800 [2024-07-25 15:25:01.832669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.800 [2024-07-25 15:25:01.832677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.800 qpair failed and we were unable to recover it. 00:29:09.800 [2024-07-25 15:25:01.833151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.800 [2024-07-25 15:25:01.833159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.800 qpair failed and we were unable to recover it. 
00:29:09.800 [2024-07-25 15:25:01.833751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.800 [2024-07-25 15:25:01.833780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.800 qpair failed and we were unable to recover it. 00:29:09.800 [2024-07-25 15:25:01.834410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.800 [2024-07-25 15:25:01.834439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.800 qpair failed and we were unable to recover it. 00:29:09.800 [2024-07-25 15:25:01.834901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.800 [2024-07-25 15:25:01.834911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.800 qpair failed and we were unable to recover it. 00:29:09.800 [2024-07-25 15:25:01.835515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.800 [2024-07-25 15:25:01.835543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.800 qpair failed and we were unable to recover it. 00:29:09.800 [2024-07-25 15:25:01.835900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.800 [2024-07-25 15:25:01.835913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.800 qpair failed and we were unable to recover it. 
00:29:09.800 [2024-07-25 15:25:01.836495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.800 [2024-07-25 15:25:01.836524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:09.800 qpair failed and we were unable to recover it.
00:29:09.800 - 00:29:09.804 [2024-07-25 15:25:01.837031 - 15:25:01.894016] last 3 messages repeated 114 times (connect() failed, errno = 111; sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it)
00:29:09.804 [2024-07-25 15:25:01.894611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.804 [2024-07-25 15:25:01.894641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.804 qpair failed and we were unable to recover it. 00:29:09.804 [2024-07-25 15:25:01.895109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.804 [2024-07-25 15:25:01.895119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.804 qpair failed and we were unable to recover it. 00:29:09.804 [2024-07-25 15:25:01.895579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.804 [2024-07-25 15:25:01.895589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.804 qpair failed and we were unable to recover it. 00:29:09.804 [2024-07-25 15:25:01.896056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.804 [2024-07-25 15:25:01.896063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.804 qpair failed and we were unable to recover it. 00:29:09.804 [2024-07-25 15:25:01.896603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.804 [2024-07-25 15:25:01.896632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.804 qpair failed and we were unable to recover it. 
00:29:09.804 [2024-07-25 15:25:01.896847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.804 [2024-07-25 15:25:01.896856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.804 qpair failed and we were unable to recover it. 00:29:09.804 [2024-07-25 15:25:01.897459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.804 [2024-07-25 15:25:01.897488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.804 qpair failed and we were unable to recover it. 00:29:09.804 [2024-07-25 15:25:01.897972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.804 [2024-07-25 15:25:01.897981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.804 qpair failed and we were unable to recover it. 00:29:09.804 [2024-07-25 15:25:01.898561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.804 [2024-07-25 15:25:01.898589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.804 qpair failed and we were unable to recover it. 00:29:09.804 [2024-07-25 15:25:01.899049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.804 [2024-07-25 15:25:01.899060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.804 qpair failed and we were unable to recover it. 
00:29:09.804 [2024-07-25 15:25:01.899610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.804 [2024-07-25 15:25:01.899639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.804 qpair failed and we were unable to recover it. 00:29:09.804 [2024-07-25 15:25:01.900123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.804 [2024-07-25 15:25:01.900133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.804 qpair failed and we were unable to recover it. 00:29:09.804 [2024-07-25 15:25:01.900671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.804 [2024-07-25 15:25:01.900700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.804 qpair failed and we were unable to recover it. 00:29:09.804 [2024-07-25 15:25:01.901164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.804 [2024-07-25 15:25:01.901175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.804 qpair failed and we were unable to recover it. 00:29:09.804 [2024-07-25 15:25:01.901587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.804 [2024-07-25 15:25:01.901616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.804 qpair failed and we were unable to recover it. 
00:29:09.804 [2024-07-25 15:25:01.902092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.804 [2024-07-25 15:25:01.902102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.804 qpair failed and we were unable to recover it. 00:29:09.804 [2024-07-25 15:25:01.902580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.804 [2024-07-25 15:25:01.902590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.804 qpair failed and we were unable to recover it. 00:29:09.804 [2024-07-25 15:25:01.903038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.805 [2024-07-25 15:25:01.903047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.805 qpair failed and we were unable to recover it. 00:29:09.805 [2024-07-25 15:25:01.903594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.805 [2024-07-25 15:25:01.903623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.805 qpair failed and we were unable to recover it. 00:29:09.805 [2024-07-25 15:25:01.904089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.805 [2024-07-25 15:25:01.904098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.805 qpair failed and we were unable to recover it. 
00:29:09.805 [2024-07-25 15:25:01.904608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.805 [2024-07-25 15:25:01.904617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.805 qpair failed and we were unable to recover it. 00:29:09.805 [2024-07-25 15:25:01.905070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.805 [2024-07-25 15:25:01.905078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.805 qpair failed and we were unable to recover it. 00:29:09.805 [2024-07-25 15:25:01.905514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.805 [2024-07-25 15:25:01.905543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.805 qpair failed and we were unable to recover it. 00:29:09.805 [2024-07-25 15:25:01.906016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.805 [2024-07-25 15:25:01.906026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.805 qpair failed and we were unable to recover it. 00:29:09.805 [2024-07-25 15:25:01.906622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.805 [2024-07-25 15:25:01.906651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.805 qpair failed and we were unable to recover it. 
00:29:09.805 [2024-07-25 15:25:01.907113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.805 [2024-07-25 15:25:01.907122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.805 qpair failed and we were unable to recover it. 00:29:09.805 [2024-07-25 15:25:01.907702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.805 [2024-07-25 15:25:01.907732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.805 qpair failed and we were unable to recover it. 00:29:09.805 [2024-07-25 15:25:01.908209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.805 [2024-07-25 15:25:01.908219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.805 qpair failed and we were unable to recover it. 00:29:09.805 [2024-07-25 15:25:01.908742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.805 [2024-07-25 15:25:01.908770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.805 qpair failed and we were unable to recover it. 00:29:09.805 [2024-07-25 15:25:01.909406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.805 [2024-07-25 15:25:01.909435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.805 qpair failed and we were unable to recover it. 
00:29:09.805 [2024-07-25 15:25:01.909902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.805 [2024-07-25 15:25:01.909911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.805 qpair failed and we were unable to recover it. 00:29:09.805 [2024-07-25 15:25:01.910404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.805 [2024-07-25 15:25:01.910436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.805 qpair failed and we were unable to recover it. 00:29:09.805 [2024-07-25 15:25:01.910900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.805 [2024-07-25 15:25:01.910910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.805 qpair failed and we were unable to recover it. 00:29:09.805 [2024-07-25 15:25:01.911464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.805 [2024-07-25 15:25:01.911493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.805 qpair failed and we were unable to recover it. 00:29:09.805 [2024-07-25 15:25:01.911953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.805 [2024-07-25 15:25:01.911962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.805 qpair failed and we were unable to recover it. 
00:29:09.805 [2024-07-25 15:25:01.912528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.805 [2024-07-25 15:25:01.912557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.805 qpair failed and we were unable to recover it. 00:29:09.805 [2024-07-25 15:25:01.913028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.805 [2024-07-25 15:25:01.913038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.805 qpair failed and we were unable to recover it. 00:29:09.805 [2024-07-25 15:25:01.913591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.805 [2024-07-25 15:25:01.913621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.805 qpair failed and we were unable to recover it. 00:29:09.805 [2024-07-25 15:25:01.913843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.805 [2024-07-25 15:25:01.913855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.805 qpair failed and we were unable to recover it. 00:29:09.805 [2024-07-25 15:25:01.914327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.805 [2024-07-25 15:25:01.914335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.805 qpair failed and we were unable to recover it. 
00:29:09.805 [2024-07-25 15:25:01.914798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.805 [2024-07-25 15:25:01.914807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.805 qpair failed and we were unable to recover it. 00:29:09.805 [2024-07-25 15:25:01.915255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.805 [2024-07-25 15:25:01.915264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.805 qpair failed and we were unable to recover it. 00:29:09.805 [2024-07-25 15:25:01.915716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.805 [2024-07-25 15:25:01.915724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.805 qpair failed and we were unable to recover it. 00:29:09.805 [2024-07-25 15:25:01.916192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.805 [2024-07-25 15:25:01.916204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.805 qpair failed and we were unable to recover it. 00:29:09.805 [2024-07-25 15:25:01.916673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.805 [2024-07-25 15:25:01.916681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.805 qpair failed and we were unable to recover it. 
00:29:09.805 [2024-07-25 15:25:01.917158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.805 [2024-07-25 15:25:01.917166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.805 qpair failed and we were unable to recover it. 00:29:09.805 [2024-07-25 15:25:01.917557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.805 [2024-07-25 15:25:01.917566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.805 qpair failed and we were unable to recover it. 00:29:09.805 [2024-07-25 15:25:01.918041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.805 [2024-07-25 15:25:01.918049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.805 qpair failed and we were unable to recover it. 00:29:09.805 [2024-07-25 15:25:01.918602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.806 [2024-07-25 15:25:01.918631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.806 qpair failed and we were unable to recover it. 00:29:09.806 [2024-07-25 15:25:01.919093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.806 [2024-07-25 15:25:01.919102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.806 qpair failed and we were unable to recover it. 
00:29:09.806 [2024-07-25 15:25:01.919577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.806 [2024-07-25 15:25:01.919587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.806 qpair failed and we were unable to recover it. 00:29:09.806 [2024-07-25 15:25:01.919810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.806 [2024-07-25 15:25:01.919822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.806 qpair failed and we were unable to recover it. 00:29:09.806 [2024-07-25 15:25:01.920052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.806 [2024-07-25 15:25:01.920062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.806 qpair failed and we were unable to recover it. 00:29:09.806 [2024-07-25 15:25:01.920517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.806 [2024-07-25 15:25:01.920526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.806 qpair failed and we were unable to recover it. 00:29:09.806 [2024-07-25 15:25:01.920975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.806 [2024-07-25 15:25:01.920983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.806 qpair failed and we were unable to recover it. 
00:29:09.806 [2024-07-25 15:25:01.921386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.806 [2024-07-25 15:25:01.921415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.806 qpair failed and we were unable to recover it. 00:29:09.806 [2024-07-25 15:25:01.921637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.806 [2024-07-25 15:25:01.921650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.806 qpair failed and we were unable to recover it. 00:29:09.806 [2024-07-25 15:25:01.921854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.806 [2024-07-25 15:25:01.921865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.806 qpair failed and we were unable to recover it. 00:29:09.806 [2024-07-25 15:25:01.922275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.806 [2024-07-25 15:25:01.922284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.806 qpair failed and we were unable to recover it. 00:29:09.806 [2024-07-25 15:25:01.922777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.806 [2024-07-25 15:25:01.922786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.806 qpair failed and we were unable to recover it. 
00:29:09.806 [2024-07-25 15:25:01.923245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.806 [2024-07-25 15:25:01.923254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.806 qpair failed and we were unable to recover it. 00:29:09.806 [2024-07-25 15:25:01.923702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.806 [2024-07-25 15:25:01.923711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.806 qpair failed and we were unable to recover it. 00:29:09.806 [2024-07-25 15:25:01.924204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.806 [2024-07-25 15:25:01.924212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.806 qpair failed and we were unable to recover it. 00:29:09.806 [2024-07-25 15:25:01.924657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.806 [2024-07-25 15:25:01.924665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.806 qpair failed and we were unable to recover it. 00:29:09.806 [2024-07-25 15:25:01.925120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.806 [2024-07-25 15:25:01.925128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.806 qpair failed and we were unable to recover it. 
00:29:09.806 [2024-07-25 15:25:01.925580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.806 [2024-07-25 15:25:01.925590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.806 qpair failed and we were unable to recover it. 00:29:09.806 [2024-07-25 15:25:01.925936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.806 [2024-07-25 15:25:01.925944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.806 qpair failed and we were unable to recover it. 00:29:09.806 [2024-07-25 15:25:01.926418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.806 [2024-07-25 15:25:01.926426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.806 qpair failed and we were unable to recover it. 00:29:09.806 [2024-07-25 15:25:01.926955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.806 [2024-07-25 15:25:01.926964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.806 qpair failed and we were unable to recover it. 00:29:09.806 [2024-07-25 15:25:01.927514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.806 [2024-07-25 15:25:01.927543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.806 qpair failed and we were unable to recover it. 
00:29:09.806 [2024-07-25 15:25:01.928002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.806 [2024-07-25 15:25:01.928011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.806 qpair failed and we were unable to recover it. 00:29:09.806 [2024-07-25 15:25:01.928572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.806 [2024-07-25 15:25:01.928604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.806 qpair failed and we were unable to recover it. 00:29:09.806 [2024-07-25 15:25:01.929063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.806 [2024-07-25 15:25:01.929073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.806 qpair failed and we were unable to recover it. 00:29:09.806 [2024-07-25 15:25:01.929622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.806 [2024-07-25 15:25:01.929650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.806 qpair failed and we were unable to recover it. 00:29:09.806 [2024-07-25 15:25:01.930046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.806 [2024-07-25 15:25:01.930056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.806 qpair failed and we were unable to recover it. 
00:29:09.806 [2024-07-25 15:25:01.930614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.806 [2024-07-25 15:25:01.930643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.806 qpair failed and we were unable to recover it. 00:29:09.806 [2024-07-25 15:25:01.931103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.806 [2024-07-25 15:25:01.931113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.806 qpair failed and we were unable to recover it. 00:29:09.806 [2024-07-25 15:25:01.931648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.806 [2024-07-25 15:25:01.931677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.806 qpair failed and we were unable to recover it. 00:29:09.806 [2024-07-25 15:25:01.932139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.806 [2024-07-25 15:25:01.932149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.806 qpair failed and we were unable to recover it. 00:29:09.806 [2024-07-25 15:25:01.932724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.806 [2024-07-25 15:25:01.932753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.806 qpair failed and we were unable to recover it. 
00:29:09.806 [2024-07-25 15:25:01.933223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.807 [2024-07-25 15:25:01.933242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.807 qpair failed and we were unable to recover it. 00:29:09.807 [2024-07-25 15:25:01.933703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.807 [2024-07-25 15:25:01.933711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.807 qpair failed and we were unable to recover it. 00:29:09.807 [2024-07-25 15:25:01.934165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.807 [2024-07-25 15:25:01.934173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.807 qpair failed and we were unable to recover it. 00:29:09.807 [2024-07-25 15:25:01.934654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.807 [2024-07-25 15:25:01.934663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.807 qpair failed and we were unable to recover it. 00:29:09.807 [2024-07-25 15:25:01.935111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.807 [2024-07-25 15:25:01.935120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.807 qpair failed and we were unable to recover it. 
00:29:09.807 [2024-07-25 15:25:01.935577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.807 [2024-07-25 15:25:01.935586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.807 qpair failed and we were unable to recover it. 00:29:09.807 [2024-07-25 15:25:01.936036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.807 [2024-07-25 15:25:01.936044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.807 qpair failed and we were unable to recover it. 00:29:09.807 [2024-07-25 15:25:01.936499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.807 [2024-07-25 15:25:01.936528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.807 qpair failed and we were unable to recover it. 00:29:09.807 [2024-07-25 15:25:01.936988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.807 [2024-07-25 15:25:01.936998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.807 qpair failed and we were unable to recover it. 00:29:09.807 [2024-07-25 15:25:01.937549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.807 [2024-07-25 15:25:01.937578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.807 qpair failed and we were unable to recover it. 
00:29:09.807 [2024-07-25 15:25:01.938042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.807 [2024-07-25 15:25:01.938051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.807 qpair failed and we were unable to recover it. 00:29:09.807 [2024-07-25 15:25:01.938618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.807 [2024-07-25 15:25:01.938646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.807 qpair failed and we were unable to recover it. 00:29:09.807 [2024-07-25 15:25:01.939109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.807 [2024-07-25 15:25:01.939118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.807 qpair failed and we were unable to recover it. 00:29:09.807 [2024-07-25 15:25:01.939661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.807 [2024-07-25 15:25:01.939691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.807 qpair failed and we were unable to recover it. 00:29:09.807 [2024-07-25 15:25:01.940154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.807 [2024-07-25 15:25:01.940163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.807 qpair failed and we were unable to recover it. 
00:29:09.807 [2024-07-25 15:25:01.940701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.807 [2024-07-25 15:25:01.940730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.807 qpair failed and we were unable to recover it. 00:29:09.807 [2024-07-25 15:25:01.941196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.807 [2024-07-25 15:25:01.941220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.807 qpair failed and we were unable to recover it. 00:29:09.807 [2024-07-25 15:25:01.941406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.807 [2024-07-25 15:25:01.941414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.807 qpair failed and we were unable to recover it. 00:29:09.807 [2024-07-25 15:25:01.941837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.807 [2024-07-25 15:25:01.941845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.807 qpair failed and we were unable to recover it. 00:29:09.807 [2024-07-25 15:25:01.942401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.807 [2024-07-25 15:25:01.942430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.807 qpair failed and we were unable to recover it. 
00:29:09.807 [2024-07-25 15:25:01.942889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.807 [2024-07-25 15:25:01.942899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.807 qpair failed and we were unable to recover it. 00:29:09.807 [2024-07-25 15:25:01.943339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.807 [2024-07-25 15:25:01.943347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.807 qpair failed and we were unable to recover it. 00:29:09.807 [2024-07-25 15:25:01.943686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.807 [2024-07-25 15:25:01.943694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.807 qpair failed and we were unable to recover it. 00:29:09.807 [2024-07-25 15:25:01.944145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.807 [2024-07-25 15:25:01.944152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.807 qpair failed and we were unable to recover it. 00:29:09.807 [2024-07-25 15:25:01.944417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.807 [2024-07-25 15:25:01.944425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.807 qpair failed and we were unable to recover it. 
00:29:09.807 [2024-07-25 15:25:01.944866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.807 [2024-07-25 15:25:01.944874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.807 qpair failed and we were unable to recover it. 00:29:09.807 [2024-07-25 15:25:01.945223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.807 [2024-07-25 15:25:01.945232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.807 qpair failed and we were unable to recover it. 00:29:09.807 [2024-07-25 15:25:01.945578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.807 [2024-07-25 15:25:01.945586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.807 qpair failed and we were unable to recover it. 00:29:09.807 [2024-07-25 15:25:01.946038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.807 [2024-07-25 15:25:01.946046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.807 qpair failed and we were unable to recover it. 00:29:09.807 [2024-07-25 15:25:01.946498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.807 [2024-07-25 15:25:01.946506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.807 qpair failed and we were unable to recover it. 
00:29:09.808 [2024-07-25 15:25:01.946948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.808 [2024-07-25 15:25:01.946956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.808 qpair failed and we were unable to recover it. 00:29:09.808 [2024-07-25 15:25:01.947496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.808 [2024-07-25 15:25:01.947529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.808 qpair failed and we were unable to recover it. 00:29:09.808 [2024-07-25 15:25:01.948003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.808 [2024-07-25 15:25:01.948012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.808 qpair failed and we were unable to recover it. 00:29:09.808 [2024-07-25 15:25:01.948563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.808 [2024-07-25 15:25:01.948592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.808 qpair failed and we were unable to recover it. 00:29:09.808 [2024-07-25 15:25:01.949051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.808 [2024-07-25 15:25:01.949061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.808 qpair failed and we were unable to recover it. 
00:29:09.808 [2024-07-25 15:25:01.949609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.808 [2024-07-25 15:25:01.949639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.808 qpair failed and we were unable to recover it. 00:29:09.808 [2024-07-25 15:25:01.950123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.808 [2024-07-25 15:25:01.950133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.808 qpair failed and we were unable to recover it. 00:29:09.808 [2024-07-25 15:25:01.950671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.808 [2024-07-25 15:25:01.950700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.808 qpair failed and we were unable to recover it. 00:29:09.808 [2024-07-25 15:25:01.951165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.808 [2024-07-25 15:25:01.951175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.808 qpair failed and we were unable to recover it. 00:29:09.808 [2024-07-25 15:25:01.951715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.808 [2024-07-25 15:25:01.951744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.808 qpair failed and we were unable to recover it. 
00:29:09.808 [2024-07-25 15:25:01.952404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.808 [2024-07-25 15:25:01.952433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.808 qpair failed and we were unable to recover it. 00:29:09.808 [2024-07-25 15:25:01.952846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.808 [2024-07-25 15:25:01.952856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.808 qpair failed and we were unable to recover it. 00:29:09.808 [2024-07-25 15:25:01.953404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.808 [2024-07-25 15:25:01.953433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.808 qpair failed and we were unable to recover it. 00:29:09.808 [2024-07-25 15:25:01.953892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.808 [2024-07-25 15:25:01.953902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.808 qpair failed and we were unable to recover it. 00:29:09.808 [2024-07-25 15:25:01.954380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.808 [2024-07-25 15:25:01.954388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.808 qpair failed and we were unable to recover it. 
00:29:09.808 [2024-07-25 15:25:01.954845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.808 [2024-07-25 15:25:01.954853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.808 qpair failed and we were unable to recover it. 00:29:09.808 [2024-07-25 15:25:01.955308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.808 [2024-07-25 15:25:01.955316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.808 qpair failed and we were unable to recover it. 00:29:09.808 [2024-07-25 15:25:01.955765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.808 [2024-07-25 15:25:01.955773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.808 qpair failed and we were unable to recover it. 00:29:09.808 [2024-07-25 15:25:01.956244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.808 [2024-07-25 15:25:01.956252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.808 qpair failed and we were unable to recover it. 00:29:09.808 [2024-07-25 15:25:01.956703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.808 [2024-07-25 15:25:01.956712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.808 qpair failed and we were unable to recover it. 
00:29:09.808 [2024-07-25 15:25:01.957161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.808 [2024-07-25 15:25:01.957170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.808 qpair failed and we were unable to recover it. 00:29:09.808 [2024-07-25 15:25:01.957611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.808 [2024-07-25 15:25:01.957619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.808 qpair failed and we were unable to recover it. 00:29:09.808 [2024-07-25 15:25:01.958090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.808 [2024-07-25 15:25:01.958099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.808 qpair failed and we were unable to recover it. 00:29:09.808 [2024-07-25 15:25:01.958571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.808 [2024-07-25 15:25:01.958579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.808 qpair failed and we were unable to recover it. 00:29:09.808 [2024-07-25 15:25:01.959040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.808 [2024-07-25 15:25:01.959048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.808 qpair failed and we were unable to recover it. 
00:29:09.808 [2024-07-25 15:25:01.959600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.808 [2024-07-25 15:25:01.959629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.808 qpair failed and we were unable to recover it. 00:29:09.808 [2024-07-25 15:25:01.960142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.808 [2024-07-25 15:25:01.960152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.808 qpair failed and we were unable to recover it. 00:29:09.808 [2024-07-25 15:25:01.960683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.808 [2024-07-25 15:25:01.960711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.808 qpair failed and we were unable to recover it. 00:29:09.808 [2024-07-25 15:25:01.961177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.808 [2024-07-25 15:25:01.961187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.808 qpair failed and we were unable to recover it. 00:29:09.808 [2024-07-25 15:25:01.961721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.808 [2024-07-25 15:25:01.961750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.808 qpair failed and we were unable to recover it. 
00:29:09.808 [2024-07-25 15:25:01.962225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.808 [2024-07-25 15:25:01.962244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.808 qpair failed and we were unable to recover it. 00:29:09.808 [2024-07-25 15:25:01.962547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.808 [2024-07-25 15:25:01.962556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.808 qpair failed and we were unable to recover it. 00:29:09.808 [2024-07-25 15:25:01.963006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.808 [2024-07-25 15:25:01.963014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.808 qpair failed and we were unable to recover it. 00:29:09.808 [2024-07-25 15:25:01.963460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.808 [2024-07-25 15:25:01.963469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.808 qpair failed and we were unable to recover it. 00:29:09.808 [2024-07-25 15:25:01.963971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.809 [2024-07-25 15:25:01.963979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.809 qpair failed and we were unable to recover it. 
00:29:09.809 [2024-07-25 15:25:01.964518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.809 [2024-07-25 15:25:01.964546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.809 qpair failed and we were unable to recover it. 00:29:09.809 [2024-07-25 15:25:01.965004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.809 [2024-07-25 15:25:01.965013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.809 qpair failed and we were unable to recover it. 00:29:09.809 [2024-07-25 15:25:01.965557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.809 [2024-07-25 15:25:01.965586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.809 qpair failed and we were unable to recover it. 00:29:09.809 [2024-07-25 15:25:01.966059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.809 [2024-07-25 15:25:01.966069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.809 qpair failed and we were unable to recover it. 00:29:09.809 [2024-07-25 15:25:01.966619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.809 [2024-07-25 15:25:01.966649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.809 qpair failed and we were unable to recover it. 
00:29:09.809 [2024-07-25 15:25:01.967106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.809 [2024-07-25 15:25:01.967116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.809 qpair failed and we were unable to recover it. 00:29:09.809 [2024-07-25 15:25:01.967432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.809 [2024-07-25 15:25:01.967464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.809 qpair failed and we were unable to recover it. 00:29:09.809 [2024-07-25 15:25:01.967908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.809 [2024-07-25 15:25:01.967918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.809 qpair failed and we were unable to recover it. 00:29:09.809 [2024-07-25 15:25:01.968466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.809 [2024-07-25 15:25:01.968495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.809 qpair failed and we were unable to recover it. 00:29:09.809 [2024-07-25 15:25:01.968954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.809 [2024-07-25 15:25:01.968963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.809 qpair failed and we were unable to recover it. 
00:29:09.809 [2024-07-25 15:25:01.969145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.809 [2024-07-25 15:25:01.969157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.809 qpair failed and we were unable to recover it. 00:29:09.809 [2024-07-25 15:25:01.969622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.809 [2024-07-25 15:25:01.969631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.809 qpair failed and we were unable to recover it. 00:29:09.809 [2024-07-25 15:25:01.970156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.809 [2024-07-25 15:25:01.970165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.809 qpair failed and we were unable to recover it. 00:29:09.809 [2024-07-25 15:25:01.970375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.809 [2024-07-25 15:25:01.970385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.809 qpair failed and we were unable to recover it. 00:29:09.809 [2024-07-25 15:25:01.970838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.809 [2024-07-25 15:25:01.970846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.809 qpair failed and we were unable to recover it. 
00:29:09.809 [2024-07-25 15:25:01.971065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.809 [2024-07-25 15:25:01.971075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.809 qpair failed and we were unable to recover it. 00:29:09.809 [2024-07-25 15:25:01.971518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.809 [2024-07-25 15:25:01.971526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.809 qpair failed and we were unable to recover it. 00:29:09.809 [2024-07-25 15:25:01.971993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.809 [2024-07-25 15:25:01.972001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.809 qpair failed and we were unable to recover it. 00:29:09.809 [2024-07-25 15:25:01.972530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.809 [2024-07-25 15:25:01.972558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.809 qpair failed and we were unable to recover it. 00:29:09.809 [2024-07-25 15:25:01.973031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.809 [2024-07-25 15:25:01.973040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.809 qpair failed and we were unable to recover it. 
00:29:09.809 [2024-07-25 15:25:01.973595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.809 [2024-07-25 15:25:01.973625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.809 qpair failed and we were unable to recover it. 00:29:09.809 [2024-07-25 15:25:01.974082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.809 [2024-07-25 15:25:01.974091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:09.809 qpair failed and we were unable to recover it. 00:29:10.080 [2024-07-25 15:25:01.974463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.080 [2024-07-25 15:25:01.974474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.080 qpair failed and we were unable to recover it. 00:29:10.080 [2024-07-25 15:25:01.974948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.080 [2024-07-25 15:25:01.974957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.080 qpair failed and we were unable to recover it. 00:29:10.080 [2024-07-25 15:25:01.975505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.080 [2024-07-25 15:25:01.975534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.080 qpair failed and we were unable to recover it. 
00:29:10.080 [2024-07-25 15:25:01.975998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.080 [2024-07-25 15:25:01.976008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.080 qpair failed and we were unable to recover it. 00:29:10.080 [2024-07-25 15:25:01.976580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.080 [2024-07-25 15:25:01.976610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.080 qpair failed and we were unable to recover it. 00:29:10.080 [2024-07-25 15:25:01.977087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.080 [2024-07-25 15:25:01.977096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.080 qpair failed and we were unable to recover it. 00:29:10.080 [2024-07-25 15:25:01.977549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.080 [2024-07-25 15:25:01.977558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.080 qpair failed and we were unable to recover it. 00:29:10.080 [2024-07-25 15:25:01.978044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.080 [2024-07-25 15:25:01.978052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.080 qpair failed and we were unable to recover it. 
00:29:10.080 [2024-07-25 15:25:01.978606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.080 [2024-07-25 15:25:01.978635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.080 qpair failed and we were unable to recover it. 00:29:10.080 [2024-07-25 15:25:01.979112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.080 [2024-07-25 15:25:01.979123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.080 qpair failed and we were unable to recover it. 00:29:10.080 [2024-07-25 15:25:01.979667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.080 [2024-07-25 15:25:01.979697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.080 qpair failed and we were unable to recover it. 00:29:10.080 [2024-07-25 15:25:01.980159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.080 [2024-07-25 15:25:01.980170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.080 qpair failed and we were unable to recover it. 00:29:10.080 [2024-07-25 15:25:01.980783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.080 [2024-07-25 15:25:01.980813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.080 qpair failed and we were unable to recover it. 
00:29:10.080 [2024-07-25 15:25:01.981381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.080 [2024-07-25 15:25:01.981411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.080 qpair failed and we were unable to recover it.
00:29:10.080 [2024-07-25 15:25:01.981861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.080 [2024-07-25 15:25:01.981870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.080 qpair failed and we were unable to recover it.
00:29:10.080 [2024-07-25 15:25:01.982493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.080 [2024-07-25 15:25:01.982522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.080 qpair failed and we were unable to recover it.
00:29:10.080 [2024-07-25 15:25:01.982862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.080 [2024-07-25 15:25:01.982872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.080 qpair failed and we were unable to recover it.
00:29:10.080 [2024-07-25 15:25:01.983352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.080 [2024-07-25 15:25:01.983366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.080 qpair failed and we were unable to recover it.
00:29:10.080 [2024-07-25 15:25:01.983825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.080 [2024-07-25 15:25:01.983833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.080 qpair failed and we were unable to recover it.
00:29:10.080 [2024-07-25 15:25:01.984287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.080 [2024-07-25 15:25:01.984296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.080 qpair failed and we were unable to recover it.
00:29:10.080 [2024-07-25 15:25:01.984751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.080 [2024-07-25 15:25:01.984759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.080 qpair failed and we were unable to recover it.
00:29:10.080 [2024-07-25 15:25:01.985234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.080 [2024-07-25 15:25:01.985243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.080 qpair failed and we were unable to recover it.
00:29:10.080 [2024-07-25 15:25:01.985677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.080 [2024-07-25 15:25:01.985685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.080 qpair failed and we were unable to recover it.
00:29:10.080 [2024-07-25 15:25:01.986071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.080 [2024-07-25 15:25:01.986079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.080 qpair failed and we were unable to recover it.
00:29:10.080 [2024-07-25 15:25:01.986523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.080 [2024-07-25 15:25:01.986535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.080 qpair failed and we were unable to recover it.
00:29:10.080 [2024-07-25 15:25:01.987007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.080 [2024-07-25 15:25:01.987015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.080 qpair failed and we were unable to recover it.
00:29:10.080 [2024-07-25 15:25:01.987555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.080 [2024-07-25 15:25:01.987584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.080 qpair failed and we were unable to recover it.
00:29:10.080 [2024-07-25 15:25:01.988044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.080 [2024-07-25 15:25:01.988054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.080 qpair failed and we were unable to recover it.
00:29:10.080 [2024-07-25 15:25:01.988603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.081 [2024-07-25 15:25:01.988632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.081 qpair failed and we were unable to recover it.
00:29:10.081 [2024-07-25 15:25:01.989105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.081 [2024-07-25 15:25:01.989114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.081 qpair failed and we were unable to recover it.
00:29:10.081 [2024-07-25 15:25:01.989665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.081 [2024-07-25 15:25:01.989694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.081 qpair failed and we were unable to recover it.
00:29:10.081 [2024-07-25 15:25:01.990206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.081 [2024-07-25 15:25:01.990216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.081 qpair failed and we were unable to recover it.
00:29:10.081 [2024-07-25 15:25:01.990671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.081 [2024-07-25 15:25:01.990681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.081 qpair failed and we were unable to recover it.
00:29:10.081 [2024-07-25 15:25:01.991155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.081 [2024-07-25 15:25:01.991164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.081 qpair failed and we were unable to recover it.
00:29:10.081 [2024-07-25 15:25:01.991739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.081 [2024-07-25 15:25:01.991769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.081 qpair failed and we were unable to recover it.
00:29:10.081 [2024-07-25 15:25:01.992216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.081 [2024-07-25 15:25:01.992228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.081 qpair failed and we were unable to recover it.
00:29:10.081 [2024-07-25 15:25:01.992682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.081 [2024-07-25 15:25:01.992690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.081 qpair failed and we were unable to recover it.
00:29:10.081 [2024-07-25 15:25:01.993170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.081 [2024-07-25 15:25:01.993178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.081 qpair failed and we were unable to recover it.
00:29:10.081 [2024-07-25 15:25:01.993634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.081 [2024-07-25 15:25:01.993643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.081 qpair failed and we were unable to recover it.
00:29:10.081 [2024-07-25 15:25:01.994092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.081 [2024-07-25 15:25:01.994100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.081 qpair failed and we were unable to recover it.
00:29:10.081 [2024-07-25 15:25:01.994650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.081 [2024-07-25 15:25:01.994678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.081 qpair failed and we were unable to recover it.
00:29:10.081 [2024-07-25 15:25:01.995153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.081 [2024-07-25 15:25:01.995163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.081 qpair failed and we were unable to recover it.
00:29:10.081 [2024-07-25 15:25:01.995622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.081 [2024-07-25 15:25:01.995631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.081 qpair failed and we were unable to recover it.
00:29:10.081 [2024-07-25 15:25:01.995979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.081 [2024-07-25 15:25:01.995987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.081 qpair failed and we were unable to recover it.
00:29:10.081 [2024-07-25 15:25:01.996544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.081 [2024-07-25 15:25:01.996573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.081 qpair failed and we were unable to recover it.
00:29:10.081 [2024-07-25 15:25:01.997049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.081 [2024-07-25 15:25:01.997060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.081 qpair failed and we were unable to recover it.
00:29:10.081 [2024-07-25 15:25:01.997607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.081 [2024-07-25 15:25:01.997636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.081 qpair failed and we were unable to recover it.
00:29:10.081 [2024-07-25 15:25:01.998111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.081 [2024-07-25 15:25:01.998121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.081 qpair failed and we were unable to recover it.
00:29:10.081 [2024-07-25 15:25:01.998573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.081 [2024-07-25 15:25:01.998602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.081 qpair failed and we were unable to recover it.
00:29:10.081 [2024-07-25 15:25:01.999075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.081 [2024-07-25 15:25:01.999084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.081 qpair failed and we were unable to recover it.
00:29:10.081 [2024-07-25 15:25:01.999536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.081 [2024-07-25 15:25:01.999545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.081 qpair failed and we were unable to recover it.
00:29:10.081 [2024-07-25 15:25:02.000003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.081 [2024-07-25 15:25:02.000012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.081 qpair failed and we were unable to recover it.
00:29:10.081 [2024-07-25 15:25:02.000550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.081 [2024-07-25 15:25:02.000579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.081 qpair failed and we were unable to recover it.
00:29:10.081 [2024-07-25 15:25:02.001054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.081 [2024-07-25 15:25:02.001064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.081 qpair failed and we were unable to recover it.
00:29:10.081 [2024-07-25 15:25:02.001618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.081 [2024-07-25 15:25:02.001647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.081 qpair failed and we were unable to recover it.
00:29:10.081 [2024-07-25 15:25:02.002159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.081 [2024-07-25 15:25:02.002169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.081 qpair failed and we were unable to recover it.
00:29:10.081 [2024-07-25 15:25:02.002712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.081 [2024-07-25 15:25:02.002740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.081 qpair failed and we were unable to recover it.
00:29:10.081 [2024-07-25 15:25:02.003174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.081 [2024-07-25 15:25:02.003185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.081 qpair failed and we were unable to recover it.
00:29:10.081 [2024-07-25 15:25:02.003725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.081 [2024-07-25 15:25:02.003754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.081 qpair failed and we were unable to recover it.
00:29:10.081 [2024-07-25 15:25:02.004362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.081 [2024-07-25 15:25:02.004390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.081 qpair failed and we were unable to recover it.
00:29:10.081 [2024-07-25 15:25:02.004853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.081 [2024-07-25 15:25:02.004863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.081 qpair failed and we were unable to recover it.
00:29:10.081 [2024-07-25 15:25:02.005432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.081 [2024-07-25 15:25:02.005462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.081 qpair failed and we were unable to recover it.
00:29:10.081 [2024-07-25 15:25:02.005923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.081 [2024-07-25 15:25:02.005933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.081 qpair failed and we were unable to recover it.
00:29:10.081 [2024-07-25 15:25:02.006476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.082 [2024-07-25 15:25:02.006505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.082 qpair failed and we were unable to recover it.
00:29:10.082 [2024-07-25 15:25:02.006955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.082 [2024-07-25 15:25:02.006969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.082 qpair failed and we were unable to recover it.
00:29:10.082 [2024-07-25 15:25:02.007489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.082 [2024-07-25 15:25:02.007519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.082 qpair failed and we were unable to recover it.
00:29:10.082 [2024-07-25 15:25:02.007971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.082 [2024-07-25 15:25:02.007981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.082 qpair failed and we were unable to recover it.
00:29:10.082 [2024-07-25 15:25:02.008446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.082 [2024-07-25 15:25:02.008475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.082 qpair failed and we were unable to recover it.
00:29:10.082 [2024-07-25 15:25:02.008931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.082 [2024-07-25 15:25:02.008941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.082 qpair failed and we were unable to recover it.
00:29:10.082 [2024-07-25 15:25:02.009522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.082 [2024-07-25 15:25:02.009552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.082 qpair failed and we were unable to recover it.
00:29:10.082 [2024-07-25 15:25:02.010009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.082 [2024-07-25 15:25:02.010019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.082 qpair failed and we were unable to recover it.
00:29:10.082 [2024-07-25 15:25:02.010565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.082 [2024-07-25 15:25:02.010594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.082 qpair failed and we were unable to recover it.
00:29:10.082 [2024-07-25 15:25:02.011050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.082 [2024-07-25 15:25:02.011060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.082 qpair failed and we were unable to recover it.
00:29:10.082 [2024-07-25 15:25:02.011613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.082 [2024-07-25 15:25:02.011643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.082 qpair failed and we were unable to recover it.
00:29:10.082 [2024-07-25 15:25:02.012105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.082 [2024-07-25 15:25:02.012115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.082 qpair failed and we were unable to recover it.
00:29:10.082 [2024-07-25 15:25:02.012661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.082 [2024-07-25 15:25:02.012691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.082 qpair failed and we were unable to recover it.
00:29:10.082 [2024-07-25 15:25:02.013148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.082 [2024-07-25 15:25:02.013158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.082 qpair failed and we were unable to recover it.
00:29:10.082 [2024-07-25 15:25:02.013634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.082 [2024-07-25 15:25:02.013643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.082 qpair failed and we were unable to recover it.
00:29:10.082 [2024-07-25 15:25:02.014095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.082 [2024-07-25 15:25:02.014103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.082 qpair failed and we were unable to recover it.
00:29:10.082 [2024-07-25 15:25:02.014652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.082 [2024-07-25 15:25:02.014681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.082 qpair failed and we were unable to recover it.
00:29:10.082 [2024-07-25 15:25:02.015040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.082 [2024-07-25 15:25:02.015050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.082 qpair failed and we were unable to recover it.
00:29:10.082 [2024-07-25 15:25:02.015614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.082 [2024-07-25 15:25:02.015642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.082 qpair failed and we were unable to recover it.
00:29:10.082 [2024-07-25 15:25:02.015988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.082 [2024-07-25 15:25:02.015998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.082 qpair failed and we were unable to recover it.
00:29:10.082 [2024-07-25 15:25:02.016542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.082 [2024-07-25 15:25:02.016571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.082 qpair failed and we were unable to recover it.
00:29:10.082 [2024-07-25 15:25:02.017035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.082 [2024-07-25 15:25:02.017044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.082 qpair failed and we were unable to recover it.
00:29:10.082 [2024-07-25 15:25:02.017608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.082 [2024-07-25 15:25:02.017637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.082 qpair failed and we were unable to recover it.
00:29:10.082 [2024-07-25 15:25:02.018087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.082 [2024-07-25 15:25:02.018097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.082 qpair failed and we were unable to recover it.
00:29:10.082 [2024-07-25 15:25:02.018661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.082 [2024-07-25 15:25:02.018689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.082 qpair failed and we were unable to recover it.
00:29:10.082 [2024-07-25 15:25:02.019147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.082 [2024-07-25 15:25:02.019157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.082 qpair failed and we were unable to recover it.
00:29:10.082 [2024-07-25 15:25:02.019629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.082 [2024-07-25 15:25:02.019638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.082 qpair failed and we were unable to recover it.
00:29:10.082 [2024-07-25 15:25:02.020143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.082 [2024-07-25 15:25:02.020151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.082 qpair failed and we were unable to recover it.
00:29:10.082 [2024-07-25 15:25:02.020682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.082 [2024-07-25 15:25:02.020711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.082 qpair failed and we were unable to recover it.
00:29:10.082 [2024-07-25 15:25:02.021224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.082 [2024-07-25 15:25:02.021243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.082 qpair failed and we were unable to recover it.
00:29:10.082 [2024-07-25 15:25:02.021681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.082 [2024-07-25 15:25:02.021690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.082 qpair failed and we were unable to recover it.
00:29:10.082 [2024-07-25 15:25:02.022138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.082 [2024-07-25 15:25:02.022146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.082 qpair failed and we were unable to recover it.
00:29:10.082 [2024-07-25 15:25:02.022595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.082 [2024-07-25 15:25:02.022603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.082 qpair failed and we were unable to recover it.
00:29:10.082 [2024-07-25 15:25:02.022801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.082 [2024-07-25 15:25:02.022813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.082 qpair failed and we were unable to recover it.
00:29:10.082 [2024-07-25 15:25:02.023259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.082 [2024-07-25 15:25:02.023268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.082 qpair failed and we were unable to recover it.
00:29:10.082 [2024-07-25 15:25:02.023717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.083 [2024-07-25 15:25:02.023725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.083 qpair failed and we were unable to recover it.
00:29:10.083 [2024-07-25 15:25:02.024176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.083 [2024-07-25 15:25:02.024184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.083 qpair failed and we were unable to recover it.
00:29:10.083 [2024-07-25 15:25:02.024623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.083 [2024-07-25 15:25:02.024631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.083 qpair failed and we were unable to recover it.
00:29:10.083 [2024-07-25 15:25:02.025101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.083 [2024-07-25 15:25:02.025110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.083 qpair failed and we were unable to recover it.
00:29:10.083 [2024-07-25 15:25:02.025467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.083 [2024-07-25 15:25:02.025475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.083 qpair failed and we were unable to recover it.
00:29:10.083 [2024-07-25 15:25:02.025924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.083 [2024-07-25 15:25:02.025934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.083 qpair failed and we were unable to recover it.
00:29:10.083 [2024-07-25 15:25:02.026384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.083 [2024-07-25 15:25:02.026397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.083 qpair failed and we were unable to recover it.
00:29:10.083 [2024-07-25 15:25:02.026904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.083 [2024-07-25 15:25:02.026913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.083 qpair failed and we were unable to recover it.
00:29:10.083 [2024-07-25 15:25:02.027027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.083 [2024-07-25 15:25:02.027038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.083 qpair failed and we were unable to recover it.
00:29:10.083 [2024-07-25 15:25:02.027446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.083 [2024-07-25 15:25:02.027455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.083 qpair failed and we were unable to recover it.
00:29:10.083 [2024-07-25 15:25:02.027904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.083 [2024-07-25 15:25:02.027911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.083 qpair failed and we were unable to recover it. 00:29:10.083 [2024-07-25 15:25:02.028381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.083 [2024-07-25 15:25:02.028390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.083 qpair failed and we were unable to recover it. 00:29:10.083 [2024-07-25 15:25:02.028609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.083 [2024-07-25 15:25:02.028619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.083 qpair failed and we were unable to recover it. 00:29:10.083 [2024-07-25 15:25:02.029109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.083 [2024-07-25 15:25:02.029118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.083 qpair failed and we were unable to recover it. 00:29:10.083 [2024-07-25 15:25:02.029568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.083 [2024-07-25 15:25:02.029576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.083 qpair failed and we were unable to recover it. 
00:29:10.083 [2024-07-25 15:25:02.030049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.083 [2024-07-25 15:25:02.030058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.083 qpair failed and we were unable to recover it. 00:29:10.083 [2024-07-25 15:25:02.030509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.083 [2024-07-25 15:25:02.030518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.083 qpair failed and we were unable to recover it. 00:29:10.083 [2024-07-25 15:25:02.030627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.083 [2024-07-25 15:25:02.030636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.083 qpair failed and we were unable to recover it. 00:29:10.083 [2024-07-25 15:25:02.031087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.083 [2024-07-25 15:25:02.031095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.083 qpair failed and we were unable to recover it. 00:29:10.083 [2024-07-25 15:25:02.031570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.083 [2024-07-25 15:25:02.031579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.083 qpair failed and we were unable to recover it. 
00:29:10.083 [2024-07-25 15:25:02.032050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.083 [2024-07-25 15:25:02.032059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.083 qpair failed and we were unable to recover it. 00:29:10.083 [2024-07-25 15:25:02.032508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.083 [2024-07-25 15:25:02.032516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.083 qpair failed and we were unable to recover it. 00:29:10.083 [2024-07-25 15:25:02.032967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.083 [2024-07-25 15:25:02.032975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.083 qpair failed and we were unable to recover it. 00:29:10.083 [2024-07-25 15:25:02.033539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.083 [2024-07-25 15:25:02.033568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.083 qpair failed and we were unable to recover it. 00:29:10.083 [2024-07-25 15:25:02.034037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.083 [2024-07-25 15:25:02.034047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.083 qpair failed and we were unable to recover it. 
00:29:10.083 [2024-07-25 15:25:02.034599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.083 [2024-07-25 15:25:02.034628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.083 qpair failed and we were unable to recover it. 00:29:10.083 [2024-07-25 15:25:02.035088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.083 [2024-07-25 15:25:02.035098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.083 qpair failed and we were unable to recover it. 00:29:10.083 [2024-07-25 15:25:02.035572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.083 [2024-07-25 15:25:02.035581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.083 qpair failed and we were unable to recover it. 00:29:10.083 [2024-07-25 15:25:02.036031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.083 [2024-07-25 15:25:02.036040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.083 qpair failed and we were unable to recover it. 00:29:10.083 [2024-07-25 15:25:02.036580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.083 [2024-07-25 15:25:02.036609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.083 qpair failed and we were unable to recover it. 
00:29:10.083 [2024-07-25 15:25:02.037066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.083 [2024-07-25 15:25:02.037076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.083 qpair failed and we were unable to recover it. 00:29:10.083 [2024-07-25 15:25:02.037635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.083 [2024-07-25 15:25:02.037664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.083 qpair failed and we were unable to recover it. 00:29:10.083 [2024-07-25 15:25:02.038124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.083 [2024-07-25 15:25:02.038134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.083 qpair failed and we were unable to recover it. 00:29:10.083 [2024-07-25 15:25:02.038753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.083 [2024-07-25 15:25:02.038782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.083 qpair failed and we were unable to recover it. 00:29:10.083 [2024-07-25 15:25:02.039361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.083 [2024-07-25 15:25:02.039390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.083 qpair failed and we were unable to recover it. 
00:29:10.083 [2024-07-25 15:25:02.039863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.083 [2024-07-25 15:25:02.039873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.083 qpair failed and we were unable to recover it. 00:29:10.083 [2024-07-25 15:25:02.040442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.084 [2024-07-25 15:25:02.040472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.084 qpair failed and we were unable to recover it. 00:29:10.084 [2024-07-25 15:25:02.040806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.084 [2024-07-25 15:25:02.040815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.084 qpair failed and we were unable to recover it. 00:29:10.084 [2024-07-25 15:25:02.041260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.084 [2024-07-25 15:25:02.041269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.084 qpair failed and we were unable to recover it. 00:29:10.084 [2024-07-25 15:25:02.041748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.084 [2024-07-25 15:25:02.041756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.084 qpair failed and we were unable to recover it. 
00:29:10.084 [2024-07-25 15:25:02.042209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.084 [2024-07-25 15:25:02.042217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.084 qpair failed and we were unable to recover it. 00:29:10.084 [2024-07-25 15:25:02.042671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.084 [2024-07-25 15:25:02.042679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.084 qpair failed and we were unable to recover it. 00:29:10.084 [2024-07-25 15:25:02.043111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.084 [2024-07-25 15:25:02.043119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.084 qpair failed and we were unable to recover it. 00:29:10.084 [2024-07-25 15:25:02.043564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.084 [2024-07-25 15:25:02.043574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.084 qpair failed and we were unable to recover it. 00:29:10.084 [2024-07-25 15:25:02.044017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.084 [2024-07-25 15:25:02.044025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.084 qpair failed and we were unable to recover it. 
00:29:10.084 [2024-07-25 15:25:02.044371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.084 [2024-07-25 15:25:02.044380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.084 qpair failed and we were unable to recover it. 00:29:10.084 [2024-07-25 15:25:02.044830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.084 [2024-07-25 15:25:02.044841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.084 qpair failed and we were unable to recover it. 00:29:10.084 [2024-07-25 15:25:02.045313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.084 [2024-07-25 15:25:02.045322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.084 qpair failed and we were unable to recover it. 00:29:10.084 [2024-07-25 15:25:02.045778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.084 [2024-07-25 15:25:02.045786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.084 qpair failed and we were unable to recover it. 00:29:10.084 [2024-07-25 15:25:02.046236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.084 [2024-07-25 15:25:02.046244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.084 qpair failed and we were unable to recover it. 
00:29:10.084 [2024-07-25 15:25:02.046700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.084 [2024-07-25 15:25:02.046707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.084 qpair failed and we were unable to recover it. 00:29:10.084 [2024-07-25 15:25:02.047189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.084 [2024-07-25 15:25:02.047197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.084 qpair failed and we were unable to recover it. 00:29:10.084 [2024-07-25 15:25:02.047670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.084 [2024-07-25 15:25:02.047678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.084 qpair failed and we were unable to recover it. 00:29:10.084 [2024-07-25 15:25:02.048133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.084 [2024-07-25 15:25:02.048141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.084 qpair failed and we were unable to recover it. 00:29:10.084 [2024-07-25 15:25:02.048596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.084 [2024-07-25 15:25:02.048605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.084 qpair failed and we were unable to recover it. 
00:29:10.084 [2024-07-25 15:25:02.048954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.084 [2024-07-25 15:25:02.048962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.084 qpair failed and we were unable to recover it. 00:29:10.084 [2024-07-25 15:25:02.049484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.084 [2024-07-25 15:25:02.049514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.084 qpair failed and we were unable to recover it. 00:29:10.084 [2024-07-25 15:25:02.049976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.084 [2024-07-25 15:25:02.049986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.084 qpair failed and we were unable to recover it. 00:29:10.084 [2024-07-25 15:25:02.050434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.084 [2024-07-25 15:25:02.050463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.084 qpair failed and we were unable to recover it. 00:29:10.084 [2024-07-25 15:25:02.050939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.084 [2024-07-25 15:25:02.050948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.084 qpair failed and we were unable to recover it. 
00:29:10.084 [2024-07-25 15:25:02.051512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.084 [2024-07-25 15:25:02.051541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.084 qpair failed and we were unable to recover it. 00:29:10.084 [2024-07-25 15:25:02.052001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.084 [2024-07-25 15:25:02.052011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.084 qpair failed and we were unable to recover it. 00:29:10.084 [2024-07-25 15:25:02.052549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.084 [2024-07-25 15:25:02.052578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.084 qpair failed and we were unable to recover it. 00:29:10.084 [2024-07-25 15:25:02.053019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.084 [2024-07-25 15:25:02.053028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.084 qpair failed and we were unable to recover it. 00:29:10.084 [2024-07-25 15:25:02.053455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.084 [2024-07-25 15:25:02.053485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.084 qpair failed and we were unable to recover it. 
00:29:10.084 [2024-07-25 15:25:02.053943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.084 [2024-07-25 15:25:02.053953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.084 qpair failed and we were unable to recover it. 00:29:10.084 [2024-07-25 15:25:02.054495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.084 [2024-07-25 15:25:02.054524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.084 qpair failed and we were unable to recover it. 00:29:10.084 [2024-07-25 15:25:02.055000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.085 [2024-07-25 15:25:02.055010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.085 qpair failed and we were unable to recover it. 00:29:10.085 [2024-07-25 15:25:02.055558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.085 [2024-07-25 15:25:02.055587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.085 qpair failed and we were unable to recover it. 00:29:10.085 [2024-07-25 15:25:02.056071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.085 [2024-07-25 15:25:02.056081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.085 qpair failed and we were unable to recover it. 
00:29:10.085 [2024-07-25 15:25:02.056673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.085 [2024-07-25 15:25:02.056701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.085 qpair failed and we were unable to recover it. 00:29:10.085 [2024-07-25 15:25:02.056933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.085 [2024-07-25 15:25:02.056942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.085 qpair failed and we were unable to recover it. 00:29:10.085 [2024-07-25 15:25:02.057497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.085 [2024-07-25 15:25:02.057527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.085 qpair failed and we were unable to recover it. 00:29:10.085 [2024-07-25 15:25:02.057994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.085 [2024-07-25 15:25:02.058004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.085 qpair failed and we were unable to recover it. 00:29:10.085 [2024-07-25 15:25:02.058557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.085 [2024-07-25 15:25:02.058586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.085 qpair failed and we were unable to recover it. 
00:29:10.085 [2024-07-25 15:25:02.059066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.085 [2024-07-25 15:25:02.059077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.085 qpair failed and we were unable to recover it. 00:29:10.085 [2024-07-25 15:25:02.059542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.085 [2024-07-25 15:25:02.059572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.085 qpair failed and we were unable to recover it. 00:29:10.085 [2024-07-25 15:25:02.060033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.085 [2024-07-25 15:25:02.060044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.085 qpair failed and we were unable to recover it. 00:29:10.085 [2024-07-25 15:25:02.060588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.085 [2024-07-25 15:25:02.060617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.085 qpair failed and we were unable to recover it. 00:29:10.085 [2024-07-25 15:25:02.061096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.085 [2024-07-25 15:25:02.061105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.085 qpair failed and we were unable to recover it. 
00:29:10.085 [2024-07-25 15:25:02.061732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.085 [2024-07-25 15:25:02.061760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.085 qpair failed and we were unable to recover it. 00:29:10.085 [2024-07-25 15:25:02.062198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.085 [2024-07-25 15:25:02.062214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.085 qpair failed and we were unable to recover it. 00:29:10.085 [2024-07-25 15:25:02.062734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.085 [2024-07-25 15:25:02.062763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.085 qpair failed and we were unable to recover it. 00:29:10.085 [2024-07-25 15:25:02.063415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.085 [2024-07-25 15:25:02.063444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.085 qpair failed and we were unable to recover it. 00:29:10.085 [2024-07-25 15:25:02.063840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.085 [2024-07-25 15:25:02.063849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.085 qpair failed and we were unable to recover it. 
00:29:10.085 [2024-07-25 15:25:02.064402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.085 [2024-07-25 15:25:02.064432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.085 qpair failed and we were unable to recover it. 00:29:10.085 [2024-07-25 15:25:02.064904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.085 [2024-07-25 15:25:02.064916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.085 qpair failed and we were unable to recover it. 00:29:10.085 [2024-07-25 15:25:02.065513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.085 [2024-07-25 15:25:02.065542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.085 qpair failed and we were unable to recover it. 00:29:10.085 [2024-07-25 15:25:02.066003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.085 [2024-07-25 15:25:02.066012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.085 qpair failed and we were unable to recover it. 00:29:10.085 [2024-07-25 15:25:02.066468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.085 [2024-07-25 15:25:02.066497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.085 qpair failed and we were unable to recover it. 
00:29:10.085 [2024-07-25 15:25:02.067009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.085 [2024-07-25 15:25:02.067018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.086 qpair failed and we were unable to recover it. 00:29:10.086 [2024-07-25 15:25:02.067605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.086 [2024-07-25 15:25:02.067634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.086 qpair failed and we were unable to recover it. 00:29:10.086 [2024-07-25 15:25:02.068100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.086 [2024-07-25 15:25:02.068110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.086 qpair failed and we were unable to recover it. 00:29:10.086 [2024-07-25 15:25:02.068455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.086 [2024-07-25 15:25:02.068464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.086 qpair failed and we were unable to recover it. 00:29:10.086 [2024-07-25 15:25:02.068799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.086 [2024-07-25 15:25:02.068807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.086 qpair failed and we were unable to recover it. 
00:29:10.086 [2024-07-25 15:25:02.069174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.086 [2024-07-25 15:25:02.069183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.086 qpair failed and we were unable to recover it.
00:29:10.086 [2024-07-25 15:25:02.069522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.086 [2024-07-25 15:25:02.069531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.086 qpair failed and we were unable to recover it.
00:29:10.086 [2024-07-25 15:25:02.069875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.086 [2024-07-25 15:25:02.069884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.086 qpair failed and we were unable to recover it.
00:29:10.086 [2024-07-25 15:25:02.070349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.086 [2024-07-25 15:25:02.070356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.086 qpair failed and we were unable to recover it.
00:29:10.086 [2024-07-25 15:25:02.070841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.086 [2024-07-25 15:25:02.070849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.086 qpair failed and we were unable to recover it.
00:29:10.086 [2024-07-25 15:25:02.071289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.086 [2024-07-25 15:25:02.071298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.086 qpair failed and we were unable to recover it.
00:29:10.086 [2024-07-25 15:25:02.071630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.086 [2024-07-25 15:25:02.071639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.086 qpair failed and we were unable to recover it.
00:29:10.086 [2024-07-25 15:25:02.071957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.086 [2024-07-25 15:25:02.071966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.086 qpair failed and we were unable to recover it.
00:29:10.086 [2024-07-25 15:25:02.072420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.086 [2024-07-25 15:25:02.072429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.086 qpair failed and we were unable to recover it.
00:29:10.086 [2024-07-25 15:25:02.072882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.086 [2024-07-25 15:25:02.072889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.086 qpair failed and we were unable to recover it.
00:29:10.086 [2024-07-25 15:25:02.073425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.086 [2024-07-25 15:25:02.073433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.086 qpair failed and we were unable to recover it.
00:29:10.086 [2024-07-25 15:25:02.073811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.086 [2024-07-25 15:25:02.073819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.086 qpair failed and we were unable to recover it.
00:29:10.086 [2024-07-25 15:25:02.074300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.086 [2024-07-25 15:25:02.074308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.086 qpair failed and we were unable to recover it.
00:29:10.086 [2024-07-25 15:25:02.074778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.086 [2024-07-25 15:25:02.074786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.086 qpair failed and we were unable to recover it.
00:29:10.086 [2024-07-25 15:25:02.075249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.086 [2024-07-25 15:25:02.075257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.086 qpair failed and we were unable to recover it.
00:29:10.086 [2024-07-25 15:25:02.075705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.086 [2024-07-25 15:25:02.075713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.086 qpair failed and we were unable to recover it.
00:29:10.086 [2024-07-25 15:25:02.076191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.086 [2024-07-25 15:25:02.076204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.086 qpair failed and we were unable to recover it.
00:29:10.086 [2024-07-25 15:25:02.076669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.086 [2024-07-25 15:25:02.076676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.086 qpair failed and we were unable to recover it.
00:29:10.086 [2024-07-25 15:25:02.077119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.086 [2024-07-25 15:25:02.077128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.086 qpair failed and we were unable to recover it.
00:29:10.086 [2024-07-25 15:25:02.077595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.086 [2024-07-25 15:25:02.077603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.086 qpair failed and we were unable to recover it.
00:29:10.086 [2024-07-25 15:25:02.077935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.086 [2024-07-25 15:25:02.077943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.086 qpair failed and we were unable to recover it.
00:29:10.086 [2024-07-25 15:25:02.078164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.086 [2024-07-25 15:25:02.078178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.086 qpair failed and we were unable to recover it.
00:29:10.086 [2024-07-25 15:25:02.078691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.086 [2024-07-25 15:25:02.078699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.086 qpair failed and we were unable to recover it.
00:29:10.086 [2024-07-25 15:25:02.079143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.086 [2024-07-25 15:25:02.079150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.086 qpair failed and we were unable to recover it.
00:29:10.086 [2024-07-25 15:25:02.079681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.086 [2024-07-25 15:25:02.079710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.086 qpair failed and we were unable to recover it.
00:29:10.086 [2024-07-25 15:25:02.080171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.086 [2024-07-25 15:25:02.080181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.086 qpair failed and we were unable to recover it.
00:29:10.086 [2024-07-25 15:25:02.080727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.086 [2024-07-25 15:25:02.080756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.086 qpair failed and we were unable to recover it.
00:29:10.086 [2024-07-25 15:25:02.081436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.086 [2024-07-25 15:25:02.081465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.086 qpair failed and we were unable to recover it.
00:29:10.086 [2024-07-25 15:25:02.081933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.086 [2024-07-25 15:25:02.081943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.086 qpair failed and we were unable to recover it.
00:29:10.086 [2024-07-25 15:25:02.082410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.086 [2024-07-25 15:25:02.082439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.086 qpair failed and we were unable to recover it.
00:29:10.086 [2024-07-25 15:25:02.082983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.087 [2024-07-25 15:25:02.082993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.087 qpair failed and we were unable to recover it.
00:29:10.087 [2024-07-25 15:25:02.083421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.087 [2024-07-25 15:25:02.083454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.087 qpair failed and we were unable to recover it.
00:29:10.087 [2024-07-25 15:25:02.083799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.087 [2024-07-25 15:25:02.083809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.087 qpair failed and we were unable to recover it.
00:29:10.087 [2024-07-25 15:25:02.084263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.087 [2024-07-25 15:25:02.084272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.087 qpair failed and we were unable to recover it.
00:29:10.087 [2024-07-25 15:25:02.084797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.087 [2024-07-25 15:25:02.084805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.087 qpair failed and we were unable to recover it.
00:29:10.087 [2024-07-25 15:25:02.085267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.087 [2024-07-25 15:25:02.085275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.087 qpair failed and we were unable to recover it.
00:29:10.087 [2024-07-25 15:25:02.085596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.087 [2024-07-25 15:25:02.085605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.087 qpair failed and we were unable to recover it.
00:29:10.087 [2024-07-25 15:25:02.085824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.087 [2024-07-25 15:25:02.085836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.087 qpair failed and we were unable to recover it.
00:29:10.087 [2024-07-25 15:25:02.086293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.087 [2024-07-25 15:25:02.086301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.087 qpair failed and we were unable to recover it.
00:29:10.087 [2024-07-25 15:25:02.086740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.087 [2024-07-25 15:25:02.086749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.087 qpair failed and we were unable to recover it.
00:29:10.087 [2024-07-25 15:25:02.087223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.087 [2024-07-25 15:25:02.087232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.087 qpair failed and we were unable to recover it.
00:29:10.087 [2024-07-25 15:25:02.087708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.087 [2024-07-25 15:25:02.087716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.087 qpair failed and we were unable to recover it.
00:29:10.087 [2024-07-25 15:25:02.088248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.087 [2024-07-25 15:25:02.088256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.087 qpair failed and we were unable to recover it.
00:29:10.087 [2024-07-25 15:25:02.088485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.087 [2024-07-25 15:25:02.088496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.087 qpair failed and we were unable to recover it.
00:29:10.087 [2024-07-25 15:25:02.088997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.087 [2024-07-25 15:25:02.089006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.087 qpair failed and we were unable to recover it.
00:29:10.087 [2024-07-25 15:25:02.089482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.087 [2024-07-25 15:25:02.089491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.087 qpair failed and we were unable to recover it.
00:29:10.087 [2024-07-25 15:25:02.090023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.087 [2024-07-25 15:25:02.090031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.087 qpair failed and we were unable to recover it.
00:29:10.087 [2024-07-25 15:25:02.090424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.087 [2024-07-25 15:25:02.090432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.087 qpair failed and we were unable to recover it.
00:29:10.087 [2024-07-25 15:25:02.090677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.087 [2024-07-25 15:25:02.090684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.087 qpair failed and we were unable to recover it.
00:29:10.087 [2024-07-25 15:25:02.091206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.087 [2024-07-25 15:25:02.091216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.087 qpair failed and we were unable to recover it.
00:29:10.087 [2024-07-25 15:25:02.091666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.087 [2024-07-25 15:25:02.091673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.087 qpair failed and we were unable to recover it.
00:29:10.087 [2024-07-25 15:25:02.092160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.087 [2024-07-25 15:25:02.092168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.087 qpair failed and we were unable to recover it.
00:29:10.087 [2024-07-25 15:25:02.092564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.087 [2024-07-25 15:25:02.092573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.087 qpair failed and we were unable to recover it.
00:29:10.087 [2024-07-25 15:25:02.093050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.087 [2024-07-25 15:25:02.093058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.087 qpair failed and we were unable to recover it.
00:29:10.087 [2024-07-25 15:25:02.093604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.087 [2024-07-25 15:25:02.093633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.087 qpair failed and we were unable to recover it.
00:29:10.087 [2024-07-25 15:25:02.093975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.087 [2024-07-25 15:25:02.093985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.087 qpair failed and we were unable to recover it.
00:29:10.087 [2024-07-25 15:25:02.094556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.087 [2024-07-25 15:25:02.094585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.087 qpair failed and we were unable to recover it.
00:29:10.087 [2024-07-25 15:25:02.095063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.087 [2024-07-25 15:25:02.095073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.087 qpair failed and we were unable to recover it.
00:29:10.087 [2024-07-25 15:25:02.095520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.087 [2024-07-25 15:25:02.095549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.087 qpair failed and we were unable to recover it.
00:29:10.088 [2024-07-25 15:25:02.096015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.088 [2024-07-25 15:25:02.096025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.088 qpair failed and we were unable to recover it.
00:29:10.088 [2024-07-25 15:25:02.096498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.088 [2024-07-25 15:25:02.096526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.088 qpair failed and we were unable to recover it.
00:29:10.088 [2024-07-25 15:25:02.096966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.088 [2024-07-25 15:25:02.096976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.088 qpair failed and we were unable to recover it.
00:29:10.088 [2024-07-25 15:25:02.097518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.088 [2024-07-25 15:25:02.097547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.088 qpair failed and we were unable to recover it.
00:29:10.088 [2024-07-25 15:25:02.098012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.088 [2024-07-25 15:25:02.098022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.088 qpair failed and we were unable to recover it.
00:29:10.088 [2024-07-25 15:25:02.098601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.088 [2024-07-25 15:25:02.098630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.088 qpair failed and we were unable to recover it.
00:29:10.088 [2024-07-25 15:25:02.099094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.088 [2024-07-25 15:25:02.099104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.088 qpair failed and we were unable to recover it.
00:29:10.088 [2024-07-25 15:25:02.099522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.088 [2024-07-25 15:25:02.099551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.088 qpair failed and we were unable to recover it.
00:29:10.088 [2024-07-25 15:25:02.100014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.088 [2024-07-25 15:25:02.100024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.088 qpair failed and we were unable to recover it.
00:29:10.088 [2024-07-25 15:25:02.100573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.088 [2024-07-25 15:25:02.100602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.088 qpair failed and we were unable to recover it.
00:29:10.088 [2024-07-25 15:25:02.100932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.088 [2024-07-25 15:25:02.100943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.088 qpair failed and we were unable to recover it.
00:29:10.088 [2024-07-25 15:25:02.101430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.088 [2024-07-25 15:25:02.101458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.088 qpair failed and we were unable to recover it.
00:29:10.088 [2024-07-25 15:25:02.101920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.088 [2024-07-25 15:25:02.101933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.088 qpair failed and we were unable to recover it.
00:29:10.088 [2024-07-25 15:25:02.102420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.088 [2024-07-25 15:25:02.102449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.088 qpair failed and we were unable to recover it.
00:29:10.088 [2024-07-25 15:25:02.102908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.088 [2024-07-25 15:25:02.102918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.088 qpair failed and we were unable to recover it.
00:29:10.088 [2024-07-25 15:25:02.103370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.088 [2024-07-25 15:25:02.103378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.088 qpair failed and we were unable to recover it.
00:29:10.088 [2024-07-25 15:25:02.103697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.088 [2024-07-25 15:25:02.103706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.088 qpair failed and we were unable to recover it.
00:29:10.088 [2024-07-25 15:25:02.104194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.088 [2024-07-25 15:25:02.104205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.088 qpair failed and we were unable to recover it.
00:29:10.088 [2024-07-25 15:25:02.104644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.088 [2024-07-25 15:25:02.104652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.088 qpair failed and we were unable to recover it.
00:29:10.088 [2024-07-25 15:25:02.105103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.088 [2024-07-25 15:25:02.105111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.088 qpair failed and we were unable to recover it.
00:29:10.088 [2024-07-25 15:25:02.105557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.088 [2024-07-25 15:25:02.105567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.088 qpair failed and we were unable to recover it.
00:29:10.088 [2024-07-25 15:25:02.105909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.088 [2024-07-25 15:25:02.105917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.088 qpair failed and we were unable to recover it.
00:29:10.088 [2024-07-25 15:25:02.106511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.088 [2024-07-25 15:25:02.106540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.088 qpair failed and we were unable to recover it.
00:29:10.088 [2024-07-25 15:25:02.106879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.088 [2024-07-25 15:25:02.106890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.088 qpair failed and we were unable to recover it.
00:29:10.088 [2024-07-25 15:25:02.107360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.088 [2024-07-25 15:25:02.107369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.088 qpair failed and we were unable to recover it.
00:29:10.088 [2024-07-25 15:25:02.107843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.088 [2024-07-25 15:25:02.107851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.088 qpair failed and we were unable to recover it.
00:29:10.088 [2024-07-25 15:25:02.108301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.088 [2024-07-25 15:25:02.108309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.088 qpair failed and we were unable to recover it.
00:29:10.088 [2024-07-25 15:25:02.108761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.088 [2024-07-25 15:25:02.108769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.088 qpair failed and we were unable to recover it.
00:29:10.088 [2024-07-25 15:25:02.109109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.088 [2024-07-25 15:25:02.109117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.088 qpair failed and we were unable to recover it.
00:29:10.088 [2024-07-25 15:25:02.109551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.088 [2024-07-25 15:25:02.109559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.088 qpair failed and we were unable to recover it.
00:29:10.088 [2024-07-25 15:25:02.110053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.088 [2024-07-25 15:25:02.110061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.088 qpair failed and we were unable to recover it.
00:29:10.088 [2024-07-25 15:25:02.110605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.088 [2024-07-25 15:25:02.110635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.088 qpair failed and we were unable to recover it.
00:29:10.088 [2024-07-25 15:25:02.111101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.088 [2024-07-25 15:25:02.111112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.088 qpair failed and we were unable to recover it.
00:29:10.088 [2024-07-25 15:25:02.111466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.088 [2024-07-25 15:25:02.111476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.088 qpair failed and we were unable to recover it.
00:29:10.088 [2024-07-25 15:25:02.111926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.088 [2024-07-25 15:25:02.111934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.088 qpair failed and we were unable to recover it.
00:29:10.088 [2024-07-25 15:25:02.112473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.089 [2024-07-25 15:25:02.112502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.089 qpair failed and we were unable to recover it.
00:29:10.089 [2024-07-25 15:25:02.112961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.089 [2024-07-25 15:25:02.112970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.089 qpair failed and we were unable to recover it.
00:29:10.089 [2024-07-25 15:25:02.113555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.089 [2024-07-25 15:25:02.113584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.089 qpair failed and we were unable to recover it.
00:29:10.089 [2024-07-25 15:25:02.114083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.089 [2024-07-25 15:25:02.114094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.089 qpair failed and we were unable to recover it.
00:29:10.089 [2024-07-25 15:25:02.114572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.089 [2024-07-25 15:25:02.114580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.089 qpair failed and we were unable to recover it.
00:29:10.089 [2024-07-25 15:25:02.115029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.089 [2024-07-25 15:25:02.115039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.089 qpair failed and we were unable to recover it.
00:29:10.089 [2024-07-25 15:25:02.115606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.089 [2024-07-25 15:25:02.115635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.089 qpair failed and we were unable to recover it.
00:29:10.089 [2024-07-25 15:25:02.116110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.089 [2024-07-25 15:25:02.116119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.089 qpair failed and we were unable to recover it.
00:29:10.089 [2024-07-25 15:25:02.116506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.089 [2024-07-25 15:25:02.116535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.089 qpair failed and we were unable to recover it.
00:29:10.089 [2024-07-25 15:25:02.116992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.089 [2024-07-25 15:25:02.117002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.089 qpair failed and we were unable to recover it.
00:29:10.089 [2024-07-25 15:25:02.117575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.089 [2024-07-25 15:25:02.117605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.089 qpair failed and we were unable to recover it.
00:29:10.089 [2024-07-25 15:25:02.117964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.089 [2024-07-25 15:25:02.117974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.089 qpair failed and we were unable to recover it.
00:29:10.089 [2024-07-25 15:25:02.118529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.089 [2024-07-25 15:25:02.118557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.089 qpair failed and we were unable to recover it.
00:29:10.089 [2024-07-25 15:25:02.119018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.089 [2024-07-25 15:25:02.119028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.089 qpair failed and we were unable to recover it.
00:29:10.089 [2024-07-25 15:25:02.119602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.089 [2024-07-25 15:25:02.119631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.089 qpair failed and we were unable to recover it.
00:29:10.089 [2024-07-25 15:25:02.120097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.089 [2024-07-25 15:25:02.120106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.089 qpair failed and we were unable to recover it.
00:29:10.089 [2024-07-25 15:25:02.120570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.089 [2024-07-25 15:25:02.120579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.089 qpair failed and we were unable to recover it.
00:29:10.089 [2024-07-25 15:25:02.121026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.089 [2024-07-25 15:25:02.121037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.089 qpair failed and we were unable to recover it.
00:29:10.089 [2024-07-25 15:25:02.121602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.089 [2024-07-25 15:25:02.121631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.089 qpair failed and we were unable to recover it.
00:29:10.089 [2024-07-25 15:25:02.122096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.089 [2024-07-25 15:25:02.122105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.089 qpair failed and we were unable to recover it.
00:29:10.089 [2024-07-25 15:25:02.122657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.089 [2024-07-25 15:25:02.122687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.089 qpair failed and we were unable to recover it.
00:29:10.089 [2024-07-25 15:25:02.123145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.089 [2024-07-25 15:25:02.123155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.089 qpair failed and we were unable to recover it. 00:29:10.089 [2024-07-25 15:25:02.123644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.089 [2024-07-25 15:25:02.123652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.089 qpair failed and we were unable to recover it. 00:29:10.089 [2024-07-25 15:25:02.124102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.089 [2024-07-25 15:25:02.124111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.089 qpair failed and we were unable to recover it. 00:29:10.089 [2024-07-25 15:25:02.124650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.089 [2024-07-25 15:25:02.124679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.089 qpair failed and we were unable to recover it. 00:29:10.089 [2024-07-25 15:25:02.125141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.089 [2024-07-25 15:25:02.125151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.089 qpair failed and we were unable to recover it. 
00:29:10.089 [2024-07-25 15:25:02.125598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.089 [2024-07-25 15:25:02.125607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.089 qpair failed and we were unable to recover it. 00:29:10.089 [2024-07-25 15:25:02.126066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.089 [2024-07-25 15:25:02.126075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.089 qpair failed and we were unable to recover it. 00:29:10.089 [2024-07-25 15:25:02.126609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.089 [2024-07-25 15:25:02.126638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.089 qpair failed and we were unable to recover it. 00:29:10.089 [2024-07-25 15:25:02.127089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.089 [2024-07-25 15:25:02.127098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.089 qpair failed and we were unable to recover it. 00:29:10.090 [2024-07-25 15:25:02.127644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.090 [2024-07-25 15:25:02.127673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.090 qpair failed and we were unable to recover it. 
00:29:10.090 [2024-07-25 15:25:02.128130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.090 [2024-07-25 15:25:02.128140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.090 qpair failed and we were unable to recover it. 00:29:10.090 [2024-07-25 15:25:02.128589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.090 [2024-07-25 15:25:02.128598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.090 qpair failed and we were unable to recover it. 00:29:10.090 [2024-07-25 15:25:02.129038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.090 [2024-07-25 15:25:02.129046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.090 qpair failed and we were unable to recover it. 00:29:10.090 [2024-07-25 15:25:02.129611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.090 [2024-07-25 15:25:02.129640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.090 qpair failed and we were unable to recover it. 00:29:10.090 [2024-07-25 15:25:02.130104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.090 [2024-07-25 15:25:02.130114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.090 qpair failed and we were unable to recover it. 
00:29:10.090 [2024-07-25 15:25:02.130671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.090 [2024-07-25 15:25:02.130700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.090 qpair failed and we were unable to recover it. 00:29:10.090 [2024-07-25 15:25:02.131160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.090 [2024-07-25 15:25:02.131171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.090 qpair failed and we were unable to recover it. 00:29:10.090 [2024-07-25 15:25:02.131581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.090 [2024-07-25 15:25:02.131611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.090 qpair failed and we were unable to recover it. 00:29:10.090 [2024-07-25 15:25:02.132074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.090 [2024-07-25 15:25:02.132083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.090 qpair failed and we were unable to recover it. 00:29:10.090 [2024-07-25 15:25:02.132622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.090 [2024-07-25 15:25:02.132652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.090 qpair failed and we were unable to recover it. 
00:29:10.090 [2024-07-25 15:25:02.133115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.090 [2024-07-25 15:25:02.133125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.090 qpair failed and we were unable to recover it. 00:29:10.090 [2024-07-25 15:25:02.133727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.090 [2024-07-25 15:25:02.133756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.090 qpair failed and we were unable to recover it. 00:29:10.090 [2024-07-25 15:25:02.134224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.090 [2024-07-25 15:25:02.134243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.090 qpair failed and we were unable to recover it. 00:29:10.090 [2024-07-25 15:25:02.134749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.090 [2024-07-25 15:25:02.134758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.090 qpair failed and we were unable to recover it. 00:29:10.090 [2024-07-25 15:25:02.135208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.090 [2024-07-25 15:25:02.135216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.090 qpair failed and we were unable to recover it. 
00:29:10.090 [2024-07-25 15:25:02.135629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.090 [2024-07-25 15:25:02.135637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.090 qpair failed and we were unable to recover it. 00:29:10.090 [2024-07-25 15:25:02.136090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.090 [2024-07-25 15:25:02.136098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.090 qpair failed and we were unable to recover it. 00:29:10.090 [2024-07-25 15:25:02.136637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.090 [2024-07-25 15:25:02.136645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.090 qpair failed and we were unable to recover it. 00:29:10.090 [2024-07-25 15:25:02.137020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.090 [2024-07-25 15:25:02.137028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.090 qpair failed and we were unable to recover it. 00:29:10.090 [2024-07-25 15:25:02.137594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.090 [2024-07-25 15:25:02.137623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.090 qpair failed and we were unable to recover it. 
00:29:10.090 [2024-07-25 15:25:02.138085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.090 [2024-07-25 15:25:02.138094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.090 qpair failed and we were unable to recover it. 00:29:10.090 [2024-07-25 15:25:02.138658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.090 [2024-07-25 15:25:02.138686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.090 qpair failed and we were unable to recover it. 00:29:10.090 [2024-07-25 15:25:02.139147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.090 [2024-07-25 15:25:02.139157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.090 qpair failed and we were unable to recover it. 00:29:10.090 [2024-07-25 15:25:02.139643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.090 [2024-07-25 15:25:02.139652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.090 qpair failed and we were unable to recover it. 00:29:10.090 [2024-07-25 15:25:02.140116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.090 [2024-07-25 15:25:02.140124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.090 qpair failed and we were unable to recover it. 
00:29:10.090 [2024-07-25 15:25:02.140661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.090 [2024-07-25 15:25:02.140690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.091 qpair failed and we were unable to recover it. 00:29:10.091 [2024-07-25 15:25:02.141152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.091 [2024-07-25 15:25:02.141165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.091 qpair failed and we were unable to recover it. 00:29:10.091 [2024-07-25 15:25:02.141711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.091 [2024-07-25 15:25:02.141740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.091 qpair failed and we were unable to recover it. 00:29:10.091 [2024-07-25 15:25:02.142196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.091 [2024-07-25 15:25:02.142212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.091 qpair failed and we were unable to recover it. 00:29:10.091 [2024-07-25 15:25:02.142756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.091 [2024-07-25 15:25:02.142785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.091 qpair failed and we were unable to recover it. 
00:29:10.091 [2024-07-25 15:25:02.143413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.091 [2024-07-25 15:25:02.143442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.091 qpair failed and we were unable to recover it. 00:29:10.091 [2024-07-25 15:25:02.143917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.091 [2024-07-25 15:25:02.143927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.091 qpair failed and we were unable to recover it. 00:29:10.091 [2024-07-25 15:25:02.144480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.091 [2024-07-25 15:25:02.144508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.091 qpair failed and we were unable to recover it. 00:29:10.091 [2024-07-25 15:25:02.144971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.091 [2024-07-25 15:25:02.144981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.091 qpair failed and we were unable to recover it. 00:29:10.091 [2024-07-25 15:25:02.145529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.091 [2024-07-25 15:25:02.145557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.091 qpair failed and we were unable to recover it. 
00:29:10.091 [2024-07-25 15:25:02.146032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.091 [2024-07-25 15:25:02.146042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.091 qpair failed and we were unable to recover it. 00:29:10.091 [2024-07-25 15:25:02.146596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.091 [2024-07-25 15:25:02.146625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.091 qpair failed and we were unable to recover it. 00:29:10.091 [2024-07-25 15:25:02.147094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.091 [2024-07-25 15:25:02.147104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.091 qpair failed and we were unable to recover it. 00:29:10.091 [2024-07-25 15:25:02.147634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.091 [2024-07-25 15:25:02.147664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.091 qpair failed and we were unable to recover it. 00:29:10.091 [2024-07-25 15:25:02.148139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.091 [2024-07-25 15:25:02.148149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.091 qpair failed and we were unable to recover it. 
00:29:10.091 [2024-07-25 15:25:02.148690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.091 [2024-07-25 15:25:02.148700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.091 qpair failed and we were unable to recover it. 00:29:10.091 [2024-07-25 15:25:02.149146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.091 [2024-07-25 15:25:02.149154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.091 qpair failed and we were unable to recover it. 00:29:10.091 [2024-07-25 15:25:02.149772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.091 [2024-07-25 15:25:02.149802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.091 qpair failed and we were unable to recover it. 00:29:10.091 [2024-07-25 15:25:02.150369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.091 [2024-07-25 15:25:02.150398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.091 qpair failed and we were unable to recover it. 00:29:10.091 [2024-07-25 15:25:02.150856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.091 [2024-07-25 15:25:02.150867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.091 qpair failed and we were unable to recover it. 
00:29:10.091 [2024-07-25 15:25:02.151495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.091 [2024-07-25 15:25:02.151523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.091 qpair failed and we were unable to recover it. 00:29:10.091 [2024-07-25 15:25:02.152007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.091 [2024-07-25 15:25:02.152017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.091 qpair failed and we were unable to recover it. 00:29:10.091 [2024-07-25 15:25:02.152577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.091 [2024-07-25 15:25:02.152607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.091 qpair failed and we were unable to recover it. 00:29:10.091 [2024-07-25 15:25:02.153066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.091 [2024-07-25 15:25:02.153077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.091 qpair failed and we were unable to recover it. 00:29:10.091 [2024-07-25 15:25:02.153622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.091 [2024-07-25 15:25:02.153651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.091 qpair failed and we were unable to recover it. 
00:29:10.091 [2024-07-25 15:25:02.154115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.091 [2024-07-25 15:25:02.154124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.091 qpair failed and we were unable to recover it. 00:29:10.091 [2024-07-25 15:25:02.154657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.091 [2024-07-25 15:25:02.154686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.091 qpair failed and we were unable to recover it. 00:29:10.091 [2024-07-25 15:25:02.155147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.091 [2024-07-25 15:25:02.155157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.091 qpair failed and we were unable to recover it. 00:29:10.091 [2024-07-25 15:25:02.155698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.091 [2024-07-25 15:25:02.155730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.091 qpair failed and we were unable to recover it. 00:29:10.091 [2024-07-25 15:25:02.156192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.091 [2024-07-25 15:25:02.156207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.091 qpair failed and we were unable to recover it. 
00:29:10.091 [2024-07-25 15:25:02.156763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.091 [2024-07-25 15:25:02.156792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.091 qpair failed and we were unable to recover it. 00:29:10.091 [2024-07-25 15:25:02.157365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.091 [2024-07-25 15:25:02.157394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.091 qpair failed and we were unable to recover it. 00:29:10.091 [2024-07-25 15:25:02.157852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.091 [2024-07-25 15:25:02.157862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.091 qpair failed and we were unable to recover it. 00:29:10.091 [2024-07-25 15:25:02.158450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.091 [2024-07-25 15:25:02.158478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.091 qpair failed and we were unable to recover it. 00:29:10.092 [2024-07-25 15:25:02.158944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.092 [2024-07-25 15:25:02.158953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.092 qpair failed and we were unable to recover it. 
00:29:10.092 [2024-07-25 15:25:02.159507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.092 [2024-07-25 15:25:02.159536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.092 qpair failed and we were unable to recover it. 00:29:10.092 [2024-07-25 15:25:02.160000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.092 [2024-07-25 15:25:02.160011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.092 qpair failed and we were unable to recover it. 00:29:10.092 [2024-07-25 15:25:02.160557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.092 [2024-07-25 15:25:02.160586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.092 qpair failed and we were unable to recover it. 00:29:10.092 [2024-07-25 15:25:02.161058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.092 [2024-07-25 15:25:02.161067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.092 qpair failed and we were unable to recover it. 00:29:10.092 [2024-07-25 15:25:02.161617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.092 [2024-07-25 15:25:02.161646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.092 qpair failed and we were unable to recover it. 
00:29:10.092 [2024-07-25 15:25:02.162105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.092 [2024-07-25 15:25:02.162115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.092 qpair failed and we were unable to recover it. 00:29:10.092 [2024-07-25 15:25:02.162656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.092 [2024-07-25 15:25:02.162684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.092 qpair failed and we were unable to recover it. 00:29:10.092 [2024-07-25 15:25:02.163165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.092 [2024-07-25 15:25:02.163175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.092 qpair failed and we were unable to recover it. 00:29:10.092 [2024-07-25 15:25:02.163718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.092 [2024-07-25 15:25:02.163747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.092 qpair failed and we were unable to recover it. 00:29:10.092 [2024-07-25 15:25:02.164209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.092 [2024-07-25 15:25:02.164218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.092 qpair failed and we were unable to recover it. 
00:29:10.095 [2024-07-25 15:25:02.219677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.095 [2024-07-25 15:25:02.219706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.095 qpair failed and we were unable to recover it. 00:29:10.095 [2024-07-25 15:25:02.220163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.095 [2024-07-25 15:25:02.220173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.095 qpair failed and we were unable to recover it. 00:29:10.095 [2024-07-25 15:25:02.220710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.095 [2024-07-25 15:25:02.220739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.095 qpair failed and we were unable to recover it. 00:29:10.095 [2024-07-25 15:25:02.221213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.095 [2024-07-25 15:25:02.221224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.095 qpair failed and we were unable to recover it. 00:29:10.095 [2024-07-25 15:25:02.221681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.095 [2024-07-25 15:25:02.221689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.095 qpair failed and we were unable to recover it. 
00:29:10.095 [2024-07-25 15:25:02.222139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.095 [2024-07-25 15:25:02.222147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.095 qpair failed and we were unable to recover it. 00:29:10.095 [2024-07-25 15:25:02.222503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.095 [2024-07-25 15:25:02.222511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.095 qpair failed and we were unable to recover it. 00:29:10.095 [2024-07-25 15:25:02.222983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.095 [2024-07-25 15:25:02.222990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.095 qpair failed and we were unable to recover it. 00:29:10.095 [2024-07-25 15:25:02.223533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.096 [2024-07-25 15:25:02.223562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.096 qpair failed and we were unable to recover it. 00:29:10.096 [2024-07-25 15:25:02.224025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.096 [2024-07-25 15:25:02.224035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.096 qpair failed and we were unable to recover it. 
00:29:10.096 [2024-07-25 15:25:02.224584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.096 [2024-07-25 15:25:02.224613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.096 qpair failed and we were unable to recover it. 00:29:10.096 [2024-07-25 15:25:02.225093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.096 [2024-07-25 15:25:02.225103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.096 qpair failed and we were unable to recover it. 00:29:10.096 [2024-07-25 15:25:02.225576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.096 [2024-07-25 15:25:02.225585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.096 qpair failed and we were unable to recover it. 00:29:10.096 [2024-07-25 15:25:02.226040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.096 [2024-07-25 15:25:02.226047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.096 qpair failed and we were unable to recover it. 00:29:10.096 [2024-07-25 15:25:02.226590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.096 [2024-07-25 15:25:02.226619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.096 qpair failed and we were unable to recover it. 
00:29:10.096 [2024-07-25 15:25:02.227093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.096 [2024-07-25 15:25:02.227103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.096 qpair failed and we were unable to recover it. 00:29:10.096 [2024-07-25 15:25:02.227725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.096 [2024-07-25 15:25:02.227755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.096 qpair failed and we were unable to recover it. 00:29:10.096 [2024-07-25 15:25:02.228224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.096 [2024-07-25 15:25:02.228243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.096 qpair failed and we were unable to recover it. 00:29:10.096 [2024-07-25 15:25:02.228706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.096 [2024-07-25 15:25:02.228714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.096 qpair failed and we were unable to recover it. 00:29:10.096 [2024-07-25 15:25:02.229097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.096 [2024-07-25 15:25:02.229105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.096 qpair failed and we were unable to recover it. 
00:29:10.096 [2024-07-25 15:25:02.229587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.096 [2024-07-25 15:25:02.229596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.096 qpair failed and we were unable to recover it. 00:29:10.096 [2024-07-25 15:25:02.230057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.096 [2024-07-25 15:25:02.230065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.096 qpair failed and we were unable to recover it. 00:29:10.096 [2024-07-25 15:25:02.230609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.096 [2024-07-25 15:25:02.230638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.096 qpair failed and we were unable to recover it. 00:29:10.096 [2024-07-25 15:25:02.231111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.096 [2024-07-25 15:25:02.231121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.096 qpair failed and we were unable to recover it. 00:29:10.096 [2024-07-25 15:25:02.231586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.096 [2024-07-25 15:25:02.231595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.096 qpair failed and we were unable to recover it. 
00:29:10.096 [2024-07-25 15:25:02.232088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.096 [2024-07-25 15:25:02.232097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.096 qpair failed and we were unable to recover it. 00:29:10.096 [2024-07-25 15:25:02.232540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.096 [2024-07-25 15:25:02.232549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.096 qpair failed and we were unable to recover it. 00:29:10.096 [2024-07-25 15:25:02.233026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.096 [2024-07-25 15:25:02.233034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.096 qpair failed and we were unable to recover it. 00:29:10.096 [2024-07-25 15:25:02.233583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.096 [2024-07-25 15:25:02.233613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.096 qpair failed and we were unable to recover it. 00:29:10.096 [2024-07-25 15:25:02.234072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.096 [2024-07-25 15:25:02.234082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.096 qpair failed and we were unable to recover it. 
00:29:10.096 [2024-07-25 15:25:02.234619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.096 [2024-07-25 15:25:02.234648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.096 qpair failed and we were unable to recover it. 00:29:10.096 [2024-07-25 15:25:02.235126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.096 [2024-07-25 15:25:02.235136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.096 qpair failed and we were unable to recover it. 00:29:10.096 [2024-07-25 15:25:02.235582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.096 [2024-07-25 15:25:02.235611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.096 qpair failed and we were unable to recover it. 00:29:10.096 [2024-07-25 15:25:02.236070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.096 [2024-07-25 15:25:02.236083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.096 qpair failed and we were unable to recover it. 00:29:10.096 [2024-07-25 15:25:02.236539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.096 [2024-07-25 15:25:02.236548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.096 qpair failed and we were unable to recover it. 
00:29:10.096 [2024-07-25 15:25:02.237029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.096 [2024-07-25 15:25:02.237037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.096 qpair failed and we were unable to recover it. 00:29:10.096 [2024-07-25 15:25:02.237588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.096 [2024-07-25 15:25:02.237618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.096 qpair failed and we were unable to recover it. 00:29:10.096 [2024-07-25 15:25:02.237976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.096 [2024-07-25 15:25:02.237986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.096 qpair failed and we were unable to recover it. 00:29:10.096 [2024-07-25 15:25:02.238398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.096 [2024-07-25 15:25:02.238427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.096 qpair failed and we were unable to recover it. 00:29:10.096 [2024-07-25 15:25:02.238656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.097 [2024-07-25 15:25:02.238669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.097 qpair failed and we were unable to recover it. 
00:29:10.097 [2024-07-25 15:25:02.239146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.097 [2024-07-25 15:25:02.239154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.097 qpair failed and we were unable to recover it. 00:29:10.097 [2024-07-25 15:25:02.239615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.097 [2024-07-25 15:25:02.239624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.097 qpair failed and we were unable to recover it. 00:29:10.097 [2024-07-25 15:25:02.240082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.097 [2024-07-25 15:25:02.240091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.097 qpair failed and we were unable to recover it. 00:29:10.097 [2024-07-25 15:25:02.240431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.097 [2024-07-25 15:25:02.240440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.097 qpair failed and we were unable to recover it. 00:29:10.097 [2024-07-25 15:25:02.240890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.097 [2024-07-25 15:25:02.240898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.097 qpair failed and we were unable to recover it. 
00:29:10.097 [2024-07-25 15:25:02.241347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.097 [2024-07-25 15:25:02.241355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.097 qpair failed and we were unable to recover it. 00:29:10.097 [2024-07-25 15:25:02.241574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.097 [2024-07-25 15:25:02.241585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.097 qpair failed and we were unable to recover it. 00:29:10.097 [2024-07-25 15:25:02.242055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.097 [2024-07-25 15:25:02.242063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.097 qpair failed and we were unable to recover it. 00:29:10.097 [2024-07-25 15:25:02.242515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.097 [2024-07-25 15:25:02.242523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.097 qpair failed and we were unable to recover it. 00:29:10.097 [2024-07-25 15:25:02.242843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.097 [2024-07-25 15:25:02.242852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.097 qpair failed and we were unable to recover it. 
00:29:10.097 [2024-07-25 15:25:02.243303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.097 [2024-07-25 15:25:02.243311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.097 qpair failed and we were unable to recover it. 00:29:10.097 [2024-07-25 15:25:02.243658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.097 [2024-07-25 15:25:02.243666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.097 qpair failed and we were unable to recover it. 00:29:10.097 [2024-07-25 15:25:02.244159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.097 [2024-07-25 15:25:02.244166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.097 qpair failed and we were unable to recover it. 00:29:10.097 [2024-07-25 15:25:02.244620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.097 [2024-07-25 15:25:02.244628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.097 qpair failed and we were unable to recover it. 00:29:10.097 [2024-07-25 15:25:02.245076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.097 [2024-07-25 15:25:02.245084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.097 qpair failed and we were unable to recover it. 
00:29:10.097 [2024-07-25 15:25:02.245648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.097 [2024-07-25 15:25:02.245676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.097 qpair failed and we were unable to recover it. 00:29:10.097 [2024-07-25 15:25:02.246021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.097 [2024-07-25 15:25:02.246031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.097 qpair failed and we were unable to recover it. 00:29:10.097 [2024-07-25 15:25:02.246585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.097 [2024-07-25 15:25:02.246613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.097 qpair failed and we were unable to recover it. 00:29:10.097 [2024-07-25 15:25:02.247079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.097 [2024-07-25 15:25:02.247089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.097 qpair failed and we were unable to recover it. 00:29:10.097 [2024-07-25 15:25:02.247528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.097 [2024-07-25 15:25:02.247537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.097 qpair failed and we were unable to recover it. 
00:29:10.097 [2024-07-25 15:25:02.248021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.097 [2024-07-25 15:25:02.248029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.097 qpair failed and we were unable to recover it. 00:29:10.097 [2024-07-25 15:25:02.248438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.097 [2024-07-25 15:25:02.248467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.097 qpair failed and we were unable to recover it. 00:29:10.097 [2024-07-25 15:25:02.248920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.097 [2024-07-25 15:25:02.248929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.097 qpair failed and we were unable to recover it. 00:29:10.097 [2024-07-25 15:25:02.249500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.097 [2024-07-25 15:25:02.249529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.097 qpair failed and we were unable to recover it. 00:29:10.097 [2024-07-25 15:25:02.249988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.097 [2024-07-25 15:25:02.249998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.097 qpair failed and we were unable to recover it. 
00:29:10.097 [2024-07-25 15:25:02.250544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.097 [2024-07-25 15:25:02.250573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.097 qpair failed and we were unable to recover it. 00:29:10.097 [2024-07-25 15:25:02.251036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.097 [2024-07-25 15:25:02.251046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.097 qpair failed and we were unable to recover it. 00:29:10.097 [2024-07-25 15:25:02.251579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.097 [2024-07-25 15:25:02.251608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.097 qpair failed and we were unable to recover it. 00:29:10.097 [2024-07-25 15:25:02.252071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.097 [2024-07-25 15:25:02.252082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.097 qpair failed and we were unable to recover it. 00:29:10.097 [2024-07-25 15:25:02.252619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.097 [2024-07-25 15:25:02.252649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.097 qpair failed and we were unable to recover it. 
00:29:10.097 [2024-07-25 15:25:02.253111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.097 [2024-07-25 15:25:02.253120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.097 qpair failed and we were unable to recover it. 00:29:10.097 [2024-07-25 15:25:02.253660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.097 [2024-07-25 15:25:02.253690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.097 qpair failed and we were unable to recover it. 00:29:10.097 [2024-07-25 15:25:02.254153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.097 [2024-07-25 15:25:02.254162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.097 qpair failed and we were unable to recover it. 00:29:10.097 [2024-07-25 15:25:02.254700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.097 [2024-07-25 15:25:02.254732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.097 qpair failed and we were unable to recover it. 00:29:10.097 [2024-07-25 15:25:02.255187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.097 [2024-07-25 15:25:02.255197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.097 qpair failed and we were unable to recover it. 
00:29:10.098 [2024-07-25 15:25:02.255651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.098 [2024-07-25 15:25:02.255680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.098 qpair failed and we were unable to recover it. 00:29:10.098 [2024-07-25 15:25:02.256143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.098 [2024-07-25 15:25:02.256153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.098 qpair failed and we were unable to recover it. 00:29:10.098 [2024-07-25 15:25:02.256694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.098 [2024-07-25 15:25:02.256723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.098 qpair failed and we were unable to recover it. 00:29:10.098 [2024-07-25 15:25:02.257182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.098 [2024-07-25 15:25:02.257192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.098 qpair failed and we were unable to recover it. 00:29:10.098 [2024-07-25 15:25:02.257757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.098 [2024-07-25 15:25:02.257787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.098 qpair failed and we were unable to recover it. 
00:29:10.371 [2024-07-25 15:25:02.258400 .. 15:25:02.310912] (the same three-line failure sequence repeats ~110 more times: posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.)
00:29:10.374 [2024-07-25 15:25:02.311499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.374 [2024-07-25 15:25:02.311528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.374 qpair failed and we were unable to recover it. 00:29:10.374 [2024-07-25 15:25:02.312008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.374 [2024-07-25 15:25:02.312019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.374 qpair failed and we were unable to recover it. 00:29:10.374 [2024-07-25 15:25:02.312572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.374 [2024-07-25 15:25:02.312600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.374 qpair failed and we were unable to recover it. 00:29:10.374 [2024-07-25 15:25:02.312946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.374 [2024-07-25 15:25:02.312959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.374 qpair failed and we were unable to recover it. 00:29:10.374 [2024-07-25 15:25:02.313492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.374 [2024-07-25 15:25:02.313520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.374 qpair failed and we were unable to recover it. 
00:29:10.374 [2024-07-25 15:25:02.314000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.374 [2024-07-25 15:25:02.314010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.374 qpair failed and we were unable to recover it. 00:29:10.374 [2024-07-25 15:25:02.314563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.374 [2024-07-25 15:25:02.314592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.374 qpair failed and we were unable to recover it. 00:29:10.374 [2024-07-25 15:25:02.315059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.375 [2024-07-25 15:25:02.315070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.375 qpair failed and we were unable to recover it. 00:29:10.375 [2024-07-25 15:25:02.315635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.375 [2024-07-25 15:25:02.315664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.375 qpair failed and we were unable to recover it. 00:29:10.375 [2024-07-25 15:25:02.316151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.375 [2024-07-25 15:25:02.316161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.375 qpair failed and we were unable to recover it. 
00:29:10.375 [2024-07-25 15:25:02.316700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.375 [2024-07-25 15:25:02.316728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.375 qpair failed and we were unable to recover it. 00:29:10.375 [2024-07-25 15:25:02.317185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.375 [2024-07-25 15:25:02.317195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.375 qpair failed and we were unable to recover it. 00:29:10.375 [2024-07-25 15:25:02.317651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.375 [2024-07-25 15:25:02.317680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.375 qpair failed and we were unable to recover it. 00:29:10.375 [2024-07-25 15:25:02.318161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.375 [2024-07-25 15:25:02.318171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.375 qpair failed and we were unable to recover it. 00:29:10.375 [2024-07-25 15:25:02.318599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.375 [2024-07-25 15:25:02.318628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.375 qpair failed and we were unable to recover it. 
00:29:10.375 [2024-07-25 15:25:02.319093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.375 [2024-07-25 15:25:02.319103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.375 qpair failed and we were unable to recover it. 00:29:10.375 [2024-07-25 15:25:02.319611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.375 [2024-07-25 15:25:02.319621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.375 qpair failed and we were unable to recover it. 00:29:10.375 [2024-07-25 15:25:02.320102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.375 [2024-07-25 15:25:02.320111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.375 qpair failed and we were unable to recover it. 00:29:10.375 [2024-07-25 15:25:02.320562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.375 [2024-07-25 15:25:02.320571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.375 qpair failed and we were unable to recover it. 00:29:10.375 [2024-07-25 15:25:02.321031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.375 [2024-07-25 15:25:02.321039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.375 qpair failed and we were unable to recover it. 
00:29:10.375 [2024-07-25 15:25:02.321618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.375 [2024-07-25 15:25:02.321648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.375 qpair failed and we were unable to recover it. 00:29:10.375 [2024-07-25 15:25:02.322106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.375 [2024-07-25 15:25:02.322115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.375 qpair failed and we were unable to recover it. 00:29:10.375 [2024-07-25 15:25:02.322653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.375 [2024-07-25 15:25:02.322682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.375 qpair failed and we were unable to recover it. 00:29:10.375 [2024-07-25 15:25:02.323040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.375 [2024-07-25 15:25:02.323051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.375 qpair failed and we were unable to recover it. 00:29:10.375 [2024-07-25 15:25:02.323618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.375 [2024-07-25 15:25:02.323647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.375 qpair failed and we were unable to recover it. 
00:29:10.375 [2024-07-25 15:25:02.324127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.375 [2024-07-25 15:25:02.324137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.375 qpair failed and we were unable to recover it. 00:29:10.375 [2024-07-25 15:25:02.324682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.375 [2024-07-25 15:25:02.324711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.375 qpair failed and we were unable to recover it. 00:29:10.375 [2024-07-25 15:25:02.325177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.375 [2024-07-25 15:25:02.325187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.375 qpair failed and we were unable to recover it. 00:29:10.375 [2024-07-25 15:25:02.325630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.375 [2024-07-25 15:25:02.325658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.375 qpair failed and we were unable to recover it. 00:29:10.375 [2024-07-25 15:25:02.326000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.375 [2024-07-25 15:25:02.326010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.375 qpair failed and we were unable to recover it. 
00:29:10.375 [2024-07-25 15:25:02.326620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.375 [2024-07-25 15:25:02.326649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.375 qpair failed and we were unable to recover it. 00:29:10.375 [2024-07-25 15:25:02.327114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.375 [2024-07-25 15:25:02.327124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.375 qpair failed and we were unable to recover it. 00:29:10.375 [2024-07-25 15:25:02.327595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.375 [2024-07-25 15:25:02.327604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.375 qpair failed and we were unable to recover it. 00:29:10.375 [2024-07-25 15:25:02.328052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.375 [2024-07-25 15:25:02.328061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.375 qpair failed and we were unable to recover it. 00:29:10.375 [2024-07-25 15:25:02.328589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.375 [2024-07-25 15:25:02.328618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.375 qpair failed and we were unable to recover it. 
00:29:10.375 [2024-07-25 15:25:02.329088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.375 [2024-07-25 15:25:02.329098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.375 qpair failed and we were unable to recover it. 00:29:10.375 [2024-07-25 15:25:02.329645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.375 [2024-07-25 15:25:02.329674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.375 qpair failed and we were unable to recover it. 00:29:10.375 [2024-07-25 15:25:02.330168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.375 [2024-07-25 15:25:02.330178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.375 qpair failed and we were unable to recover it. 00:29:10.375 [2024-07-25 15:25:02.330735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.375 [2024-07-25 15:25:02.330764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.375 qpair failed and we were unable to recover it. 00:29:10.375 [2024-07-25 15:25:02.331409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.375 [2024-07-25 15:25:02.331438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.375 qpair failed and we were unable to recover it. 
00:29:10.375 [2024-07-25 15:25:02.331936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.375 [2024-07-25 15:25:02.331946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.375 qpair failed and we were unable to recover it. 00:29:10.375 [2024-07-25 15:25:02.332485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.375 [2024-07-25 15:25:02.332515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.375 qpair failed and we were unable to recover it. 00:29:10.375 [2024-07-25 15:25:02.332875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.375 [2024-07-25 15:25:02.332885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.375 qpair failed and we were unable to recover it. 00:29:10.376 [2024-07-25 15:25:02.333438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.376 [2024-07-25 15:25:02.333470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.376 qpair failed and we were unable to recover it. 00:29:10.376 [2024-07-25 15:25:02.333932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.376 [2024-07-25 15:25:02.333941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.376 qpair failed and we were unable to recover it. 
00:29:10.376 [2024-07-25 15:25:02.334480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.376 [2024-07-25 15:25:02.334509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.376 qpair failed and we were unable to recover it. 00:29:10.376 [2024-07-25 15:25:02.334962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.376 [2024-07-25 15:25:02.334972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.376 qpair failed and we were unable to recover it. 00:29:10.376 [2024-07-25 15:25:02.335516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.376 [2024-07-25 15:25:02.335545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.376 qpair failed and we were unable to recover it. 00:29:10.376 [2024-07-25 15:25:02.336011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.376 [2024-07-25 15:25:02.336020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.376 qpair failed and we were unable to recover it. 00:29:10.376 [2024-07-25 15:25:02.336592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.376 [2024-07-25 15:25:02.336620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.376 qpair failed and we were unable to recover it. 
00:29:10.376 [2024-07-25 15:25:02.337131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.376 [2024-07-25 15:25:02.337142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.376 qpair failed and we were unable to recover it. 00:29:10.376 [2024-07-25 15:25:02.337592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.376 [2024-07-25 15:25:02.337601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.376 qpair failed and we were unable to recover it. 00:29:10.376 [2024-07-25 15:25:02.338085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.376 [2024-07-25 15:25:02.338093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.376 qpair failed and we were unable to recover it. 00:29:10.376 [2024-07-25 15:25:02.338576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.376 [2024-07-25 15:25:02.338585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.376 qpair failed and we were unable to recover it. 00:29:10.376 [2024-07-25 15:25:02.339045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.376 [2024-07-25 15:25:02.339054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.376 qpair failed and we were unable to recover it. 
00:29:10.376 [2024-07-25 15:25:02.339602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.376 [2024-07-25 15:25:02.339631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.376 qpair failed and we were unable to recover it. 00:29:10.376 [2024-07-25 15:25:02.340091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.376 [2024-07-25 15:25:02.340101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.376 qpair failed and we were unable to recover it. 00:29:10.376 [2024-07-25 15:25:02.340662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.376 [2024-07-25 15:25:02.340691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.376 qpair failed and we were unable to recover it. 00:29:10.376 [2024-07-25 15:25:02.341159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.376 [2024-07-25 15:25:02.341169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.376 qpair failed and we were unable to recover it. 00:29:10.376 [2024-07-25 15:25:02.341708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.376 [2024-07-25 15:25:02.341737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.376 qpair failed and we were unable to recover it. 
00:29:10.376 [2024-07-25 15:25:02.342189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.376 [2024-07-25 15:25:02.342199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.376 qpair failed and we were unable to recover it. 00:29:10.376 [2024-07-25 15:25:02.342735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.376 [2024-07-25 15:25:02.342764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.376 qpair failed and we were unable to recover it. 00:29:10.376 [2024-07-25 15:25:02.343360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.376 [2024-07-25 15:25:02.343389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.376 qpair failed and we were unable to recover it. 00:29:10.376 [2024-07-25 15:25:02.343856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.376 [2024-07-25 15:25:02.343866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.376 qpair failed and we were unable to recover it. 00:29:10.376 [2024-07-25 15:25:02.344457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.376 [2024-07-25 15:25:02.344485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.376 qpair failed and we were unable to recover it. 
00:29:10.376 [2024-07-25 15:25:02.344752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.376 [2024-07-25 15:25:02.344761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.376 qpair failed and we were unable to recover it. 00:29:10.376 [2024-07-25 15:25:02.345217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.376 [2024-07-25 15:25:02.345226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.376 qpair failed and we were unable to recover it. 00:29:10.376 [2024-07-25 15:25:02.345682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.376 [2024-07-25 15:25:02.345690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.376 qpair failed and we were unable to recover it. 00:29:10.376 [2024-07-25 15:25:02.346177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.376 [2024-07-25 15:25:02.346185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.376 qpair failed and we were unable to recover it. 00:29:10.376 [2024-07-25 15:25:02.346646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.376 [2024-07-25 15:25:02.346655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.376 qpair failed and we were unable to recover it. 
00:29:10.376 [2024-07-25 15:25:02.347104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.376 [2024-07-25 15:25:02.347112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.376 qpair failed and we were unable to recover it. 00:29:10.376 [2024-07-25 15:25:02.347413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.376 [2024-07-25 15:25:02.347421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.376 qpair failed and we were unable to recover it. 00:29:10.377 [2024-07-25 15:25:02.347859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.377 [2024-07-25 15:25:02.347866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.377 qpair failed and we were unable to recover it. 00:29:10.377 [2024-07-25 15:25:02.348339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.377 [2024-07-25 15:25:02.348348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.377 qpair failed and we were unable to recover it. 00:29:10.377 [2024-07-25 15:25:02.348804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.377 [2024-07-25 15:25:02.348811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.377 qpair failed and we were unable to recover it. 
00:29:10.377 [2024-07-25 15:25:02.349261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.377 [2024-07-25 15:25:02.349269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.377 qpair failed and we were unable to recover it. 00:29:10.377 [2024-07-25 15:25:02.349501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.377 [2024-07-25 15:25:02.349514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.377 qpair failed and we were unable to recover it. 00:29:10.377 [2024-07-25 15:25:02.350009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.377 [2024-07-25 15:25:02.350018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.377 qpair failed and we were unable to recover it. 00:29:10.377 [2024-07-25 15:25:02.350467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.377 [2024-07-25 15:25:02.350475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.377 qpair failed and we were unable to recover it. 00:29:10.377 [2024-07-25 15:25:02.350833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.377 [2024-07-25 15:25:02.350842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.377 qpair failed and we were unable to recover it. 
00:29:10.377 [2024-07-25 15:25:02.351288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.377 [2024-07-25 15:25:02.351296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.377 qpair failed and we were unable to recover it. 00:29:10.377 [2024-07-25 15:25:02.351822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.377 [2024-07-25 15:25:02.351830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.377 qpair failed and we were unable to recover it. 00:29:10.377 [2024-07-25 15:25:02.352279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.377 [2024-07-25 15:25:02.352288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.377 qpair failed and we were unable to recover it. 00:29:10.377 [2024-07-25 15:25:02.352504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.377 [2024-07-25 15:25:02.352517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.377 qpair failed and we were unable to recover it. 00:29:10.377 [2024-07-25 15:25:02.352979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.377 [2024-07-25 15:25:02.352987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.377 qpair failed and we were unable to recover it. 
00:29:10.377 [2024-07-25 15:25:02.353465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.377 [2024-07-25 15:25:02.353473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.377 qpair failed and we were unable to recover it.
00:29:10.377 [2024-07-25 15:25:02.353924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.377 [2024-07-25 15:25:02.353931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.377 qpair failed and we were unable to recover it.
00:29:10.377 [2024-07-25 15:25:02.354376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.377 [2024-07-25 15:25:02.354384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.377 qpair failed and we were unable to recover it.
00:29:10.377 [2024-07-25 15:25:02.354825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.377 [2024-07-25 15:25:02.354833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.377 qpair failed and we were unable to recover it.
00:29:10.377 [2024-07-25 15:25:02.355303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.377 [2024-07-25 15:25:02.355312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.377 qpair failed and we were unable to recover it.
00:29:10.377 [2024-07-25 15:25:02.355761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.377 [2024-07-25 15:25:02.355769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.377 qpair failed and we were unable to recover it.
00:29:10.377 [2024-07-25 15:25:02.356224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.377 [2024-07-25 15:25:02.356233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.377 qpair failed and we were unable to recover it.
00:29:10.377 [2024-07-25 15:25:02.356573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.377 [2024-07-25 15:25:02.356581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.377 qpair failed and we were unable to recover it.
00:29:10.377 [2024-07-25 15:25:02.356926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.377 [2024-07-25 15:25:02.356935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.377 qpair failed and we were unable to recover it.
00:29:10.377 [2024-07-25 15:25:02.357383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.377 [2024-07-25 15:25:02.357391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.377 qpair failed and we were unable to recover it.
00:29:10.377 [2024-07-25 15:25:02.357843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.377 [2024-07-25 15:25:02.357851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.377 qpair failed and we were unable to recover it.
00:29:10.377 [2024-07-25 15:25:02.358298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.377 [2024-07-25 15:25:02.358306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.377 qpair failed and we were unable to recover it.
00:29:10.377 [2024-07-25 15:25:02.358775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.377 [2024-07-25 15:25:02.358783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.377 qpair failed and we were unable to recover it.
00:29:10.377 [2024-07-25 15:25:02.359177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.377 [2024-07-25 15:25:02.359185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.377 qpair failed and we were unable to recover it.
00:29:10.377 [2024-07-25 15:25:02.359642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.377 [2024-07-25 15:25:02.359650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.377 qpair failed and we were unable to recover it.
00:29:10.377 [2024-07-25 15:25:02.360098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.377 [2024-07-25 15:25:02.360106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.377 qpair failed and we were unable to recover it.
00:29:10.377 [2024-07-25 15:25:02.360473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.377 [2024-07-25 15:25:02.360482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.377 qpair failed and we were unable to recover it.
00:29:10.377 [2024-07-25 15:25:02.360936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.377 [2024-07-25 15:25:02.360944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.377 qpair failed and we were unable to recover it.
00:29:10.377 [2024-07-25 15:25:02.361477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.377 [2024-07-25 15:25:02.361507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.377 qpair failed and we were unable to recover it.
00:29:10.377 [2024-07-25 15:25:02.361966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.377 [2024-07-25 15:25:02.361976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.377 qpair failed and we were unable to recover it.
00:29:10.377 [2024-07-25 15:25:02.362556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.377 [2024-07-25 15:25:02.362584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.377 qpair failed and we were unable to recover it.
00:29:10.377 [2024-07-25 15:25:02.363048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.377 [2024-07-25 15:25:02.363058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.377 qpair failed and we were unable to recover it.
00:29:10.377 [2024-07-25 15:25:02.363599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.377 [2024-07-25 15:25:02.363628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.377 qpair failed and we were unable to recover it.
00:29:10.377 [2024-07-25 15:25:02.364091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.378 [2024-07-25 15:25:02.364101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.378 qpair failed and we were unable to recover it.
00:29:10.378 [2024-07-25 15:25:02.364563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.378 [2024-07-25 15:25:02.364573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.378 qpair failed and we were unable to recover it.
00:29:10.378 [2024-07-25 15:25:02.365030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.378 [2024-07-25 15:25:02.365039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.378 qpair failed and we were unable to recover it.
00:29:10.378 [2024-07-25 15:25:02.365581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.378 [2024-07-25 15:25:02.365609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.378 qpair failed and we were unable to recover it.
00:29:10.378 [2024-07-25 15:25:02.366069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.378 [2024-07-25 15:25:02.366079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.378 qpair failed and we were unable to recover it.
00:29:10.378 [2024-07-25 15:25:02.366648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.378 [2024-07-25 15:25:02.366678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.378 qpair failed and we were unable to recover it.
00:29:10.378 [2024-07-25 15:25:02.367141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.378 [2024-07-25 15:25:02.367150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.378 qpair failed and we were unable to recover it.
00:29:10.378 [2024-07-25 15:25:02.367695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.378 [2024-07-25 15:25:02.367723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.378 qpair failed and we were unable to recover it.
00:29:10.378 [2024-07-25 15:25:02.368185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.378 [2024-07-25 15:25:02.368195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.378 qpair failed and we were unable to recover it.
00:29:10.378 [2024-07-25 15:25:02.368761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.378 [2024-07-25 15:25:02.368791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.378 qpair failed and we were unable to recover it.
00:29:10.378 [2024-07-25 15:25:02.369359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.378 [2024-07-25 15:25:02.369388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.378 qpair failed and we were unable to recover it.
00:29:10.378 [2024-07-25 15:25:02.369839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.378 [2024-07-25 15:25:02.369849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.378 qpair failed and we were unable to recover it.
00:29:10.378 [2024-07-25 15:25:02.370397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.378 [2024-07-25 15:25:02.370426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.378 qpair failed and we were unable to recover it.
00:29:10.378 [2024-07-25 15:25:02.370900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.378 [2024-07-25 15:25:02.370911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.378 qpair failed and we were unable to recover it.
00:29:10.378 [2024-07-25 15:25:02.371455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.378 [2024-07-25 15:25:02.371484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.378 qpair failed and we were unable to recover it.
00:29:10.378 [2024-07-25 15:25:02.371945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.378 [2024-07-25 15:25:02.371958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.378 qpair failed and we were unable to recover it.
00:29:10.378 [2024-07-25 15:25:02.372510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.378 [2024-07-25 15:25:02.372538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.378 qpair failed and we were unable to recover it.
00:29:10.378 [2024-07-25 15:25:02.373012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.378 [2024-07-25 15:25:02.373021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.378 qpair failed and we were unable to recover it.
00:29:10.378 [2024-07-25 15:25:02.373571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.378 [2024-07-25 15:25:02.373600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.378 qpair failed and we were unable to recover it.
00:29:10.378 [2024-07-25 15:25:02.374060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.378 [2024-07-25 15:25:02.374070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.378 qpair failed and we were unable to recover it.
00:29:10.378 [2024-07-25 15:25:02.374597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.378 [2024-07-25 15:25:02.374627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.378 qpair failed and we were unable to recover it.
00:29:10.378 [2024-07-25 15:25:02.375101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.378 [2024-07-25 15:25:02.375112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.378 qpair failed and we were unable to recover it.
00:29:10.378 [2024-07-25 15:25:02.375613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.378 [2024-07-25 15:25:02.375642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.378 qpair failed and we were unable to recover it.
00:29:10.378 [2024-07-25 15:25:02.376106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.378 [2024-07-25 15:25:02.376116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.378 qpair failed and we were unable to recover it.
00:29:10.378 [2024-07-25 15:25:02.376474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.378 [2024-07-25 15:25:02.376482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.378 qpair failed and we were unable to recover it.
00:29:10.378 [2024-07-25 15:25:02.376941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.378 [2024-07-25 15:25:02.376949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.378 qpair failed and we were unable to recover it.
00:29:10.378 [2024-07-25 15:25:02.377488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.378 [2024-07-25 15:25:02.377517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.378 qpair failed and we were unable to recover it.
00:29:10.378 [2024-07-25 15:25:02.377975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.378 [2024-07-25 15:25:02.377984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.378 qpair failed and we were unable to recover it.
00:29:10.378 [2024-07-25 15:25:02.378522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.378 [2024-07-25 15:25:02.378550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.378 qpair failed and we were unable to recover it.
00:29:10.378 [2024-07-25 15:25:02.379029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.378 [2024-07-25 15:25:02.379039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.378 qpair failed and we were unable to recover it.
00:29:10.378 [2024-07-25 15:25:02.379629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.378 [2024-07-25 15:25:02.379658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.378 qpair failed and we were unable to recover it.
00:29:10.378 [2024-07-25 15:25:02.380122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.378 [2024-07-25 15:25:02.380131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.378 qpair failed and we were unable to recover it.
00:29:10.378 [2024-07-25 15:25:02.380507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.378 [2024-07-25 15:25:02.380536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.378 qpair failed and we were unable to recover it.
00:29:10.378 [2024-07-25 15:25:02.380984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.378 [2024-07-25 15:25:02.380994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.378 qpair failed and we were unable to recover it.
00:29:10.378 [2024-07-25 15:25:02.381563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.378 [2024-07-25 15:25:02.381592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.378 qpair failed and we were unable to recover it.
00:29:10.378 [2024-07-25 15:25:02.382056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.379 [2024-07-25 15:25:02.382066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.379 qpair failed and we were unable to recover it.
00:29:10.379 [2024-07-25 15:25:02.382613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.379 [2024-07-25 15:25:02.382642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.379 qpair failed and we were unable to recover it.
00:29:10.379 [2024-07-25 15:25:02.383118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.379 [2024-07-25 15:25:02.383128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.379 qpair failed and we were unable to recover it.
00:29:10.379 [2024-07-25 15:25:02.383670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.379 [2024-07-25 15:25:02.383699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.379 qpair failed and we were unable to recover it.
00:29:10.379 [2024-07-25 15:25:02.384160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.379 [2024-07-25 15:25:02.384170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.379 qpair failed and we were unable to recover it.
00:29:10.379 [2024-07-25 15:25:02.384701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.379 [2024-07-25 15:25:02.384730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.379 qpair failed and we were unable to recover it.
00:29:10.379 [2024-07-25 15:25:02.385178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.379 [2024-07-25 15:25:02.385188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.379 qpair failed and we were unable to recover it.
00:29:10.379 [2024-07-25 15:25:02.385522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.379 [2024-07-25 15:25:02.385550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.379 qpair failed and we were unable to recover it.
00:29:10.379 [2024-07-25 15:25:02.386100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.379 [2024-07-25 15:25:02.386110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.379 qpair failed and we were unable to recover it.
00:29:10.379 [2024-07-25 15:25:02.386664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.379 [2024-07-25 15:25:02.386673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.379 qpair failed and we were unable to recover it.
00:29:10.379 [2024-07-25 15:25:02.387176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.379 [2024-07-25 15:25:02.387185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.379 qpair failed and we were unable to recover it.
00:29:10.379 [2024-07-25 15:25:02.387733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.379 [2024-07-25 15:25:02.387762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.379 qpair failed and we were unable to recover it.
00:29:10.379 [2024-07-25 15:25:02.388229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.379 [2024-07-25 15:25:02.388250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.379 qpair failed and we were unable to recover it.
00:29:10.379 [2024-07-25 15:25:02.388783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.379 [2024-07-25 15:25:02.388792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.379 qpair failed and we were unable to recover it.
00:29:10.379 [2024-07-25 15:25:02.389397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.379 [2024-07-25 15:25:02.389426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.379 qpair failed and we were unable to recover it.
00:29:10.379 [2024-07-25 15:25:02.389887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.379 [2024-07-25 15:25:02.389896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.379 qpair failed and we were unable to recover it.
00:29:10.379 [2024-07-25 15:25:02.390352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.379 [2024-07-25 15:25:02.390361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.379 qpair failed and we were unable to recover it.
00:29:10.379 [2024-07-25 15:25:02.390820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.379 [2024-07-25 15:25:02.390828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.379 qpair failed and we were unable to recover it.
00:29:10.379 [2024-07-25 15:25:02.391323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.379 [2024-07-25 15:25:02.391331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.379 qpair failed and we were unable to recover it.
00:29:10.379 [2024-07-25 15:25:02.391788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.379 [2024-07-25 15:25:02.391796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.379 qpair failed and we were unable to recover it.
00:29:10.379 [2024-07-25 15:25:02.392247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.379 [2024-07-25 15:25:02.392259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.379 qpair failed and we were unable to recover it.
00:29:10.379 [2024-07-25 15:25:02.392720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.379 [2024-07-25 15:25:02.392727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.379 qpair failed and we were unable to recover it.
00:29:10.379 [2024-07-25 15:25:02.393198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.379 [2024-07-25 15:25:02.393209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.379 qpair failed and we were unable to recover it.
00:29:10.379 [2024-07-25 15:25:02.393673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.379 [2024-07-25 15:25:02.393681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.379 qpair failed and we were unable to recover it.
00:29:10.379 [2024-07-25 15:25:02.394127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.379 [2024-07-25 15:25:02.394136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.379 qpair failed and we were unable to recover it.
00:29:10.379 [2024-07-25 15:25:02.394669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.379 [2024-07-25 15:25:02.394678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.379 qpair failed and we were unable to recover it.
00:29:10.379 [2024-07-25 15:25:02.395146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.379 [2024-07-25 15:25:02.395154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.379 qpair failed and we were unable to recover it.
00:29:10.379 [2024-07-25 15:25:02.395694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.379 [2024-07-25 15:25:02.395724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.379 qpair failed and we were unable to recover it.
00:29:10.379 [2024-07-25 15:25:02.396182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.379 [2024-07-25 15:25:02.396193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.379 qpair failed and we were unable to recover it.
00:29:10.379 [2024-07-25 15:25:02.396753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.379 [2024-07-25 15:25:02.396782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.379 qpair failed and we were unable to recover it.
00:29:10.379 [2024-07-25 15:25:02.397415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.379 [2024-07-25 15:25:02.397444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.379 qpair failed and we were unable to recover it.
00:29:10.379 [2024-07-25 15:25:02.397980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.379 [2024-07-25 15:25:02.397990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.379 qpair failed and we were unable to recover it.
00:29:10.379 [2024-07-25 15:25:02.398522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.379 [2024-07-25 15:25:02.398551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.379 qpair failed and we were unable to recover it.
00:29:10.379 [2024-07-25 15:25:02.399056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.379 [2024-07-25 15:25:02.399066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.379 qpair failed and we were unable to recover it.
00:29:10.379 [2024-07-25 15:25:02.399606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.379 [2024-07-25 15:25:02.399635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.379 qpair failed and we were unable to recover it.
00:29:10.379 [2024-07-25 15:25:02.399981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.379 [2024-07-25 15:25:02.399991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.379 qpair failed and we were unable to recover it.
00:29:10.379 [2024-07-25 15:25:02.400557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.380 [2024-07-25 15:25:02.400586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.380 qpair failed and we were unable to recover it.
00:29:10.380 [2024-07-25 15:25:02.401049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.380 [2024-07-25 15:25:02.401058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.380 qpair failed and we were unable to recover it.
00:29:10.380 [2024-07-25 15:25:02.401610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.380 [2024-07-25 15:25:02.401639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.380 qpair failed and we were unable to recover it.
00:29:10.380 [2024-07-25 15:25:02.401972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.380 [2024-07-25 15:25:02.401983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.380 qpair failed and we were unable to recover it.
00:29:10.380 [2024-07-25 15:25:02.402489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.380 [2024-07-25 15:25:02.402518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.380 qpair failed and we were unable to recover it.
00:29:10.380 [2024-07-25 15:25:02.402981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.380 [2024-07-25 15:25:02.402991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.380 qpair failed and we were unable to recover it.
00:29:10.380 [2024-07-25 15:25:02.403574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.380 [2024-07-25 15:25:02.403602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.380 qpair failed and we were unable to recover it.
00:29:10.380 [2024-07-25 15:25:02.404063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.380 [2024-07-25 15:25:02.404073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.380 qpair failed and we were unable to recover it.
00:29:10.380 [2024-07-25 15:25:02.404626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.380 [2024-07-25 15:25:02.404655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.380 qpair failed and we were unable to recover it.
00:29:10.380 [2024-07-25 15:25:02.405160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.380 [2024-07-25 15:25:02.405170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.380 qpair failed and we were unable to recover it.
00:29:10.380 [2024-07-25 15:25:02.405712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.380 [2024-07-25 15:25:02.405741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.380 qpair failed and we were unable to recover it.
00:29:10.380 [2024-07-25 15:25:02.406208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.380 [2024-07-25 15:25:02.406223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.380 qpair failed and we were unable to recover it.
00:29:10.380 [2024-07-25 15:25:02.406537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.380 [2024-07-25 15:25:02.406565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.380 qpair failed and we were unable to recover it.
00:29:10.380 [2024-07-25 15:25:02.407027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.380 [2024-07-25 15:25:02.407037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.380 qpair failed and we were unable to recover it.
00:29:10.380 [2024-07-25 15:25:02.407598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.380 [2024-07-25 15:25:02.407627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.380 qpair failed and we were unable to recover it.
00:29:10.380 [2024-07-25 15:25:02.408098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.380 [2024-07-25 15:25:02.408108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.380 qpair failed and we were unable to recover it.
00:29:10.380 [2024-07-25 15:25:02.408655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.380 [2024-07-25 15:25:02.408684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.380 qpair failed and we were unable to recover it.
00:29:10.380 [2024-07-25 15:25:02.409148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.380 [2024-07-25 15:25:02.409158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.380 qpair failed and we were unable to recover it.
00:29:10.380 [2024-07-25 15:25:02.409523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.380 [2024-07-25 15:25:02.409551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.380 qpair failed and we were unable to recover it.
00:29:10.380 [2024-07-25 15:25:02.410024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.380 [2024-07-25 15:25:02.410033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.380 qpair failed and we were unable to recover it.
00:29:10.380 [2024-07-25 15:25:02.410253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.380 [2024-07-25 15:25:02.410265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.380 qpair failed and we were unable to recover it.
00:29:10.380 [2024-07-25 15:25:02.410700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.380 [2024-07-25 15:25:02.410710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.380 qpair failed and we were unable to recover it.
00:29:10.380 [2024-07-25 15:25:02.411142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.380 [2024-07-25 15:25:02.411151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.380 qpair failed and we were unable to recover it.
00:29:10.380 [2024-07-25 15:25:02.411673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.380 [2024-07-25 15:25:02.411703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.380 qpair failed and we were unable to recover it.
00:29:10.380 [2024-07-25 15:25:02.412171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.380 [2024-07-25 15:25:02.412180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.380 qpair failed and we were unable to recover it.
00:29:10.380 [2024-07-25 15:25:02.412763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.380 [2024-07-25 15:25:02.412792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.380 qpair failed and we were unable to recover it.
00:29:10.380 [2024-07-25 15:25:02.413420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.380 [2024-07-25 15:25:02.413449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.380 qpair failed and we were unable to recover it.
00:29:10.380 [2024-07-25 15:25:02.413909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.380 [2024-07-25 15:25:02.413918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.380 qpair failed and we were unable to recover it.
00:29:10.380 [2024-07-25 15:25:02.414513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.380 [2024-07-25 15:25:02.414542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.380 qpair failed and we were unable to recover it.
00:29:10.380 [2024-07-25 15:25:02.415003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.380 [2024-07-25 15:25:02.415014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.380 qpair failed and we were unable to recover it.
00:29:10.380 [2024-07-25 15:25:02.415559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.380 [2024-07-25 15:25:02.415588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.380 qpair failed and we were unable to recover it.
00:29:10.380 [2024-07-25 15:25:02.416050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.380 [2024-07-25 15:25:02.416060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.380 qpair failed and we were unable to recover it.
00:29:10.380 [2024-07-25 15:25:02.416512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.380 [2024-07-25 15:25:02.416540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.380 qpair failed and we were unable to recover it.
00:29:10.380 [2024-07-25 15:25:02.417114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.380 [2024-07-25 15:25:02.417123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.380 qpair failed and we were unable to recover it.
00:29:10.380 [2024-07-25 15:25:02.417671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.380 [2024-07-25 15:25:02.417699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.380 qpair failed and we were unable to recover it.
00:29:10.380 [2024-07-25 15:25:02.418054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.380 [2024-07-25 15:25:02.418063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.380 qpair failed and we were unable to recover it.
00:29:10.381 [2024-07-25 15:25:02.418502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.381 [2024-07-25 15:25:02.418531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.381 qpair failed and we were unable to recover it.
00:29:10.381 [2024-07-25 15:25:02.418997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.381 [2024-07-25 15:25:02.419006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.381 qpair failed and we were unable to recover it.
00:29:10.381 [2024-07-25 15:25:02.419587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.381 [2024-07-25 15:25:02.419616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.381 qpair failed and we were unable to recover it.
00:29:10.381 [2024-07-25 15:25:02.420087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.381 [2024-07-25 15:25:02.420097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.381 qpair failed and we were unable to recover it.
00:29:10.381 [2024-07-25 15:25:02.420561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.381 [2024-07-25 15:25:02.420570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.381 qpair failed and we were unable to recover it.
00:29:10.381 [2024-07-25 15:25:02.421029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.381 [2024-07-25 15:25:02.421037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.381 qpair failed and we were unable to recover it.
00:29:10.381 [2024-07-25 15:25:02.421591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.381 [2024-07-25 15:25:02.421620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.381 qpair failed and we were unable to recover it.
00:29:10.381 [2024-07-25 15:25:02.422060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.381 [2024-07-25 15:25:02.422070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.381 qpair failed and we were unable to recover it.
00:29:10.381 [2024-07-25 15:25:02.422553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.381 [2024-07-25 15:25:02.422581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.381 qpair failed and we were unable to recover it.
00:29:10.381 [2024-07-25 15:25:02.423064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.381 [2024-07-25 15:25:02.423075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.381 qpair failed and we were unable to recover it.
00:29:10.381 [2024-07-25 15:25:02.423512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.381 [2024-07-25 15:25:02.423541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.381 qpair failed and we were unable to recover it.
00:29:10.381 [2024-07-25 15:25:02.424006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.381 [2024-07-25 15:25:02.424016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.381 qpair failed and we were unable to recover it.
00:29:10.381 [2024-07-25 15:25:02.424565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.381 [2024-07-25 15:25:02.424594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.381 qpair failed and we were unable to recover it.
00:29:10.381 [2024-07-25 15:25:02.425057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.381 [2024-07-25 15:25:02.425067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.381 qpair failed and we were unable to recover it.
00:29:10.381 [2024-07-25 15:25:02.425618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.381 [2024-07-25 15:25:02.425647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.381 qpair failed and we were unable to recover it.
00:29:10.381 [2024-07-25 15:25:02.426115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.381 [2024-07-25 15:25:02.426129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.381 qpair failed and we were unable to recover it.
00:29:10.381 [2024-07-25 15:25:02.426661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.381 [2024-07-25 15:25:02.426690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.381 qpair failed and we were unable to recover it.
00:29:10.381 [2024-07-25 15:25:02.427152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.381 [2024-07-25 15:25:02.427163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.381 qpair failed and we were unable to recover it.
00:29:10.381 [2024-07-25 15:25:02.427711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.381 [2024-07-25 15:25:02.427740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.381 qpair failed and we were unable to recover it.
00:29:10.381 [2024-07-25 15:25:02.428208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.381 [2024-07-25 15:25:02.428219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.381 qpair failed and we were unable to recover it.
00:29:10.381 [2024-07-25 15:25:02.428748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.381 [2024-07-25 15:25:02.428777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.381 qpair failed and we were unable to recover it.
00:29:10.381 [2024-07-25 15:25:02.429403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.381 [2024-07-25 15:25:02.429432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.381 qpair failed and we were unable to recover it.
00:29:10.381 [2024-07-25 15:25:02.429911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.381 [2024-07-25 15:25:02.429921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.381 qpair failed and we were unable to recover it.
00:29:10.381 [2024-07-25 15:25:02.430477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.381 [2024-07-25 15:25:02.430506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.381 qpair failed and we were unable to recover it.
00:29:10.381 [2024-07-25 15:25:02.430968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.381 [2024-07-25 15:25:02.430978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.381 qpair failed and we were unable to recover it.
00:29:10.381 [2024-07-25 15:25:02.431588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.381 [2024-07-25 15:25:02.431618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.381 qpair failed and we were unable to recover it.
00:29:10.381 [2024-07-25 15:25:02.432094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.381 [2024-07-25 15:25:02.432104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.381 qpair failed and we were unable to recover it.
00:29:10.381 [2024-07-25 15:25:02.432473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.381 [2024-07-25 15:25:02.432483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.381 qpair failed and we were unable to recover it.
00:29:10.381 [2024-07-25 15:25:02.432936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.381 [2024-07-25 15:25:02.432943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.381 qpair failed and we were unable to recover it.
00:29:10.381 [2024-07-25 15:25:02.433502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.381 [2024-07-25 15:25:02.433531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.381 qpair failed and we were unable to recover it.
00:29:10.381 [2024-07-25 15:25:02.434004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.381 [2024-07-25 15:25:02.434014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.381 qpair failed and we were unable to recover it.
00:29:10.381 [2024-07-25 15:25:02.434572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.382 [2024-07-25 15:25:02.434601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.382 qpair failed and we were unable to recover it.
00:29:10.382 [2024-07-25 15:25:02.435066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.382 [2024-07-25 15:25:02.435075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.382 qpair failed and we were unable to recover it.
00:29:10.382 [2024-07-25 15:25:02.435627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.382 [2024-07-25 15:25:02.435656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.382 qpair failed and we were unable to recover it.
00:29:10.382 [2024-07-25 15:25:02.436119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.382 [2024-07-25 15:25:02.436129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.382 qpair failed and we were unable to recover it.
00:29:10.382 [2024-07-25 15:25:02.436664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.382 [2024-07-25 15:25:02.436693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.382 qpair failed and we were unable to recover it.
00:29:10.382 [2024-07-25 15:25:02.437163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.382 [2024-07-25 15:25:02.437172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.382 qpair failed and we were unable to recover it.
00:29:10.382 [2024-07-25 15:25:02.437716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.382 [2024-07-25 15:25:02.437746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.382 qpair failed and we were unable to recover it.
00:29:10.382 [2024-07-25 15:25:02.438213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.382 [2024-07-25 15:25:02.438224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.382 qpair failed and we were unable to recover it.
00:29:10.382 [2024-07-25 15:25:02.438716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.382 [2024-07-25 15:25:02.438725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.382 qpair failed and we were unable to recover it.
00:29:10.382 [2024-07-25 15:25:02.439182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.382 [2024-07-25 15:25:02.439190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.382 qpair failed and we were unable to recover it.
00:29:10.382 [2024-07-25 15:25:02.439648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.382 [2024-07-25 15:25:02.439656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.382 qpair failed and we were unable to recover it.
00:29:10.382 [2024-07-25 15:25:02.440139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.382 [2024-07-25 15:25:02.440147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.382 qpair failed and we were unable to recover it.
00:29:10.382 [2024-07-25 15:25:02.440523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.382 [2024-07-25 15:25:02.440552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.382 qpair failed and we were unable to recover it.
00:29:10.382 [2024-07-25 15:25:02.441020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.382 [2024-07-25 15:25:02.441030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.382 qpair failed and we were unable to recover it.
00:29:10.382 [2024-07-25 15:25:02.441582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.382 [2024-07-25 15:25:02.441611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.382 qpair failed and we were unable to recover it.
00:29:10.382 [2024-07-25 15:25:02.442093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.382 [2024-07-25 15:25:02.442104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.382 qpair failed and we were unable to recover it.
00:29:10.382 [2024-07-25 15:25:02.442576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.382 [2024-07-25 15:25:02.442584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.382 qpair failed and we were unable to recover it.
00:29:10.382 [2024-07-25 15:25:02.443078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.382 [2024-07-25 15:25:02.443086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.382 qpair failed and we were unable to recover it.
00:29:10.382 [2024-07-25 15:25:02.443530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.382 [2024-07-25 15:25:02.443538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.382 qpair failed and we were unable to recover it.
00:29:10.382 [2024-07-25 15:25:02.444015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.382 [2024-07-25 15:25:02.444023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.382 qpair failed and we were unable to recover it.
00:29:10.382 [2024-07-25 15:25:02.444484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.382 [2024-07-25 15:25:02.444514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.382 qpair failed and we were unable to recover it.
00:29:10.382 [2024-07-25 15:25:02.444744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.382 [2024-07-25 15:25:02.444757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.382 qpair failed and we were unable to recover it.
00:29:10.382 [2024-07-25 15:25:02.445273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.382 [2024-07-25 15:25:02.445282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.382 qpair failed and we were unable to recover it.
00:29:10.382 [2024-07-25 15:25:02.445770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.382 [2024-07-25 15:25:02.445778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.382 qpair failed and we were unable to recover it.
00:29:10.382 [2024-07-25 15:25:02.446001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.382 [2024-07-25 15:25:02.446016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.382 qpair failed and we were unable to recover it.
00:29:10.382 [2024-07-25 15:25:02.446377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.382 [2024-07-25 15:25:02.446386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.382 qpair failed and we were unable to recover it.
00:29:10.382 [2024-07-25 15:25:02.446841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.382 [2024-07-25 15:25:02.446849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.382 qpair failed and we were unable to recover it.
00:29:10.382 [2024-07-25 15:25:02.447337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.382 [2024-07-25 15:25:02.447345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.382 qpair failed and we were unable to recover it.
00:29:10.382 [2024-07-25 15:25:02.447795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.382 [2024-07-25 15:25:02.447803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.382 qpair failed and we were unable to recover it.
00:29:10.382 [2024-07-25 15:25:02.448125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.382 [2024-07-25 15:25:02.448134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.382 qpair failed and we were unable to recover it. 00:29:10.382 [2024-07-25 15:25:02.448606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.382 [2024-07-25 15:25:02.448615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.382 qpair failed and we were unable to recover it. 00:29:10.382 [2024-07-25 15:25:02.449089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.382 [2024-07-25 15:25:02.449097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.382 qpair failed and we were unable to recover it. 00:29:10.382 [2024-07-25 15:25:02.449218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.382 [2024-07-25 15:25:02.449229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.382 qpair failed and we were unable to recover it. 00:29:10.382 [2024-07-25 15:25:02.449686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.382 [2024-07-25 15:25:02.449695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.382 qpair failed and we were unable to recover it. 
00:29:10.382 [2024-07-25 15:25:02.450145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.382 [2024-07-25 15:25:02.450153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.382 qpair failed and we were unable to recover it. 00:29:10.382 [2024-07-25 15:25:02.450696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.382 [2024-07-25 15:25:02.450704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.383 qpair failed and we were unable to recover it. 00:29:10.383 [2024-07-25 15:25:02.451147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.383 [2024-07-25 15:25:02.451155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.383 qpair failed and we were unable to recover it. 00:29:10.383 [2024-07-25 15:25:02.451614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.383 [2024-07-25 15:25:02.451642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.383 qpair failed and we were unable to recover it. 00:29:10.383 [2024-07-25 15:25:02.452001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.383 [2024-07-25 15:25:02.452010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.383 qpair failed and we were unable to recover it. 
00:29:10.383 [2024-07-25 15:25:02.452482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.383 [2024-07-25 15:25:02.452511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.383 qpair failed and we were unable to recover it. 00:29:10.383 [2024-07-25 15:25:02.452801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.383 [2024-07-25 15:25:02.452811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.383 qpair failed and we were unable to recover it. 00:29:10.383 [2024-07-25 15:25:02.453273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.383 [2024-07-25 15:25:02.453282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.383 qpair failed and we were unable to recover it. 00:29:10.383 [2024-07-25 15:25:02.453638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.383 [2024-07-25 15:25:02.453646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.383 qpair failed and we were unable to recover it. 00:29:10.383 [2024-07-25 15:25:02.454005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.383 [2024-07-25 15:25:02.454013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.383 qpair failed and we were unable to recover it. 
00:29:10.383 [2024-07-25 15:25:02.454390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.383 [2024-07-25 15:25:02.454399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.383 qpair failed and we were unable to recover it. 00:29:10.383 [2024-07-25 15:25:02.454847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.383 [2024-07-25 15:25:02.454855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.383 qpair failed and we were unable to recover it. 00:29:10.383 [2024-07-25 15:25:02.455194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.383 [2024-07-25 15:25:02.455206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.383 qpair failed and we were unable to recover it. 00:29:10.383 [2024-07-25 15:25:02.455689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.383 [2024-07-25 15:25:02.455697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.383 qpair failed and we were unable to recover it. 00:29:10.383 [2024-07-25 15:25:02.456155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.383 [2024-07-25 15:25:02.456164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.383 qpair failed and we were unable to recover it. 
00:29:10.383 [2024-07-25 15:25:02.456529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.383 [2024-07-25 15:25:02.456558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.383 qpair failed and we were unable to recover it. 00:29:10.383 [2024-07-25 15:25:02.457020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.383 [2024-07-25 15:25:02.457030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.383 qpair failed and we were unable to recover it. 00:29:10.383 [2024-07-25 15:25:02.457581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.383 [2024-07-25 15:25:02.457611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.383 qpair failed and we were unable to recover it. 00:29:10.383 [2024-07-25 15:25:02.458120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.383 [2024-07-25 15:25:02.458130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.383 qpair failed and we were unable to recover it. 00:29:10.383 [2024-07-25 15:25:02.458596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.383 [2024-07-25 15:25:02.458605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.383 qpair failed and we were unable to recover it. 
00:29:10.383 [2024-07-25 15:25:02.459060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.383 [2024-07-25 15:25:02.459068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.383 qpair failed and we were unable to recover it. 00:29:10.383 [2024-07-25 15:25:02.459563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.383 [2024-07-25 15:25:02.459592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.383 qpair failed and we were unable to recover it. 00:29:10.383 [2024-07-25 15:25:02.459978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.383 [2024-07-25 15:25:02.459988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.383 qpair failed and we were unable to recover it. 00:29:10.383 [2024-07-25 15:25:02.460556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.383 [2024-07-25 15:25:02.460585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.383 qpair failed and we were unable to recover it. 00:29:10.383 [2024-07-25 15:25:02.461047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.383 [2024-07-25 15:25:02.461057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.383 qpair failed and we were unable to recover it. 
00:29:10.383 [2024-07-25 15:25:02.461494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.383 [2024-07-25 15:25:02.461522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.383 qpair failed and we were unable to recover it. 00:29:10.383 [2024-07-25 15:25:02.461986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.383 [2024-07-25 15:25:02.461996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.383 qpair failed and we were unable to recover it. 00:29:10.383 [2024-07-25 15:25:02.462492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.383 [2024-07-25 15:25:02.462522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.383 qpair failed and we were unable to recover it. 00:29:10.383 [2024-07-25 15:25:02.462985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.383 [2024-07-25 15:25:02.462995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.383 qpair failed and we were unable to recover it. 00:29:10.383 [2024-07-25 15:25:02.463458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.383 [2024-07-25 15:25:02.463487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.383 qpair failed and we were unable to recover it. 
00:29:10.383 [2024-07-25 15:25:02.464028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.383 [2024-07-25 15:25:02.464041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.383 qpair failed and we were unable to recover it. 00:29:10.383 [2024-07-25 15:25:02.464514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.383 [2024-07-25 15:25:02.464543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.383 qpair failed and we were unable to recover it. 00:29:10.383 [2024-07-25 15:25:02.465004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.383 [2024-07-25 15:25:02.465014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.383 qpair failed and we were unable to recover it. 00:29:10.383 [2024-07-25 15:25:02.465589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.383 [2024-07-25 15:25:02.465618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.383 qpair failed and we were unable to recover it. 00:29:10.383 [2024-07-25 15:25:02.466062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.383 [2024-07-25 15:25:02.466072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.383 qpair failed and we were unable to recover it. 
00:29:10.383 [2024-07-25 15:25:02.466615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.383 [2024-07-25 15:25:02.466644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.383 qpair failed and we were unable to recover it. 00:29:10.383 [2024-07-25 15:25:02.467104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.383 [2024-07-25 15:25:02.467114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.383 qpair failed and we were unable to recover it. 00:29:10.383 [2024-07-25 15:25:02.467742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.384 [2024-07-25 15:25:02.467772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.384 qpair failed and we were unable to recover it. 00:29:10.384 [2024-07-25 15:25:02.468409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.384 [2024-07-25 15:25:02.468438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.384 qpair failed and we were unable to recover it. 00:29:10.384 [2024-07-25 15:25:02.468840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.384 [2024-07-25 15:25:02.468849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.384 qpair failed and we were unable to recover it. 
00:29:10.384 [2024-07-25 15:25:02.469398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.384 [2024-07-25 15:25:02.469426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.384 qpair failed and we were unable to recover it. 00:29:10.384 [2024-07-25 15:25:02.469910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.384 [2024-07-25 15:25:02.469920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.384 qpair failed and we were unable to recover it. 00:29:10.384 [2024-07-25 15:25:02.470422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.384 [2024-07-25 15:25:02.470451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.384 qpair failed and we were unable to recover it. 00:29:10.384 [2024-07-25 15:25:02.470914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.384 [2024-07-25 15:25:02.470924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.384 qpair failed and we were unable to recover it. 00:29:10.384 [2024-07-25 15:25:02.471498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.384 [2024-07-25 15:25:02.471527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.384 qpair failed and we were unable to recover it. 
00:29:10.384 [2024-07-25 15:25:02.472008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.384 [2024-07-25 15:25:02.472017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.384 qpair failed and we were unable to recover it. 00:29:10.384 [2024-07-25 15:25:02.472550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.384 [2024-07-25 15:25:02.472579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.384 qpair failed and we were unable to recover it. 00:29:10.384 [2024-07-25 15:25:02.473043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.384 [2024-07-25 15:25:02.473053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.384 qpair failed and we were unable to recover it. 00:29:10.384 [2024-07-25 15:25:02.473633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.384 [2024-07-25 15:25:02.473664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.384 qpair failed and we were unable to recover it. 00:29:10.384 [2024-07-25 15:25:02.474024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.384 [2024-07-25 15:25:02.474034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.384 qpair failed and we were unable to recover it. 
00:29:10.384 [2024-07-25 15:25:02.474565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.384 [2024-07-25 15:25:02.474594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.384 qpair failed and we were unable to recover it. 00:29:10.384 [2024-07-25 15:25:02.475059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.384 [2024-07-25 15:25:02.475068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.384 qpair failed and we were unable to recover it. 00:29:10.384 [2024-07-25 15:25:02.475621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.384 [2024-07-25 15:25:02.475650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.384 qpair failed and we were unable to recover it. 00:29:10.384 [2024-07-25 15:25:02.476096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.384 [2024-07-25 15:25:02.476106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.384 qpair failed and we were unable to recover it. 00:29:10.384 [2024-07-25 15:25:02.476658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.384 [2024-07-25 15:25:02.476687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.384 qpair failed and we were unable to recover it. 
00:29:10.384 [2024-07-25 15:25:02.477151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.384 [2024-07-25 15:25:02.477160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.384 qpair failed and we were unable to recover it. 00:29:10.384 [2024-07-25 15:25:02.477721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.384 [2024-07-25 15:25:02.477750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.384 qpair failed and we were unable to recover it. 00:29:10.384 [2024-07-25 15:25:02.478116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.384 [2024-07-25 15:25:02.478127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.384 qpair failed and we were unable to recover it. 00:29:10.384 [2024-07-25 15:25:02.478578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.384 [2024-07-25 15:25:02.478587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.384 qpair failed and we were unable to recover it. 00:29:10.384 [2024-07-25 15:25:02.479086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.384 [2024-07-25 15:25:02.479093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.384 qpair failed and we were unable to recover it. 
00:29:10.384 [2024-07-25 15:25:02.479575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.384 [2024-07-25 15:25:02.479584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.384 qpair failed and we were unable to recover it. 00:29:10.384 [2024-07-25 15:25:02.479942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.384 [2024-07-25 15:25:02.479950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.384 qpair failed and we were unable to recover it. 00:29:10.384 [2024-07-25 15:25:02.480543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.384 [2024-07-25 15:25:02.480572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.384 qpair failed and we were unable to recover it. 00:29:10.384 [2024-07-25 15:25:02.481016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.384 [2024-07-25 15:25:02.481026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.384 qpair failed and we were unable to recover it. 00:29:10.384 [2024-07-25 15:25:02.481568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.384 [2024-07-25 15:25:02.481596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.384 qpair failed and we were unable to recover it. 
00:29:10.384 [2024-07-25 15:25:02.482080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.384 [2024-07-25 15:25:02.482091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.384 qpair failed and we were unable to recover it. 00:29:10.384 [2024-07-25 15:25:02.482547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.384 [2024-07-25 15:25:02.482555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.384 qpair failed and we were unable to recover it. 00:29:10.384 [2024-07-25 15:25:02.483012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.384 [2024-07-25 15:25:02.483021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.384 qpair failed and we were unable to recover it. 00:29:10.384 [2024-07-25 15:25:02.483504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.384 [2024-07-25 15:25:02.483533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.384 qpair failed and we were unable to recover it. 00:29:10.384 [2024-07-25 15:25:02.484055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.384 [2024-07-25 15:25:02.484064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.384 qpair failed and we were unable to recover it. 
00:29:10.384 [2024-07-25 15:25:02.484616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.384 [2024-07-25 15:25:02.484648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.384 qpair failed and we were unable to recover it. 00:29:10.384 [2024-07-25 15:25:02.485126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.384 [2024-07-25 15:25:02.485136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.384 qpair failed and we were unable to recover it. 00:29:10.384 [2024-07-25 15:25:02.485688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.384 [2024-07-25 15:25:02.485716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.384 qpair failed and we were unable to recover it. 00:29:10.384 [2024-07-25 15:25:02.486192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.385 [2024-07-25 15:25:02.486214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.385 qpair failed and we were unable to recover it. 00:29:10.385 [2024-07-25 15:25:02.486769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.385 [2024-07-25 15:25:02.486798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.385 qpair failed and we were unable to recover it. 
00:29:10.385 [2024-07-25 15:25:02.487420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.385 [2024-07-25 15:25:02.487449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.385 qpair failed and we were unable to recover it. 00:29:10.385 [2024-07-25 15:25:02.487900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.385 [2024-07-25 15:25:02.487910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.385 qpair failed and we were unable to recover it. 00:29:10.385 [2024-07-25 15:25:02.488453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.385 [2024-07-25 15:25:02.488482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.385 qpair failed and we were unable to recover it. 00:29:10.385 [2024-07-25 15:25:02.488947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.385 [2024-07-25 15:25:02.488957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.385 qpair failed and we were unable to recover it. 00:29:10.385 [2024-07-25 15:25:02.489427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.385 [2024-07-25 15:25:02.489456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.385 qpair failed and we were unable to recover it. 
00:29:10.385 [2024-07-25 15:25:02.489968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.385 [2024-07-25 15:25:02.489978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.385 qpair failed and we were unable to recover it. 00:29:10.385 [2024-07-25 15:25:02.490509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.385 [2024-07-25 15:25:02.490538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.385 qpair failed and we were unable to recover it. 00:29:10.385 [2024-07-25 15:25:02.490997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.385 [2024-07-25 15:25:02.491006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.385 qpair failed and we were unable to recover it. 00:29:10.385 [2024-07-25 15:25:02.491557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.385 [2024-07-25 15:25:02.491586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.385 qpair failed and we were unable to recover it. 00:29:10.385 [2024-07-25 15:25:02.492048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.385 [2024-07-25 15:25:02.492058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.385 qpair failed and we were unable to recover it. 
00:29:10.388 [2024-07-25 15:25:02.546736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.388 [2024-07-25 15:25:02.546764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.388 qpair failed and we were unable to recover it. 00:29:10.388 [2024-07-25 15:25:02.547112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.388 [2024-07-25 15:25:02.547123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.388 qpair failed and we were unable to recover it. 00:29:10.388 [2024-07-25 15:25:02.547676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.388 [2024-07-25 15:25:02.547706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.388 qpair failed and we were unable to recover it. 00:29:10.388 [2024-07-25 15:25:02.548173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.388 [2024-07-25 15:25:02.548184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.388 qpair failed and we were unable to recover it. 00:29:10.388 [2024-07-25 15:25:02.548653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.388 [2024-07-25 15:25:02.548683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.388 qpair failed and we were unable to recover it. 
00:29:10.388 [2024-07-25 15:25:02.549129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.388 [2024-07-25 15:25:02.549140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.388 qpair failed and we were unable to recover it. 00:29:10.388 [2024-07-25 15:25:02.549594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.388 [2024-07-25 15:25:02.549604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.388 qpair failed and we were unable to recover it. 00:29:10.682 [2024-07-25 15:25:02.550052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.682 [2024-07-25 15:25:02.550063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.682 qpair failed and we were unable to recover it. 00:29:10.682 [2024-07-25 15:25:02.550542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.682 [2024-07-25 15:25:02.550572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.682 qpair failed and we were unable to recover it. 00:29:10.682 [2024-07-25 15:25:02.551050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.682 [2024-07-25 15:25:02.551060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.682 qpair failed and we were unable to recover it. 
00:29:10.682 [2024-07-25 15:25:02.551643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.682 [2024-07-25 15:25:02.551672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.682 qpair failed and we were unable to recover it. 00:29:10.682 [2024-07-25 15:25:02.552016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.682 [2024-07-25 15:25:02.552027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.682 qpair failed and we were unable to recover it. 00:29:10.682 [2024-07-25 15:25:02.552513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.682 [2024-07-25 15:25:02.552542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.682 qpair failed and we were unable to recover it. 00:29:10.682 [2024-07-25 15:25:02.552996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.682 [2024-07-25 15:25:02.553006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.682 qpair failed and we were unable to recover it. 00:29:10.682 [2024-07-25 15:25:02.553519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.682 [2024-07-25 15:25:02.553548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.682 qpair failed and we were unable to recover it. 
00:29:10.682 [2024-07-25 15:25:02.554003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.682 [2024-07-25 15:25:02.554013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.682 qpair failed and we were unable to recover it. 00:29:10.682 [2024-07-25 15:25:02.554527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.682 [2024-07-25 15:25:02.554556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.682 qpair failed and we were unable to recover it. 00:29:10.682 [2024-07-25 15:25:02.555007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.682 [2024-07-25 15:25:02.555018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.682 qpair failed and we were unable to recover it. 00:29:10.682 [2024-07-25 15:25:02.555502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.682 [2024-07-25 15:25:02.555531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.682 qpair failed and we were unable to recover it. 00:29:10.682 [2024-07-25 15:25:02.556002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.682 [2024-07-25 15:25:02.556011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.682 qpair failed and we were unable to recover it. 
00:29:10.682 [2024-07-25 15:25:02.556565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.682 [2024-07-25 15:25:02.556595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.682 qpair failed and we were unable to recover it. 00:29:10.682 [2024-07-25 15:25:02.557099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.682 [2024-07-25 15:25:02.557110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.682 qpair failed and we were unable to recover it. 00:29:10.682 [2024-07-25 15:25:02.557678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.682 [2024-07-25 15:25:02.557707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.682 qpair failed and we were unable to recover it. 00:29:10.682 [2024-07-25 15:25:02.558180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.682 [2024-07-25 15:25:02.558190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.682 qpair failed and we were unable to recover it. 00:29:10.682 [2024-07-25 15:25:02.558749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.682 [2024-07-25 15:25:02.558778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.682 qpair failed and we were unable to recover it. 
00:29:10.682 [2024-07-25 15:25:02.559115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.682 [2024-07-25 15:25:02.559126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.682 qpair failed and we were unable to recover it. 00:29:10.682 [2024-07-25 15:25:02.559578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.682 [2024-07-25 15:25:02.559586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.682 qpair failed and we were unable to recover it. 00:29:10.682 [2024-07-25 15:25:02.560039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.682 [2024-07-25 15:25:02.560049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.682 qpair failed and we were unable to recover it. 00:29:10.682 [2024-07-25 15:25:02.560629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.682 [2024-07-25 15:25:02.560658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.682 qpair failed and we were unable to recover it. 00:29:10.682 [2024-07-25 15:25:02.561076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.682 [2024-07-25 15:25:02.561086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.682 qpair failed and we were unable to recover it. 
00:29:10.682 [2024-07-25 15:25:02.561569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.682 [2024-07-25 15:25:02.561597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.682 qpair failed and we were unable to recover it. 00:29:10.682 [2024-07-25 15:25:02.562065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.682 [2024-07-25 15:25:02.562075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.682 qpair failed and we were unable to recover it. 00:29:10.682 [2024-07-25 15:25:02.562538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.683 [2024-07-25 15:25:02.562567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.683 qpair failed and we were unable to recover it. 00:29:10.683 [2024-07-25 15:25:02.562919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.683 [2024-07-25 15:25:02.562929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.683 qpair failed and we were unable to recover it. 00:29:10.683 [2024-07-25 15:25:02.563516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.683 [2024-07-25 15:25:02.563545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.683 qpair failed and we were unable to recover it. 
00:29:10.683 [2024-07-25 15:25:02.564014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.683 [2024-07-25 15:25:02.564027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.683 qpair failed and we were unable to recover it. 00:29:10.683 [2024-07-25 15:25:02.564498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.683 [2024-07-25 15:25:02.564526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.683 qpair failed and we were unable to recover it. 00:29:10.683 [2024-07-25 15:25:02.564993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.683 [2024-07-25 15:25:02.565003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.683 qpair failed and we were unable to recover it. 00:29:10.683 [2024-07-25 15:25:02.565501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.683 [2024-07-25 15:25:02.565530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.683 qpair failed and we were unable to recover it. 00:29:10.683 [2024-07-25 15:25:02.565995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.683 [2024-07-25 15:25:02.566005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.683 qpair failed and we were unable to recover it. 
00:29:10.683 [2024-07-25 15:25:02.566458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.683 [2024-07-25 15:25:02.566487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.683 qpair failed and we were unable to recover it. 00:29:10.683 [2024-07-25 15:25:02.566811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.683 [2024-07-25 15:25:02.566822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.683 qpair failed and we were unable to recover it. 00:29:10.683 [2024-07-25 15:25:02.567282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.683 [2024-07-25 15:25:02.567291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.683 qpair failed and we were unable to recover it. 00:29:10.683 [2024-07-25 15:25:02.567745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.683 [2024-07-25 15:25:02.567753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.683 qpair failed and we were unable to recover it. 00:29:10.683 [2024-07-25 15:25:02.568159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.683 [2024-07-25 15:25:02.568167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.683 qpair failed and we were unable to recover it. 
00:29:10.683 [2024-07-25 15:25:02.568637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.683 [2024-07-25 15:25:02.568645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.683 qpair failed and we were unable to recover it. 00:29:10.683 [2024-07-25 15:25:02.569132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.683 [2024-07-25 15:25:02.569140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.683 qpair failed and we were unable to recover it. 00:29:10.683 [2024-07-25 15:25:02.569495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.683 [2024-07-25 15:25:02.569503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.683 qpair failed and we were unable to recover it. 00:29:10.683 [2024-07-25 15:25:02.569940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.683 [2024-07-25 15:25:02.569949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.683 qpair failed and we were unable to recover it. 00:29:10.683 [2024-07-25 15:25:02.570490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.683 [2024-07-25 15:25:02.570519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.683 qpair failed and we were unable to recover it. 
00:29:10.683 [2024-07-25 15:25:02.570996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.683 [2024-07-25 15:25:02.571006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.683 qpair failed and we were unable to recover it. 00:29:10.683 [2024-07-25 15:25:02.571486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.683 [2024-07-25 15:25:02.571515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.683 qpair failed and we were unable to recover it. 00:29:10.683 [2024-07-25 15:25:02.571816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.683 [2024-07-25 15:25:02.571827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.683 qpair failed and we were unable to recover it. 00:29:10.683 [2024-07-25 15:25:02.572174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.683 [2024-07-25 15:25:02.572182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.683 qpair failed and we were unable to recover it. 00:29:10.683 [2024-07-25 15:25:02.572629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.683 [2024-07-25 15:25:02.572638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.683 qpair failed and we were unable to recover it. 
00:29:10.683 [2024-07-25 15:25:02.573095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.683 [2024-07-25 15:25:02.573103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.683 qpair failed and we were unable to recover it. 00:29:10.683 [2024-07-25 15:25:02.573612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.683 [2024-07-25 15:25:02.573620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.683 qpair failed and we were unable to recover it. 00:29:10.683 [2024-07-25 15:25:02.574101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.683 [2024-07-25 15:25:02.574109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.683 qpair failed and we were unable to recover it. 00:29:10.683 [2024-07-25 15:25:02.574637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.683 [2024-07-25 15:25:02.574647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.683 qpair failed and we were unable to recover it. 00:29:10.683 [2024-07-25 15:25:02.575102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.683 [2024-07-25 15:25:02.575110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.683 qpair failed and we were unable to recover it. 
00:29:10.683 [2024-07-25 15:25:02.575653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.683 [2024-07-25 15:25:02.575682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.683 qpair failed and we were unable to recover it. 00:29:10.683 [2024-07-25 15:25:02.576139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.683 [2024-07-25 15:25:02.576150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.683 qpair failed and we were unable to recover it. 00:29:10.683 [2024-07-25 15:25:02.576631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.683 [2024-07-25 15:25:02.576641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.683 qpair failed and we were unable to recover it. 00:29:10.683 [2024-07-25 15:25:02.577119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.683 [2024-07-25 15:25:02.577128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.683 qpair failed and we were unable to recover it. 00:29:10.683 [2024-07-25 15:25:02.577570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.683 [2024-07-25 15:25:02.577600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.683 qpair failed and we were unable to recover it. 
00:29:10.683 [2024-07-25 15:25:02.578058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.683 [2024-07-25 15:25:02.578068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.683 qpair failed and we were unable to recover it. 00:29:10.683 [2024-07-25 15:25:02.578619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.683 [2024-07-25 15:25:02.578648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.683 qpair failed and we were unable to recover it. 00:29:10.683 [2024-07-25 15:25:02.579006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.684 [2024-07-25 15:25:02.579016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.684 qpair failed and we were unable to recover it. 00:29:10.684 [2024-07-25 15:25:02.579565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.684 [2024-07-25 15:25:02.579595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.684 qpair failed and we were unable to recover it. 00:29:10.684 [2024-07-25 15:25:02.579844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.684 [2024-07-25 15:25:02.579855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.684 qpair failed and we were unable to recover it. 
00:29:10.684 [2024-07-25 15:25:02.579951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.684 [2024-07-25 15:25:02.579961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.684 qpair failed and we were unable to recover it. 00:29:10.684 [2024-07-25 15:25:02.580409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.684 [2024-07-25 15:25:02.580418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.684 qpair failed and we were unable to recover it. 00:29:10.684 [2024-07-25 15:25:02.580896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.684 [2024-07-25 15:25:02.580904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.684 qpair failed and we were unable to recover it. 00:29:10.684 [2024-07-25 15:25:02.581007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.684 [2024-07-25 15:25:02.581015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.684 qpair failed and we were unable to recover it. 00:29:10.684 [2024-07-25 15:25:02.581326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.684 [2024-07-25 15:25:02.581334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.684 qpair failed and we were unable to recover it. 
00:29:10.684 [2024-07-25 15:25:02.581778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.684 [2024-07-25 15:25:02.581789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.684 qpair failed and we were unable to recover it.
00:29:10.684 [2024-07-25 15:25:02.582188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.684 [2024-07-25 15:25:02.582197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.684 qpair failed and we were unable to recover it.
00:29:10.684 [2024-07-25 15:25:02.582554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.684 [2024-07-25 15:25:02.582563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.684 qpair failed and we were unable to recover it.
00:29:10.684 [2024-07-25 15:25:02.583043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.684 [2024-07-25 15:25:02.583053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.684 qpair failed and we were unable to recover it.
00:29:10.684 [2024-07-25 15:25:02.583480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.684 [2024-07-25 15:25:02.583488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.684 qpair failed and we were unable to recover it.
00:29:10.684 [2024-07-25 15:25:02.583943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.684 [2024-07-25 15:25:02.583951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.684 qpair failed and we were unable to recover it.
00:29:10.684 [2024-07-25 15:25:02.584479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.684 [2024-07-25 15:25:02.584507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.684 qpair failed and we were unable to recover it.
00:29:10.684 [2024-07-25 15:25:02.584972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.684 [2024-07-25 15:25:02.584982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.684 qpair failed and we were unable to recover it.
00:29:10.684 [2024-07-25 15:25:02.585548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.684 [2024-07-25 15:25:02.585577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.684 qpair failed and we were unable to recover it.
00:29:10.684 [2024-07-25 15:25:02.586042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.684 [2024-07-25 15:25:02.586053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.684 qpair failed and we were unable to recover it.
00:29:10.684 [2024-07-25 15:25:02.586617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.684 [2024-07-25 15:25:02.586646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.684 qpair failed and we were unable to recover it.
00:29:10.684 [2024-07-25 15:25:02.587148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.684 [2024-07-25 15:25:02.587158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.684 qpair failed and we were unable to recover it.
00:29:10.684 [2024-07-25 15:25:02.587693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.684 [2024-07-25 15:25:02.587723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.684 qpair failed and we were unable to recover it.
00:29:10.684 [2024-07-25 15:25:02.588082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.684 [2024-07-25 15:25:02.588092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.684 qpair failed and we were unable to recover it.
00:29:10.684 [2024-07-25 15:25:02.588588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.684 [2024-07-25 15:25:02.588598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.684 qpair failed and we were unable to recover it.
00:29:10.684 [2024-07-25 15:25:02.588919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.684 [2024-07-25 15:25:02.588928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.684 qpair failed and we were unable to recover it.
00:29:10.684 [2024-07-25 15:25:02.589522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.684 [2024-07-25 15:25:02.589550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.684 qpair failed and we were unable to recover it.
00:29:10.684 [2024-07-25 15:25:02.590017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.684 [2024-07-25 15:25:02.590026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.684 qpair failed and we were unable to recover it.
00:29:10.684 [2024-07-25 15:25:02.590401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.684 [2024-07-25 15:25:02.590430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.684 qpair failed and we were unable to recover it.
00:29:10.684 [2024-07-25 15:25:02.590913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.684 [2024-07-25 15:25:02.590922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.684 qpair failed and we were unable to recover it.
00:29:10.684 [2024-07-25 15:25:02.591158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.684 [2024-07-25 15:25:02.591167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.684 qpair failed and we were unable to recover it.
00:29:10.684 [2024-07-25 15:25:02.591611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.684 [2024-07-25 15:25:02.591620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.684 qpair failed and we were unable to recover it.
00:29:10.684 [2024-07-25 15:25:02.591973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.684 [2024-07-25 15:25:02.591983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.684 qpair failed and we were unable to recover it.
00:29:10.684 [2024-07-25 15:25:02.592541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.684 [2024-07-25 15:25:02.592570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.684 qpair failed and we were unable to recover it.
00:29:10.684 [2024-07-25 15:25:02.593050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.684 [2024-07-25 15:25:02.593059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.684 qpair failed and we were unable to recover it.
00:29:10.684 [2024-07-25 15:25:02.593493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.684 [2024-07-25 15:25:02.593524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.684 qpair failed and we were unable to recover it.
00:29:10.684 [2024-07-25 15:25:02.593989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.684 [2024-07-25 15:25:02.593998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.684 qpair failed and we were unable to recover it.
00:29:10.685 [2024-07-25 15:25:02.594537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.685 [2024-07-25 15:25:02.594566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.685 qpair failed and we were unable to recover it.
00:29:10.685 [2024-07-25 15:25:02.595039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.685 [2024-07-25 15:25:02.595049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.685 qpair failed and we were unable to recover it.
00:29:10.685 [2024-07-25 15:25:02.595601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.685 [2024-07-25 15:25:02.595630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.685 qpair failed and we were unable to recover it.
00:29:10.685 [2024-07-25 15:25:02.595965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.685 [2024-07-25 15:25:02.595976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.685 qpair failed and we were unable to recover it.
00:29:10.685 [2024-07-25 15:25:02.596431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.685 [2024-07-25 15:25:02.596460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.685 qpair failed and we were unable to recover it.
00:29:10.685 [2024-07-25 15:25:02.596949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.685 [2024-07-25 15:25:02.596959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.685 qpair failed and we were unable to recover it.
00:29:10.685 [2024-07-25 15:25:02.597491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.685 [2024-07-25 15:25:02.597520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.685 qpair failed and we were unable to recover it.
00:29:10.685 [2024-07-25 15:25:02.597972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.685 [2024-07-25 15:25:02.597983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.685 qpair failed and we were unable to recover it.
00:29:10.685 [2024-07-25 15:25:02.598553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.685 [2024-07-25 15:25:02.598581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.685 qpair failed and we were unable to recover it.
00:29:10.685 [2024-07-25 15:25:02.599030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.685 [2024-07-25 15:25:02.599040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.685 qpair failed and we were unable to recover it.
00:29:10.685 [2024-07-25 15:25:02.599605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.685 [2024-07-25 15:25:02.599634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.685 qpair failed and we were unable to recover it.
00:29:10.685 [2024-07-25 15:25:02.600093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.685 [2024-07-25 15:25:02.600104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.685 qpair failed and we were unable to recover it.
00:29:10.685 [2024-07-25 15:25:02.600569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.685 [2024-07-25 15:25:02.600578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.685 qpair failed and we were unable to recover it.
00:29:10.685 [2024-07-25 15:25:02.601047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.685 [2024-07-25 15:25:02.601060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.685 qpair failed and we were unable to recover it.
00:29:10.685 [2024-07-25 15:25:02.601504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.685 [2024-07-25 15:25:02.601534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.685 qpair failed and we were unable to recover it.
00:29:10.685 [2024-07-25 15:25:02.601883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.685 [2024-07-25 15:25:02.601893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.685 qpair failed and we were unable to recover it.
00:29:10.685 [2024-07-25 15:25:02.602480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.685 [2024-07-25 15:25:02.602509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.685 qpair failed and we were unable to recover it.
00:29:10.685 [2024-07-25 15:25:02.602990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.685 [2024-07-25 15:25:02.603000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.685 qpair failed and we were unable to recover it.
00:29:10.685 [2024-07-25 15:25:02.603545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.685 [2024-07-25 15:25:02.603575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.685 qpair failed and we were unable to recover it.
00:29:10.685 [2024-07-25 15:25:02.604039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.685 [2024-07-25 15:25:02.604050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.685 qpair failed and we were unable to recover it.
00:29:10.685 [2024-07-25 15:25:02.604589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.685 [2024-07-25 15:25:02.604618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.685 qpair failed and we were unable to recover it.
00:29:10.685 [2024-07-25 15:25:02.605094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.685 [2024-07-25 15:25:02.605104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.685 qpair failed and we were unable to recover it.
00:29:10.685 [2024-07-25 15:25:02.605492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.685 [2024-07-25 15:25:02.605501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.685 qpair failed and we were unable to recover it.
00:29:10.685 [2024-07-25 15:25:02.605976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.685 [2024-07-25 15:25:02.605985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.685 qpair failed and we were unable to recover it.
00:29:10.685 [2024-07-25 15:25:02.606440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.685 [2024-07-25 15:25:02.606469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.685 qpair failed and we were unable to recover it.
00:29:10.685 [2024-07-25 15:25:02.606938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.685 [2024-07-25 15:25:02.606948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.685 qpair failed and we were unable to recover it.
00:29:10.685 [2024-07-25 15:25:02.607498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.685 [2024-07-25 15:25:02.607534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.685 qpair failed and we were unable to recover it.
00:29:10.685 [2024-07-25 15:25:02.607997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.685 [2024-07-25 15:25:02.608007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.685 qpair failed and we were unable to recover it.
00:29:10.685 [2024-07-25 15:25:02.608565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.685 [2024-07-25 15:25:02.608595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.685 qpair failed and we were unable to recover it.
00:29:10.685 [2024-07-25 15:25:02.608948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.685 [2024-07-25 15:25:02.608959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.685 qpair failed and we were unable to recover it.
00:29:10.685 [2024-07-25 15:25:02.609512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.685 [2024-07-25 15:25:02.609544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.685 qpair failed and we were unable to recover it.
00:29:10.685 [2024-07-25 15:25:02.609908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.685 [2024-07-25 15:25:02.609920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.685 qpair failed and we were unable to recover it.
00:29:10.685 [2024-07-25 15:25:02.610256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.685 [2024-07-25 15:25:02.610265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.685 qpair failed and we were unable to recover it.
00:29:10.685 [2024-07-25 15:25:02.610744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.685 [2024-07-25 15:25:02.610752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.685 qpair failed and we were unable to recover it.
00:29:10.685 [2024-07-25 15:25:02.611250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.685 [2024-07-25 15:25:02.611258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.685 qpair failed and we were unable to recover it.
00:29:10.685 [2024-07-25 15:25:02.611726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.685 [2024-07-25 15:25:02.611734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.685 qpair failed and we were unable to recover it.
00:29:10.686 [2024-07-25 15:25:02.612211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.686 [2024-07-25 15:25:02.612220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.686 qpair failed and we were unable to recover it.
00:29:10.686 [2024-07-25 15:25:02.612689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.686 [2024-07-25 15:25:02.612697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.686 qpair failed and we were unable to recover it.
00:29:10.686 [2024-07-25 15:25:02.613157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.686 [2024-07-25 15:25:02.613165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.686 qpair failed and we were unable to recover it.
00:29:10.686 [2024-07-25 15:25:02.613624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.686 [2024-07-25 15:25:02.613632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.686 qpair failed and we were unable to recover it.
00:29:10.686 [2024-07-25 15:25:02.614064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.686 [2024-07-25 15:25:02.614073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.686 qpair failed and we were unable to recover it.
00:29:10.686 [2024-07-25 15:25:02.614613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.686 [2024-07-25 15:25:02.614642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.686 qpair failed and we were unable to recover it.
00:29:10.686 [2024-07-25 15:25:02.615103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.686 [2024-07-25 15:25:02.615113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.686 qpair failed and we were unable to recover it.
00:29:10.686 [2024-07-25 15:25:02.615321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.686 [2024-07-25 15:25:02.615333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.686 qpair failed and we were unable to recover it.
00:29:10.686 [2024-07-25 15:25:02.615805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.686 [2024-07-25 15:25:02.615814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.686 qpair failed and we were unable to recover it.
00:29:10.686 [2024-07-25 15:25:02.616298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.686 [2024-07-25 15:25:02.616307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.686 qpair failed and we were unable to recover it.
00:29:10.686 [2024-07-25 15:25:02.616521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.686 [2024-07-25 15:25:02.616531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.686 qpair failed and we were unable to recover it.
00:29:10.686 [2024-07-25 15:25:02.616960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.686 [2024-07-25 15:25:02.616969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.686 qpair failed and we were unable to recover it.
00:29:10.686 [2024-07-25 15:25:02.617412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.686 [2024-07-25 15:25:02.617420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.686 qpair failed and we were unable to recover it.
00:29:10.686 [2024-07-25 15:25:02.617894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.686 [2024-07-25 15:25:02.617902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.686 qpair failed and we were unable to recover it.
00:29:10.686 [2024-07-25 15:25:02.618114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.686 [2024-07-25 15:25:02.618123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.686 qpair failed and we were unable to recover it.
00:29:10.686 [2024-07-25 15:25:02.618595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.686 [2024-07-25 15:25:02.618604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.686 qpair failed and we were unable to recover it.
00:29:10.686 [2024-07-25 15:25:02.619071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.686 [2024-07-25 15:25:02.619079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.686 qpair failed and we were unable to recover it.
00:29:10.686 [2024-07-25 15:25:02.619523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.686 [2024-07-25 15:25:02.619536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.686 qpair failed and we were unable to recover it.
00:29:10.686 [2024-07-25 15:25:02.619944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.686 [2024-07-25 15:25:02.619953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.686 qpair failed and we were unable to recover it.
00:29:10.686 [2024-07-25 15:25:02.620492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.686 [2024-07-25 15:25:02.620520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.686 qpair failed and we were unable to recover it.
00:29:10.686 [2024-07-25 15:25:02.620974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.686 [2024-07-25 15:25:02.620984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.686 qpair failed and we were unable to recover it.
00:29:10.686 [2024-07-25 15:25:02.621599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.686 [2024-07-25 15:25:02.621628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.686 qpair failed and we were unable to recover it.
00:29:10.686 [2024-07-25 15:25:02.622096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.686 [2024-07-25 15:25:02.622106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.686 qpair failed and we were unable to recover it.
00:29:10.686 [2024-07-25 15:25:02.622593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.686 [2024-07-25 15:25:02.622602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.686 qpair failed and we were unable to recover it.
00:29:10.686 [2024-07-25 15:25:02.623070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.686 [2024-07-25 15:25:02.623078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.686 qpair failed and we were unable to recover it.
00:29:10.686 [2024-07-25 15:25:02.623640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.686 [2024-07-25 15:25:02.623669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.686 qpair failed and we were unable to recover it.
00:29:10.686 [2024-07-25 15:25:02.624138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.686 [2024-07-25 15:25:02.624149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.686 qpair failed and we were unable to recover it.
00:29:10.686 [2024-07-25 15:25:02.624713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.686 [2024-07-25 15:25:02.624742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.686 qpair failed and we were unable to recover it.
00:29:10.686 [2024-07-25 15:25:02.625193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.686 [2024-07-25 15:25:02.625210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.686 qpair failed and we were unable to recover it.
00:29:10.686 [2024-07-25 15:25:02.625795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.686 [2024-07-25 15:25:02.625824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.686 qpair failed and we were unable to recover it.
00:29:10.686 [2024-07-25 15:25:02.626398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.686 [2024-07-25 15:25:02.626427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.686 qpair failed and we were unable to recover it.
00:29:10.686 [2024-07-25 15:25:02.626888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.686 [2024-07-25 15:25:02.626898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.686 qpair failed and we were unable to recover it.
00:29:10.686 [2024-07-25 15:25:02.627437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.686 [2024-07-25 15:25:02.627466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.686 qpair failed and we were unable to recover it.
00:29:10.686 [2024-07-25 15:25:02.627912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.686 [2024-07-25 15:25:02.627925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.686 qpair failed and we were unable to recover it.
00:29:10.686 [2024-07-25 15:25:02.628467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.686 [2024-07-25 15:25:02.628496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.686 qpair failed and we were unable to recover it.
00:29:10.686 [2024-07-25 15:25:02.628954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.687 [2024-07-25 15:25:02.628964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.687 qpair failed and we were unable to recover it.
00:29:10.687 [2024-07-25 15:25:02.629600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.687 [2024-07-25 15:25:02.629629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.687 qpair failed and we were unable to recover it.
00:29:10.687 [2024-07-25 15:25:02.630107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.687 [2024-07-25 15:25:02.630117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.687 qpair failed and we were unable to recover it.
00:29:10.687 [2024-07-25 15:25:02.630586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.687 [2024-07-25 15:25:02.630595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.687 qpair failed and we were unable to recover it.
00:29:10.687 [2024-07-25 15:25:02.631042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.687 [2024-07-25 15:25:02.631050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.687 qpair failed and we were unable to recover it.
00:29:10.687 [2024-07-25 15:25:02.631602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.687 [2024-07-25 15:25:02.631631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.687 qpair failed and we were unable to recover it.
00:29:10.687 [2024-07-25 15:25:02.632104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.687 [2024-07-25 15:25:02.632114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.687 qpair failed and we were unable to recover it.
00:29:10.687 [2024-07-25 15:25:02.632649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.687 [2024-07-25 15:25:02.632678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.687 qpair failed and we were unable to recover it.
00:29:10.687 [2024-07-25 15:25:02.633129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.687 [2024-07-25 15:25:02.633139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.687 qpair failed and we were unable to recover it.
00:29:10.687 [2024-07-25 15:25:02.633595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.687 [2024-07-25 15:25:02.633604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.687 qpair failed and we were unable to recover it.
00:29:10.687 [2024-07-25 15:25:02.634082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.687 [2024-07-25 15:25:02.634091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.687 qpair failed and we were unable to recover it.
00:29:10.687 [2024-07-25 15:25:02.634406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.687 [2024-07-25 15:25:02.634415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.687 qpair failed and we were unable to recover it.
00:29:10.687 [2024-07-25 15:25:02.634889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.687 [2024-07-25 15:25:02.634898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.687 qpair failed and we were unable to recover it.
00:29:10.687 [2024-07-25 15:25:02.635458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.687 [2024-07-25 15:25:02.635487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.687 qpair failed and we were unable to recover it.
00:29:10.687 [2024-07-25 15:25:02.635968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.687 [2024-07-25 15:25:02.635977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.687 qpair failed and we were unable to recover it.
00:29:10.687 [2024-07-25 15:25:02.636528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.687 [2024-07-25 15:25:02.636557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.687 qpair failed and we were unable to recover it.
00:29:10.687 [2024-07-25 15:25:02.637018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.687 [2024-07-25 15:25:02.637029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.687 qpair failed and we were unable to recover it.
00:29:10.687 [2024-07-25 15:25:02.637610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.687 [2024-07-25 15:25:02.637639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.687 qpair failed and we were unable to recover it. 00:29:10.687 [2024-07-25 15:25:02.638110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.687 [2024-07-25 15:25:02.638120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.687 qpair failed and we were unable to recover it. 00:29:10.687 [2024-07-25 15:25:02.638581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.687 [2024-07-25 15:25:02.638589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.687 qpair failed and we were unable to recover it. 00:29:10.687 [2024-07-25 15:25:02.638824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.687 [2024-07-25 15:25:02.638831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.687 qpair failed and we were unable to recover it. 00:29:10.687 [2024-07-25 15:25:02.639295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.687 [2024-07-25 15:25:02.639303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.687 qpair failed and we were unable to recover it. 
00:29:10.687 [2024-07-25 15:25:02.639791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.687 [2024-07-25 15:25:02.639810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.687 qpair failed and we were unable to recover it. 00:29:10.687 [2024-07-25 15:25:02.640263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.687 [2024-07-25 15:25:02.640272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.687 qpair failed and we were unable to recover it. 00:29:10.687 [2024-07-25 15:25:02.640530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.687 [2024-07-25 15:25:02.640538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.687 qpair failed and we were unable to recover it. 00:29:10.687 [2024-07-25 15:25:02.641014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.687 [2024-07-25 15:25:02.641022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.687 qpair failed and we were unable to recover it. 00:29:10.687 [2024-07-25 15:25:02.641498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.687 [2024-07-25 15:25:02.641507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.687 qpair failed and we were unable to recover it. 
00:29:10.687 [2024-07-25 15:25:02.641965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.687 [2024-07-25 15:25:02.641973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.687 qpair failed and we were unable to recover it. 00:29:10.687 [2024-07-25 15:25:02.642570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.687 [2024-07-25 15:25:02.642599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.687 qpair failed and we were unable to recover it. 00:29:10.688 [2024-07-25 15:25:02.643061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.688 [2024-07-25 15:25:02.643071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.688 qpair failed and we were unable to recover it. 00:29:10.688 [2024-07-25 15:25:02.643626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.688 [2024-07-25 15:25:02.643655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.688 qpair failed and we were unable to recover it. 00:29:10.688 [2024-07-25 15:25:02.644115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.688 [2024-07-25 15:25:02.644125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.688 qpair failed and we were unable to recover it. 
00:29:10.688 [2024-07-25 15:25:02.644692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.688 [2024-07-25 15:25:02.644722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.688 qpair failed and we were unable to recover it. 00:29:10.688 [2024-07-25 15:25:02.645168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.688 [2024-07-25 15:25:02.645178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.688 qpair failed and we were unable to recover it. 00:29:10.688 [2024-07-25 15:25:02.645734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.688 [2024-07-25 15:25:02.645763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.688 qpair failed and we were unable to recover it. 00:29:10.688 [2024-07-25 15:25:02.646094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.688 [2024-07-25 15:25:02.646104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.688 qpair failed and we were unable to recover it. 00:29:10.688 [2024-07-25 15:25:02.646565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.688 [2024-07-25 15:25:02.646573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.688 qpair failed and we were unable to recover it. 
00:29:10.688 [2024-07-25 15:25:02.647031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.688 [2024-07-25 15:25:02.647039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.688 qpair failed and we were unable to recover it. 00:29:10.688 [2024-07-25 15:25:02.647638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.688 [2024-07-25 15:25:02.647668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.688 qpair failed and we were unable to recover it. 00:29:10.688 [2024-07-25 15:25:02.648157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.688 [2024-07-25 15:25:02.648167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.688 qpair failed and we were unable to recover it. 00:29:10.688 [2024-07-25 15:25:02.648726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.688 [2024-07-25 15:25:02.648754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.688 qpair failed and we were unable to recover it. 00:29:10.688 [2024-07-25 15:25:02.649224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.688 [2024-07-25 15:25:02.649242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.688 qpair failed and we were unable to recover it. 
00:29:10.688 [2024-07-25 15:25:02.649727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.688 [2024-07-25 15:25:02.649735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.688 qpair failed and we were unable to recover it. 00:29:10.688 [2024-07-25 15:25:02.650191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.688 [2024-07-25 15:25:02.650199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.688 qpair failed and we were unable to recover it. 00:29:10.688 [2024-07-25 15:25:02.650656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.688 [2024-07-25 15:25:02.650664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.688 qpair failed and we were unable to recover it. 00:29:10.688 [2024-07-25 15:25:02.651117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.688 [2024-07-25 15:25:02.651124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.688 qpair failed and we were unable to recover it. 00:29:10.688 [2024-07-25 15:25:02.651660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.688 [2024-07-25 15:25:02.651689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.688 qpair failed and we were unable to recover it. 
00:29:10.688 [2024-07-25 15:25:02.652165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.688 [2024-07-25 15:25:02.652176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.688 qpair failed and we were unable to recover it. 00:29:10.688 [2024-07-25 15:25:02.652729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.688 [2024-07-25 15:25:02.652758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.688 qpair failed and we were unable to recover it. 00:29:10.688 [2024-07-25 15:25:02.653228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.688 [2024-07-25 15:25:02.653250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.688 qpair failed and we were unable to recover it. 00:29:10.688 [2024-07-25 15:25:02.653596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.688 [2024-07-25 15:25:02.653604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.688 qpair failed and we were unable to recover it. 00:29:10.688 [2024-07-25 15:25:02.654056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.688 [2024-07-25 15:25:02.654064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.688 qpair failed and we were unable to recover it. 
00:29:10.688 [2024-07-25 15:25:02.654525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.688 [2024-07-25 15:25:02.654533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.688 qpair failed and we were unable to recover it. 00:29:10.688 [2024-07-25 15:25:02.654986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.688 [2024-07-25 15:25:02.654994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.688 qpair failed and we were unable to recover it. 00:29:10.688 [2024-07-25 15:25:02.655473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.688 [2024-07-25 15:25:02.655502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.688 qpair failed and we were unable to recover it. 00:29:10.688 [2024-07-25 15:25:02.655953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.688 [2024-07-25 15:25:02.655963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.688 qpair failed and we were unable to recover it. 00:29:10.688 [2024-07-25 15:25:02.656525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.688 [2024-07-25 15:25:02.656554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.688 qpair failed and we were unable to recover it. 
00:29:10.688 [2024-07-25 15:25:02.657034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.688 [2024-07-25 15:25:02.657044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.688 qpair failed and we were unable to recover it. 00:29:10.688 [2024-07-25 15:25:02.657618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.688 [2024-07-25 15:25:02.657647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.688 qpair failed and we were unable to recover it. 00:29:10.688 [2024-07-25 15:25:02.658107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.688 [2024-07-25 15:25:02.658116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.688 qpair failed and we were unable to recover it. 00:29:10.688 [2024-07-25 15:25:02.658605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.688 [2024-07-25 15:25:02.658635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.688 qpair failed and we were unable to recover it. 00:29:10.688 [2024-07-25 15:25:02.659121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.688 [2024-07-25 15:25:02.659132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.688 qpair failed and we were unable to recover it. 
00:29:10.688 [2024-07-25 15:25:02.659511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.688 [2024-07-25 15:25:02.659520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.688 qpair failed and we were unable to recover it. 00:29:10.688 [2024-07-25 15:25:02.659967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.688 [2024-07-25 15:25:02.659975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.688 qpair failed and we were unable to recover it. 00:29:10.688 [2024-07-25 15:25:02.660192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.689 [2024-07-25 15:25:02.660204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.689 qpair failed and we were unable to recover it. 00:29:10.689 [2024-07-25 15:25:02.660563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.689 [2024-07-25 15:25:02.660571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.689 qpair failed and we were unable to recover it. 00:29:10.689 [2024-07-25 15:25:02.661051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.689 [2024-07-25 15:25:02.661059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.689 qpair failed and we were unable to recover it. 
00:29:10.689 [2024-07-25 15:25:02.661668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.689 [2024-07-25 15:25:02.661697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.689 qpair failed and we were unable to recover it. 00:29:10.689 [2024-07-25 15:25:02.662160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.689 [2024-07-25 15:25:02.662170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.689 qpair failed and we were unable to recover it. 00:29:10.689 [2024-07-25 15:25:02.662717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.689 [2024-07-25 15:25:02.662746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.689 qpair failed and we were unable to recover it. 00:29:10.689 [2024-07-25 15:25:02.663423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.689 [2024-07-25 15:25:02.663452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.689 qpair failed and we were unable to recover it. 00:29:10.689 [2024-07-25 15:25:02.663903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.689 [2024-07-25 15:25:02.663912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.689 qpair failed and we were unable to recover it. 
00:29:10.689 [2024-07-25 15:25:02.664504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.689 [2024-07-25 15:25:02.664533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.689 qpair failed and we were unable to recover it. 00:29:10.689 [2024-07-25 15:25:02.665060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.689 [2024-07-25 15:25:02.665070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.689 qpair failed and we were unable to recover it. 00:29:10.689 [2024-07-25 15:25:02.665433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.689 [2024-07-25 15:25:02.665461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.689 qpair failed and we were unable to recover it. 00:29:10.689 [2024-07-25 15:25:02.665925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.689 [2024-07-25 15:25:02.665934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.689 qpair failed and we were unable to recover it. 00:29:10.689 [2024-07-25 15:25:02.666527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.689 [2024-07-25 15:25:02.666556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.689 qpair failed and we were unable to recover it. 
00:29:10.689 [2024-07-25 15:25:02.667020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.689 [2024-07-25 15:25:02.667030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.689 qpair failed and we were unable to recover it. 00:29:10.689 [2024-07-25 15:25:02.667569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.689 [2024-07-25 15:25:02.667598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.689 qpair failed and we were unable to recover it. 00:29:10.689 [2024-07-25 15:25:02.668059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.689 [2024-07-25 15:25:02.668069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.689 qpair failed and we were unable to recover it. 00:29:10.689 [2024-07-25 15:25:02.668512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.689 [2024-07-25 15:25:02.668541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.689 qpair failed and we were unable to recover it. 00:29:10.689 [2024-07-25 15:25:02.669002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.689 [2024-07-25 15:25:02.669012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.689 qpair failed and we were unable to recover it. 
00:29:10.689 [2024-07-25 15:25:02.669560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.689 [2024-07-25 15:25:02.669589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.689 qpair failed and we were unable to recover it. 00:29:10.689 [2024-07-25 15:25:02.670055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.689 [2024-07-25 15:25:02.670066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.689 qpair failed and we were unable to recover it. 00:29:10.689 [2024-07-25 15:25:02.670662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.689 [2024-07-25 15:25:02.670691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.689 qpair failed and we were unable to recover it. 00:29:10.689 [2024-07-25 15:25:02.671209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.689 [2024-07-25 15:25:02.671219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.689 qpair failed and we were unable to recover it. 00:29:10.689 [2024-07-25 15:25:02.671754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.689 [2024-07-25 15:25:02.671783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.689 qpair failed and we were unable to recover it. 
00:29:10.689 [2024-07-25 15:25:02.672424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.689 [2024-07-25 15:25:02.672452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.689 qpair failed and we were unable to recover it. 00:29:10.689 [2024-07-25 15:25:02.672931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.689 [2024-07-25 15:25:02.672941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.689 qpair failed and we were unable to recover it. 00:29:10.689 [2024-07-25 15:25:02.673543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.689 [2024-07-25 15:25:02.673576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.689 qpair failed and we were unable to recover it. 00:29:10.689 [2024-07-25 15:25:02.673800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.689 [2024-07-25 15:25:02.673811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.689 qpair failed and we were unable to recover it. 00:29:10.689 [2024-07-25 15:25:02.674297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.689 [2024-07-25 15:25:02.674306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.689 qpair failed and we were unable to recover it. 
00:29:10.689 [2024-07-25 15:25:02.674876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.689 [2024-07-25 15:25:02.674884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.689 qpair failed and we were unable to recover it. 00:29:10.689 [2024-07-25 15:25:02.675237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.689 [2024-07-25 15:25:02.675245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.689 qpair failed and we were unable to recover it. 00:29:10.689 [2024-07-25 15:25:02.675721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.689 [2024-07-25 15:25:02.675729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.689 qpair failed and we were unable to recover it. 00:29:10.689 [2024-07-25 15:25:02.676194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.689 [2024-07-25 15:25:02.676206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.689 qpair failed and we were unable to recover it. 00:29:10.689 [2024-07-25 15:25:02.676665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.689 [2024-07-25 15:25:02.676673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.689 qpair failed and we were unable to recover it. 
00:29:10.689 [2024-07-25 15:25:02.677127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.689 [2024-07-25 15:25:02.677135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.689 qpair failed and we were unable to recover it. 00:29:10.689 [2024-07-25 15:25:02.677594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.689 [2024-07-25 15:25:02.677603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.689 qpair failed and we were unable to recover it. 00:29:10.690 [2024-07-25 15:25:02.677966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.690 [2024-07-25 15:25:02.677975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.690 qpair failed and we were unable to recover it. 00:29:10.690 [2024-07-25 15:25:02.678518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.690 [2024-07-25 15:25:02.678546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.690 qpair failed and we were unable to recover it. 00:29:10.690 [2024-07-25 15:25:02.679012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.690 [2024-07-25 15:25:02.679022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.690 qpair failed and we were unable to recover it. 
00:29:10.690 [2024-07-25 15:25:02.679719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.690 [2024-07-25 15:25:02.679748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.690 qpair failed and we were unable to recover it.
00:29:10.690 [2024-07-25 15:25:02.680416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.690 [2024-07-25 15:25:02.680445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.690 qpair failed and we were unable to recover it.
00:29:10.690 [2024-07-25 15:25:02.680794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.690 [2024-07-25 15:25:02.680805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.690 qpair failed and we were unable to recover it.
00:29:10.690 [2024-07-25 15:25:02.681427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.690 [2024-07-25 15:25:02.681456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.690 qpair failed and we were unable to recover it.
00:29:10.690 [2024-07-25 15:25:02.681919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.690 [2024-07-25 15:25:02.681929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.690 qpair failed and we were unable to recover it.
00:29:10.690 [2024-07-25 15:25:02.682515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.690 [2024-07-25 15:25:02.682544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.690 qpair failed and we were unable to recover it.
00:29:10.690 [2024-07-25 15:25:02.682904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.690 [2024-07-25 15:25:02.682913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.690 qpair failed and we were unable to recover it.
00:29:10.690 [2024-07-25 15:25:02.683134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.690 [2024-07-25 15:25:02.683146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.690 qpair failed and we were unable to recover it.
00:29:10.690 [2024-07-25 15:25:02.683508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.690 [2024-07-25 15:25:02.683517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.690 qpair failed and we were unable to recover it.
00:29:10.690 [2024-07-25 15:25:02.683879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.690 [2024-07-25 15:25:02.683887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.690 qpair failed and we were unable to recover it.
00:29:10.690 [2024-07-25 15:25:02.684334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.690 [2024-07-25 15:25:02.684343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.690 qpair failed and we were unable to recover it.
00:29:10.690 [2024-07-25 15:25:02.684575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.690 [2024-07-25 15:25:02.684585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.690 qpair failed and we were unable to recover it.
00:29:10.690 [2024-07-25 15:25:02.685074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.690 [2024-07-25 15:25:02.685082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.690 qpair failed and we were unable to recover it.
00:29:10.690 [2024-07-25 15:25:02.685531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.690 [2024-07-25 15:25:02.685539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.690 qpair failed and we were unable to recover it.
00:29:10.690 [2024-07-25 15:25:02.686003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.690 [2024-07-25 15:25:02.686011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.690 qpair failed and we were unable to recover it.
00:29:10.690 [2024-07-25 15:25:02.686568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.690 [2024-07-25 15:25:02.686597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.690 qpair failed and we were unable to recover it.
00:29:10.690 [2024-07-25 15:25:02.687071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.690 [2024-07-25 15:25:02.687082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.690 qpair failed and we were unable to recover it.
00:29:10.690 [2024-07-25 15:25:02.687539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.690 [2024-07-25 15:25:02.687569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.690 qpair failed and we were unable to recover it.
00:29:10.690 [2024-07-25 15:25:02.687930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.690 [2024-07-25 15:25:02.687941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.690 qpair failed and we were unable to recover it.
00:29:10.690 [2024-07-25 15:25:02.688502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.690 [2024-07-25 15:25:02.688531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.690 qpair failed and we were unable to recover it.
00:29:10.690 [2024-07-25 15:25:02.689007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.690 [2024-07-25 15:25:02.689016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.690 qpair failed and we were unable to recover it.
00:29:10.690 [2024-07-25 15:25:02.689478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.690 [2024-07-25 15:25:02.689507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.690 qpair failed and we were unable to recover it.
00:29:10.690 [2024-07-25 15:25:02.689924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.690 [2024-07-25 15:25:02.689934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.690 qpair failed and we were unable to recover it.
00:29:10.690 [2024-07-25 15:25:02.690541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.690 [2024-07-25 15:25:02.690570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.690 qpair failed and we were unable to recover it.
00:29:10.690 [2024-07-25 15:25:02.690855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.690 [2024-07-25 15:25:02.690865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.690 qpair failed and we were unable to recover it.
00:29:10.690 [2024-07-25 15:25:02.691315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.690 [2024-07-25 15:25:02.691323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.690 qpair failed and we were unable to recover it.
00:29:10.690 [2024-07-25 15:25:02.691782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.690 [2024-07-25 15:25:02.691790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.690 qpair failed and we were unable to recover it.
00:29:10.690 [2024-07-25 15:25:02.692260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.690 [2024-07-25 15:25:02.692271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.690 qpair failed and we were unable to recover it.
00:29:10.690 [2024-07-25 15:25:02.692734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.690 [2024-07-25 15:25:02.692742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.690 qpair failed and we were unable to recover it.
00:29:10.690 [2024-07-25 15:25:02.693326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.690 [2024-07-25 15:25:02.693335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.690 qpair failed and we were unable to recover it.
00:29:10.690 [2024-07-25 15:25:02.693707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.690 [2024-07-25 15:25:02.693714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.690 qpair failed and we were unable to recover it.
00:29:10.690 [2024-07-25 15:25:02.694029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.690 [2024-07-25 15:25:02.694038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.690 qpair failed and we were unable to recover it.
00:29:10.690 [2024-07-25 15:25:02.694393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.691 [2024-07-25 15:25:02.694402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.691 qpair failed and we were unable to recover it.
00:29:10.691 [2024-07-25 15:25:02.694860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.691 [2024-07-25 15:25:02.694869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.691 qpair failed and we were unable to recover it.
00:29:10.691 [2024-07-25 15:25:02.695371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.691 [2024-07-25 15:25:02.695385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.691 qpair failed and we were unable to recover it.
00:29:10.691 [2024-07-25 15:25:02.695863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.691 [2024-07-25 15:25:02.695869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.691 qpair failed and we were unable to recover it.
00:29:10.691 [2024-07-25 15:25:02.696338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.691 [2024-07-25 15:25:02.696345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.691 qpair failed and we were unable to recover it.
00:29:10.691 [2024-07-25 15:25:02.696808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.691 [2024-07-25 15:25:02.696815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.691 qpair failed and we were unable to recover it.
00:29:10.691 [2024-07-25 15:25:02.697267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.691 [2024-07-25 15:25:02.697275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.691 qpair failed and we were unable to recover it.
00:29:10.691 [2024-07-25 15:25:02.697659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.691 [2024-07-25 15:25:02.697666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.691 qpair failed and we were unable to recover it.
00:29:10.691 [2024-07-25 15:25:02.698122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.691 [2024-07-25 15:25:02.698129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.691 qpair failed and we were unable to recover it.
00:29:10.691 [2024-07-25 15:25:02.698639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.691 [2024-07-25 15:25:02.698646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.691 qpair failed and we were unable to recover it.
00:29:10.691 [2024-07-25 15:25:02.698990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.691 [2024-07-25 15:25:02.698996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.691 qpair failed and we were unable to recover it.
00:29:10.691 [2024-07-25 15:25:02.699430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.691 [2024-07-25 15:25:02.699438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.691 qpair failed and we were unable to recover it.
00:29:10.691 [2024-07-25 15:25:02.699880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.691 [2024-07-25 15:25:02.699886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.691 qpair failed and we were unable to recover it.
00:29:10.691 [2024-07-25 15:25:02.700445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.691 [2024-07-25 15:25:02.700472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.691 qpair failed and we were unable to recover it.
00:29:10.691 [2024-07-25 15:25:02.700931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.691 [2024-07-25 15:25:02.700939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.691 qpair failed and we were unable to recover it.
00:29:10.691 [2024-07-25 15:25:02.701497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.691 [2024-07-25 15:25:02.701525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.691 qpair failed and we were unable to recover it.
00:29:10.691 [2024-07-25 15:25:02.701974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.691 [2024-07-25 15:25:02.701982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.691 qpair failed and we were unable to recover it.
00:29:10.691 [2024-07-25 15:25:02.702609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.691 [2024-07-25 15:25:02.702637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.691 qpair failed and we were unable to recover it.
00:29:10.691 [2024-07-25 15:25:02.703163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.691 [2024-07-25 15:25:02.703171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.691 qpair failed and we were unable to recover it.
00:29:10.691 [2024-07-25 15:25:02.703662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.691 [2024-07-25 15:25:02.703670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.691 qpair failed and we were unable to recover it.
00:29:10.691 [2024-07-25 15:25:02.704111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.691 [2024-07-25 15:25:02.704118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.691 qpair failed and we were unable to recover it.
00:29:10.691 [2024-07-25 15:25:02.704497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.691 [2024-07-25 15:25:02.704526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.691 qpair failed and we were unable to recover it.
00:29:10.691 [2024-07-25 15:25:02.704890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.691 [2024-07-25 15:25:02.704898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.691 qpair failed and we were unable to recover it.
00:29:10.691 [2024-07-25 15:25:02.705340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.691 [2024-07-25 15:25:02.705348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.691 qpair failed and we were unable to recover it.
00:29:10.691 [2024-07-25 15:25:02.705829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.691 [2024-07-25 15:25:02.705836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.691 qpair failed and we were unable to recover it.
00:29:10.691 [2024-07-25 15:25:02.706284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.691 [2024-07-25 15:25:02.706291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.691 qpair failed and we were unable to recover it.
00:29:10.691 [2024-07-25 15:25:02.706758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.691 [2024-07-25 15:25:02.706764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.691 qpair failed and we were unable to recover it.
00:29:10.691 [2024-07-25 15:25:02.707209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.691 [2024-07-25 15:25:02.707216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.691 qpair failed and we were unable to recover it.
00:29:10.691 [2024-07-25 15:25:02.707693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.691 [2024-07-25 15:25:02.707700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.691 qpair failed and we were unable to recover it.
00:29:10.691 [2024-07-25 15:25:02.708130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.691 [2024-07-25 15:25:02.708137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.691 qpair failed and we were unable to recover it.
00:29:10.691 [2024-07-25 15:25:02.708631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.691 [2024-07-25 15:25:02.708638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.691 qpair failed and we were unable to recover it.
00:29:10.691 [2024-07-25 15:25:02.708976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.691 [2024-07-25 15:25:02.708983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.691 qpair failed and we were unable to recover it.
00:29:10.691 [2024-07-25 15:25:02.709454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.691 [2024-07-25 15:25:02.709461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.691 qpair failed and we were unable to recover it.
00:29:10.691 [2024-07-25 15:25:02.709936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.691 [2024-07-25 15:25:02.709942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.691 qpair failed and we were unable to recover it.
00:29:10.691 [2024-07-25 15:25:02.710487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.691 [2024-07-25 15:25:02.710514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.691 qpair failed and we were unable to recover it.
00:29:10.691 [2024-07-25 15:25:02.710983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.691 [2024-07-25 15:25:02.710995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.692 qpair failed and we were unable to recover it.
00:29:10.692 [2024-07-25 15:25:02.711577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.692 [2024-07-25 15:25:02.711604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.692 qpair failed and we were unable to recover it.
00:29:10.692 [2024-07-25 15:25:02.712053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.692 [2024-07-25 15:25:02.712061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.692 qpair failed and we were unable to recover it.
00:29:10.692 [2024-07-25 15:25:02.712604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.692 [2024-07-25 15:25:02.712632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.692 qpair failed and we were unable to recover it.
00:29:10.692 [2024-07-25 15:25:02.713084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.692 [2024-07-25 15:25:02.713092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.692 qpair failed and we were unable to recover it.
00:29:10.692 [2024-07-25 15:25:02.713559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.692 [2024-07-25 15:25:02.713566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.692 qpair failed and we were unable to recover it.
00:29:10.692 [2024-07-25 15:25:02.714007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.692 [2024-07-25 15:25:02.714014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.692 qpair failed and we were unable to recover it.
00:29:10.692 [2024-07-25 15:25:02.714415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.692 [2024-07-25 15:25:02.714442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.692 qpair failed and we were unable to recover it.
00:29:10.692 [2024-07-25 15:25:02.714944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.692 [2024-07-25 15:25:02.714953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.692 qpair failed and we were unable to recover it.
00:29:10.692 [2024-07-25 15:25:02.715527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.692 [2024-07-25 15:25:02.715555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.692 qpair failed and we were unable to recover it.
00:29:10.692 [2024-07-25 15:25:02.716019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.692 [2024-07-25 15:25:02.716028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.692 qpair failed and we were unable to recover it.
00:29:10.692 [2024-07-25 15:25:02.716580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.692 [2024-07-25 15:25:02.716608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.692 qpair failed and we were unable to recover it.
00:29:10.692 [2024-07-25 15:25:02.717066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.692 [2024-07-25 15:25:02.717074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.692 qpair failed and we were unable to recover it.
00:29:10.692 [2024-07-25 15:25:02.717625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.692 [2024-07-25 15:25:02.717653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.692 qpair failed and we were unable to recover it.
00:29:10.692 [2024-07-25 15:25:02.718101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.692 [2024-07-25 15:25:02.718109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.692 qpair failed and we were unable to recover it.
00:29:10.692 [2024-07-25 15:25:02.718675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.692 [2024-07-25 15:25:02.718702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.692 qpair failed and we were unable to recover it.
00:29:10.692 [2024-07-25 15:25:02.719161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.692 [2024-07-25 15:25:02.719170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.692 qpair failed and we were unable to recover it.
00:29:10.692 [2024-07-25 15:25:02.719700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.692 [2024-07-25 15:25:02.719727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.692 qpair failed and we were unable to recover it.
00:29:10.692 [2024-07-25 15:25:02.720174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.692 [2024-07-25 15:25:02.720182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.692 qpair failed and we were unable to recover it.
00:29:10.692 [2024-07-25 15:25:02.720718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.692 [2024-07-25 15:25:02.720746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.692 qpair failed and we were unable to recover it.
00:29:10.692 [2024-07-25 15:25:02.721198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.692 [2024-07-25 15:25:02.721213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.692 qpair failed and we were unable to recover it.
00:29:10.692 [2024-07-25 15:25:02.721796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.692 [2024-07-25 15:25:02.721823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.692 qpair failed and we were unable to recover it.
00:29:10.692 [2024-07-25 15:25:02.722416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.692 [2024-07-25 15:25:02.722444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.692 qpair failed and we were unable to recover it.
00:29:10.692 [2024-07-25 15:25:02.722881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.692 [2024-07-25 15:25:02.722890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.692 qpair failed and we were unable to recover it.
00:29:10.692 [2024-07-25 15:25:02.723432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.692 [2024-07-25 15:25:02.723460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.692 qpair failed and we were unable to recover it.
00:29:10.692 [2024-07-25 15:25:02.723906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.692 [2024-07-25 15:25:02.723915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.692 qpair failed and we were unable to recover it.
00:29:10.692 [2024-07-25 15:25:02.724453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.692 [2024-07-25 15:25:02.724480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.692 qpair failed and we were unable to recover it.
00:29:10.692 [2024-07-25 15:25:02.724928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.692 [2024-07-25 15:25:02.724937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.692 qpair failed and we were unable to recover it.
00:29:10.692 [2024-07-25 15:25:02.725482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.692 [2024-07-25 15:25:02.725509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.692 qpair failed and we were unable to recover it. 00:29:10.692 [2024-07-25 15:25:02.725985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.692 [2024-07-25 15:25:02.725993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.692 qpair failed and we were unable to recover it. 00:29:10.692 [2024-07-25 15:25:02.726553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.692 [2024-07-25 15:25:02.726581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.692 qpair failed and we were unable to recover it. 00:29:10.692 [2024-07-25 15:25:02.727048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.693 [2024-07-25 15:25:02.727057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.693 qpair failed and we were unable to recover it. 00:29:10.693 [2024-07-25 15:25:02.727612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.693 [2024-07-25 15:25:02.727640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.693 qpair failed and we were unable to recover it. 
00:29:10.693 [2024-07-25 15:25:02.728163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.693 [2024-07-25 15:25:02.728172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.693 qpair failed and we were unable to recover it. 00:29:10.693 [2024-07-25 15:25:02.728745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.693 [2024-07-25 15:25:02.728772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.693 qpair failed and we were unable to recover it. 00:29:10.693 [2024-07-25 15:25:02.729417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.693 [2024-07-25 15:25:02.729445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.693 qpair failed and we were unable to recover it. 00:29:10.693 [2024-07-25 15:25:02.729887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.693 [2024-07-25 15:25:02.729896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.693 qpair failed and we were unable to recover it. 00:29:10.693 [2024-07-25 15:25:02.730408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.693 [2024-07-25 15:25:02.730436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.693 qpair failed and we were unable to recover it. 
00:29:10.693 [2024-07-25 15:25:02.730883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.693 [2024-07-25 15:25:02.730891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.693 qpair failed and we were unable to recover it. 00:29:10.693 [2024-07-25 15:25:02.731430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.693 [2024-07-25 15:25:02.731457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.693 qpair failed and we were unable to recover it. 00:29:10.693 [2024-07-25 15:25:02.731911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.693 [2024-07-25 15:25:02.731923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.693 qpair failed and we were unable to recover it. 00:29:10.693 [2024-07-25 15:25:02.732138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.693 [2024-07-25 15:25:02.732149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.693 qpair failed and we were unable to recover it. 00:29:10.693 [2024-07-25 15:25:02.732477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.693 [2024-07-25 15:25:02.732486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.693 qpair failed and we were unable to recover it. 
00:29:10.693 [2024-07-25 15:25:02.732916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.693 [2024-07-25 15:25:02.732923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.693 qpair failed and we were unable to recover it. 00:29:10.693 [2024-07-25 15:25:02.733357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.693 [2024-07-25 15:25:02.733364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.693 qpair failed and we were unable to recover it. 00:29:10.693 [2024-07-25 15:25:02.733822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.693 [2024-07-25 15:25:02.733830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.693 qpair failed and we were unable to recover it. 00:29:10.693 [2024-07-25 15:25:02.734045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.693 [2024-07-25 15:25:02.734056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.693 qpair failed and we were unable to recover it. 00:29:10.693 [2024-07-25 15:25:02.734398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.693 [2024-07-25 15:25:02.734406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.693 qpair failed and we were unable to recover it. 
00:29:10.693 [2024-07-25 15:25:02.734894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.693 [2024-07-25 15:25:02.734902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.693 qpair failed and we were unable to recover it. 00:29:10.693 [2024-07-25 15:25:02.735121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.693 [2024-07-25 15:25:02.735130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.693 qpair failed and we were unable to recover it. 00:29:10.693 [2024-07-25 15:25:02.735576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.693 [2024-07-25 15:25:02.735583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.693 qpair failed and we were unable to recover it. 00:29:10.693 [2024-07-25 15:25:02.735936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.693 [2024-07-25 15:25:02.735943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.693 qpair failed and we were unable to recover it. 00:29:10.693 [2024-07-25 15:25:02.736428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.693 [2024-07-25 15:25:02.736435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.693 qpair failed and we were unable to recover it. 
00:29:10.693 [2024-07-25 15:25:02.736655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.693 [2024-07-25 15:25:02.736664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.693 qpair failed and we were unable to recover it. 00:29:10.693 [2024-07-25 15:25:02.737137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.693 [2024-07-25 15:25:02.737144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.693 qpair failed and we were unable to recover it. 00:29:10.693 [2024-07-25 15:25:02.737601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.693 [2024-07-25 15:25:02.737607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.693 qpair failed and we were unable to recover it. 00:29:10.693 [2024-07-25 15:25:02.738045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.693 [2024-07-25 15:25:02.738051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.693 qpair failed and we were unable to recover it. 00:29:10.693 [2024-07-25 15:25:02.738589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.693 [2024-07-25 15:25:02.738616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.693 qpair failed and we were unable to recover it. 
00:29:10.693 [2024-07-25 15:25:02.739120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.693 [2024-07-25 15:25:02.739129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.693 qpair failed and we were unable to recover it. 00:29:10.693 [2024-07-25 15:25:02.739644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.693 [2024-07-25 15:25:02.739651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.693 qpair failed and we were unable to recover it. 00:29:10.693 [2024-07-25 15:25:02.740083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.693 [2024-07-25 15:25:02.740090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.693 qpair failed and we were unable to recover it. 00:29:10.693 [2024-07-25 15:25:02.740610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.693 [2024-07-25 15:25:02.740617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.693 qpair failed and we were unable to recover it. 00:29:10.693 [2024-07-25 15:25:02.741059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.693 [2024-07-25 15:25:02.741066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.693 qpair failed and we were unable to recover it. 
00:29:10.693 [2024-07-25 15:25:02.741517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.693 [2024-07-25 15:25:02.741545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.693 qpair failed and we were unable to recover it. 00:29:10.693 [2024-07-25 15:25:02.742000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.693 [2024-07-25 15:25:02.742009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.693 qpair failed and we were unable to recover it. 00:29:10.693 [2024-07-25 15:25:02.742610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.693 [2024-07-25 15:25:02.742638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.693 qpair failed and we were unable to recover it. 00:29:10.694 [2024-07-25 15:25:02.742859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.694 [2024-07-25 15:25:02.742871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.694 qpair failed and we were unable to recover it. 00:29:10.694 [2024-07-25 15:25:02.743421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.694 [2024-07-25 15:25:02.743449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.694 qpair failed and we were unable to recover it. 
00:29:10.694 [2024-07-25 15:25:02.743900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.694 [2024-07-25 15:25:02.743909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.694 qpair failed and we were unable to recover it. 00:29:10.694 [2024-07-25 15:25:02.744388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.694 [2024-07-25 15:25:02.744395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.694 qpair failed and we were unable to recover it. 00:29:10.694 [2024-07-25 15:25:02.744825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.694 [2024-07-25 15:25:02.744832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.694 qpair failed and we were unable to recover it. 00:29:10.694 [2024-07-25 15:25:02.744953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.694 [2024-07-25 15:25:02.744964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.694 qpair failed and we were unable to recover it. 
00:29:10.694 Read completed with error (sct=0, sc=8) 00:29:10.694 starting I/O failed 00:29:10.694 Read completed with error (sct=0, sc=8) 00:29:10.694 starting I/O failed 00:29:10.694 Read completed with error (sct=0, sc=8) 00:29:10.694 starting I/O failed 00:29:10.694 Read completed with error (sct=0, sc=8) 00:29:10.694 starting I/O failed 00:29:10.694 Read completed with error (sct=0, sc=8) 00:29:10.694 starting I/O failed 00:29:10.694 Read completed with error (sct=0, sc=8) 00:29:10.694 starting I/O failed 00:29:10.694 Read completed with error (sct=0, sc=8) 00:29:10.694 starting I/O failed 00:29:10.694 Read completed with error (sct=0, sc=8) 00:29:10.694 starting I/O failed 00:29:10.694 Read completed with error (sct=0, sc=8) 00:29:10.694 starting I/O failed 00:29:10.694 Read completed with error (sct=0, sc=8) 00:29:10.694 starting I/O failed 00:29:10.694 Read completed with error (sct=0, sc=8) 00:29:10.694 starting I/O failed 00:29:10.694 Read completed with error (sct=0, sc=8) 00:29:10.694 starting I/O failed 00:29:10.694 Read completed with error (sct=0, sc=8) 00:29:10.694 starting I/O failed 00:29:10.694 Read completed with error (sct=0, sc=8) 00:29:10.694 starting I/O failed 00:29:10.694 Read completed with error (sct=0, sc=8) 00:29:10.694 starting I/O failed 00:29:10.694 Write completed with error (sct=0, sc=8) 00:29:10.694 starting I/O failed 00:29:10.694 Read completed with error (sct=0, sc=8) 00:29:10.694 starting I/O failed 00:29:10.694 Write completed with error (sct=0, sc=8) 00:29:10.694 starting I/O failed 00:29:10.694 Write completed with error (sct=0, sc=8) 00:29:10.694 starting I/O failed 00:29:10.694 Write completed with error (sct=0, sc=8) 00:29:10.694 starting I/O failed 00:29:10.694 Write completed with error (sct=0, sc=8) 00:29:10.694 starting I/O failed 00:29:10.694 Read completed with error (sct=0, sc=8) 00:29:10.694 starting I/O failed 00:29:10.694 Write completed with error (sct=0, sc=8) 00:29:10.694 starting I/O failed 00:29:10.694 
Read completed with error (sct=0, sc=8) 00:29:10.694 starting I/O failed 00:29:10.694 Read completed with error (sct=0, sc=8) 00:29:10.694 starting I/O failed 00:29:10.694 Read completed with error (sct=0, sc=8) 00:29:10.694 starting I/O failed 00:29:10.694 Write completed with error (sct=0, sc=8) 00:29:10.694 starting I/O failed 00:29:10.694 Read completed with error (sct=0, sc=8) 00:29:10.694 starting I/O failed 00:29:10.694 Write completed with error (sct=0, sc=8) 00:29:10.694 starting I/O failed 00:29:10.694 Write completed with error (sct=0, sc=8) 00:29:10.694 starting I/O failed 00:29:10.694 Read completed with error (sct=0, sc=8) 00:29:10.694 starting I/O failed 00:29:10.694 Read completed with error (sct=0, sc=8) 00:29:10.694 starting I/O failed 00:29:10.694 [2024-07-25 15:25:02.745700] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:10.694 [2024-07-25 15:25:02.746397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.694 [2024-07-25 15:25:02.746485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc98000b90 with addr=10.0.0.2, port=4420 00:29:10.694 qpair failed and we were unable to recover it. 00:29:10.694 [2024-07-25 15:25:02.747055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.694 [2024-07-25 15:25:02.747089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc98000b90 with addr=10.0.0.2, port=4420 00:29:10.694 qpair failed and we were unable to recover it. 00:29:10.694 [2024-07-25 15:25:02.747687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.694 [2024-07-25 15:25:02.747776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc98000b90 with addr=10.0.0.2, port=4420 00:29:10.694 qpair failed and we were unable to recover it. 
00:29:10.694 [2024-07-25 15:25:02.748462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.694 [2024-07-25 15:25:02.748549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc98000b90 with addr=10.0.0.2, port=4420 00:29:10.694 qpair failed and we were unable to recover it. 00:29:10.694 [2024-07-25 15:25:02.749013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.694 [2024-07-25 15:25:02.749048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc98000b90 with addr=10.0.0.2, port=4420 00:29:10.694 qpair failed and we were unable to recover it. 00:29:10.694 [2024-07-25 15:25:02.749648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.694 [2024-07-25 15:25:02.749737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc98000b90 with addr=10.0.0.2, port=4420 00:29:10.694 qpair failed and we were unable to recover it. 00:29:10.694 [2024-07-25 15:25:02.750473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.694 [2024-07-25 15:25:02.750560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc98000b90 with addr=10.0.0.2, port=4420 00:29:10.694 qpair failed and we were unable to recover it. 00:29:10.694 [2024-07-25 15:25:02.751170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.694 [2024-07-25 15:25:02.751220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc98000b90 with addr=10.0.0.2, port=4420 00:29:10.694 qpair failed and we were unable to recover it. 
00:29:10.694 [2024-07-25 15:25:02.751721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.694 [2024-07-25 15:25:02.751750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc98000b90 with addr=10.0.0.2, port=4420 00:29:10.694 qpair failed and we were unable to recover it. 00:29:10.694 [2024-07-25 15:25:02.752413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.694 [2024-07-25 15:25:02.752500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc98000b90 with addr=10.0.0.2, port=4420 00:29:10.694 qpair failed and we were unable to recover it. 00:29:10.694 [2024-07-25 15:25:02.753075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.694 [2024-07-25 15:25:02.753110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc98000b90 with addr=10.0.0.2, port=4420 00:29:10.694 qpair failed and we were unable to recover it. 00:29:10.694 [2024-07-25 15:25:02.753490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.694 [2024-07-25 15:25:02.753520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc98000b90 with addr=10.0.0.2, port=4420 00:29:10.694 qpair failed and we were unable to recover it. 00:29:10.694 [2024-07-25 15:25:02.753999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.694 [2024-07-25 15:25:02.754029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc98000b90 with addr=10.0.0.2, port=4420 00:29:10.694 qpair failed and we were unable to recover it. 
00:29:10.694 [2024-07-25 15:25:02.754682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.694 [2024-07-25 15:25:02.754769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc98000b90 with addr=10.0.0.2, port=4420 00:29:10.694 qpair failed and we were unable to recover it. 00:29:10.694 [2024-07-25 15:25:02.755420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.694 [2024-07-25 15:25:02.755508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc98000b90 with addr=10.0.0.2, port=4420 00:29:10.694 qpair failed and we were unable to recover it. 00:29:10.694 [2024-07-25 15:25:02.755963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.694 [2024-07-25 15:25:02.755998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc98000b90 with addr=10.0.0.2, port=4420 00:29:10.694 qpair failed and we were unable to recover it. 00:29:10.694 [2024-07-25 15:25:02.756538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.694 [2024-07-25 15:25:02.756570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc98000b90 with addr=10.0.0.2, port=4420 00:29:10.694 qpair failed and we were unable to recover it. 00:29:10.694 [2024-07-25 15:25:02.757057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.694 [2024-07-25 15:25:02.757085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc98000b90 with addr=10.0.0.2, port=4420 00:29:10.694 qpair failed and we were unable to recover it. 
00:29:10.694 [2024-07-25 15:25:02.757572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.695 [2024-07-25 15:25:02.757601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc98000b90 with addr=10.0.0.2, port=4420 00:29:10.695 qpair failed and we were unable to recover it. 00:29:10.695 [2024-07-25 15:25:02.758100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.695 [2024-07-25 15:25:02.758128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc98000b90 with addr=10.0.0.2, port=4420 00:29:10.695 qpair failed and we were unable to recover it. 00:29:10.695 [2024-07-25 15:25:02.758595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.695 [2024-07-25 15:25:02.758624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc98000b90 with addr=10.0.0.2, port=4420 00:29:10.695 qpair failed and we were unable to recover it. 00:29:10.695 [2024-07-25 15:25:02.759118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.695 [2024-07-25 15:25:02.759145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc98000b90 with addr=10.0.0.2, port=4420 00:29:10.695 qpair failed and we were unable to recover it. 00:29:10.695 [2024-07-25 15:25:02.759512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.695 [2024-07-25 15:25:02.759541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc98000b90 with addr=10.0.0.2, port=4420 00:29:10.695 qpair failed and we were unable to recover it. 
00:29:10.695 [2024-07-25 15:25:02.760031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.695 [2024-07-25 15:25:02.760058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc98000b90 with addr=10.0.0.2, port=4420 00:29:10.695 qpair failed and we were unable to recover it. 00:29:10.695 [2024-07-25 15:25:02.760541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.695 [2024-07-25 15:25:02.760569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc98000b90 with addr=10.0.0.2, port=4420 00:29:10.695 qpair failed and we were unable to recover it. 00:29:10.695 [2024-07-25 15:25:02.761043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.695 [2024-07-25 15:25:02.761071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc98000b90 with addr=10.0.0.2, port=4420 00:29:10.695 qpair failed and we were unable to recover it. 00:29:10.695 [2024-07-25 15:25:02.761581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.695 [2024-07-25 15:25:02.761609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc98000b90 with addr=10.0.0.2, port=4420 00:29:10.695 qpair failed and we were unable to recover it. 00:29:10.695 [2024-07-25 15:25:02.762112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.695 [2024-07-25 15:25:02.762140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc98000b90 with addr=10.0.0.2, port=4420 00:29:10.695 qpair failed and we were unable to recover it. 
00:29:10.695 [2024-07-25 15:25:02.762615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.695 [2024-07-25 15:25:02.762650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc98000b90 with addr=10.0.0.2, port=4420 00:29:10.695 qpair failed and we were unable to recover it. 00:29:10.695 [2024-07-25 15:25:02.763093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.695 [2024-07-25 15:25:02.763120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc98000b90 with addr=10.0.0.2, port=4420 00:29:10.695 qpair failed and we were unable to recover it. 00:29:10.695 [2024-07-25 15:25:02.763593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.695 [2024-07-25 15:25:02.763622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc98000b90 with addr=10.0.0.2, port=4420 00:29:10.695 qpair failed and we were unable to recover it. 00:29:10.695 [2024-07-25 15:25:02.764120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.695 [2024-07-25 15:25:02.764147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc98000b90 with addr=10.0.0.2, port=4420 00:29:10.695 qpair failed and we were unable to recover it. 00:29:10.695 [2024-07-25 15:25:02.764680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.695 [2024-07-25 15:25:02.764709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc98000b90 with addr=10.0.0.2, port=4420 00:29:10.695 qpair failed and we were unable to recover it. 
00:29:10.695 [2024-07-25 15:25:02.765261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.695 [2024-07-25 15:25:02.765303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc98000b90 with addr=10.0.0.2, port=4420 00:29:10.695 qpair failed and we were unable to recover it. 00:29:10.695 [2024-07-25 15:25:02.765783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.695 [2024-07-25 15:25:02.765811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc98000b90 with addr=10.0.0.2, port=4420 00:29:10.695 qpair failed and we were unable to recover it. 00:29:10.695 [2024-07-25 15:25:02.766282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.695 [2024-07-25 15:25:02.766310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc98000b90 with addr=10.0.0.2, port=4420 00:29:10.695 qpair failed and we were unable to recover it. 00:29:10.695 [2024-07-25 15:25:02.766717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.695 [2024-07-25 15:25:02.766755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc98000b90 with addr=10.0.0.2, port=4420 00:29:10.695 qpair failed and we were unable to recover it. 00:29:10.695 [2024-07-25 15:25:02.767242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.695 [2024-07-25 15:25:02.767272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc98000b90 with addr=10.0.0.2, port=4420 00:29:10.695 qpair failed and we were unable to recover it. 
00:29:10.695 [2024-07-25 15:25:02.767766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.695 [2024-07-25 15:25:02.767793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc98000b90 with addr=10.0.0.2, port=4420 00:29:10.695 qpair failed and we were unable to recover it. 00:29:10.695 [2024-07-25 15:25:02.768307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.695 [2024-07-25 15:25:02.768335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc98000b90 with addr=10.0.0.2, port=4420 00:29:10.695 qpair failed and we were unable to recover it. 00:29:10.695 [2024-07-25 15:25:02.768805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.695 [2024-07-25 15:25:02.768833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc98000b90 with addr=10.0.0.2, port=4420 00:29:10.695 qpair failed and we were unable to recover it. 00:29:10.695 [2024-07-25 15:25:02.769306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.695 [2024-07-25 15:25:02.769334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc98000b90 with addr=10.0.0.2, port=4420 00:29:10.695 qpair failed and we were unable to recover it. 00:29:10.695 [2024-07-25 15:25:02.769823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.695 [2024-07-25 15:25:02.769851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc98000b90 with addr=10.0.0.2, port=4420 00:29:10.695 qpair failed and we were unable to recover it. 
00:29:10.695 [2024-07-25 15:25:02.770326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.695 [2024-07-25 15:25:02.770354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc98000b90 with addr=10.0.0.2, port=4420 00:29:10.695 qpair failed and we were unable to recover it. 00:29:10.695 [2024-07-25 15:25:02.770851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.695 [2024-07-25 15:25:02.770878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc98000b90 with addr=10.0.0.2, port=4420 00:29:10.695 qpair failed and we were unable to recover it. 00:29:10.695 [2024-07-25 15:25:02.771375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.695 [2024-07-25 15:25:02.771403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc98000b90 with addr=10.0.0.2, port=4420 00:29:10.695 qpair failed and we were unable to recover it. 00:29:10.695 [2024-07-25 15:25:02.771897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.695 [2024-07-25 15:25:02.771926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc98000b90 with addr=10.0.0.2, port=4420 00:29:10.695 qpair failed and we were unable to recover it. 00:29:10.695 [2024-07-25 15:25:02.772379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.695 [2024-07-25 15:25:02.772408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc98000b90 with addr=10.0.0.2, port=4420 00:29:10.695 qpair failed and we were unable to recover it. 
00:29:10.695 [2024-07-25 15:25:02.772909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.695 [2024-07-25 15:25:02.772937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc98000b90 with addr=10.0.0.2, port=4420 00:29:10.695 qpair failed and we were unable to recover it. 00:29:10.695 [2024-07-25 15:25:02.773410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.695 [2024-07-25 15:25:02.773440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc98000b90 with addr=10.0.0.2, port=4420 00:29:10.695 qpair failed and we were unable to recover it. 00:29:10.695 [2024-07-25 15:25:02.774022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.695 [2024-07-25 15:25:02.774050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc98000b90 with addr=10.0.0.2, port=4420 00:29:10.695 qpair failed and we were unable to recover it. 00:29:10.695 [2024-07-25 15:25:02.774528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.695 [2024-07-25 15:25:02.774557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc98000b90 with addr=10.0.0.2, port=4420 00:29:10.695 qpair failed and we were unable to recover it. 00:29:10.695 [2024-07-25 15:25:02.775049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.695 [2024-07-25 15:25:02.775076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc98000b90 with addr=10.0.0.2, port=4420 00:29:10.695 qpair failed and we were unable to recover it. 
00:29:10.695 [2024-07-25 15:25:02.775570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.695 [2024-07-25 15:25:02.775601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc98000b90 with addr=10.0.0.2, port=4420 00:29:10.695 qpair failed and we were unable to recover it. 00:29:10.695 [2024-07-25 15:25:02.776102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.696 [2024-07-25 15:25:02.776130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc98000b90 with addr=10.0.0.2, port=4420 00:29:10.696 qpair failed and we were unable to recover it. 00:29:10.696 [2024-07-25 15:25:02.776648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.696 [2024-07-25 15:25:02.776676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc98000b90 with addr=10.0.0.2, port=4420 00:29:10.696 qpair failed and we were unable to recover it. 00:29:10.696 [2024-07-25 15:25:02.777147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.696 [2024-07-25 15:25:02.777175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc98000b90 with addr=10.0.0.2, port=4420 00:29:10.696 qpair failed and we were unable to recover it. 00:29:10.696 [2024-07-25 15:25:02.777742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.696 [2024-07-25 15:25:02.777770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc98000b90 with addr=10.0.0.2, port=4420 00:29:10.696 qpair failed and we were unable to recover it. 
00:29:10.696 [2024-07-25 15:25:02.778403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.696 [2024-07-25 15:25:02.778491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc98000b90 with addr=10.0.0.2, port=4420 00:29:10.696 qpair failed and we were unable to recover it. 00:29:10.696 [2024-07-25 15:25:02.779042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.696 [2024-07-25 15:25:02.779079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc98000b90 with addr=10.0.0.2, port=4420 00:29:10.696 qpair failed and we were unable to recover it. 00:29:10.696 [2024-07-25 15:25:02.779584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.696 [2024-07-25 15:25:02.779615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc98000b90 with addr=10.0.0.2, port=4420 00:29:10.696 qpair failed and we were unable to recover it. 00:29:10.696 [2024-07-25 15:25:02.780012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.696 [2024-07-25 15:25:02.780040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc98000b90 with addr=10.0.0.2, port=4420 00:29:10.696 qpair failed and we were unable to recover it. 00:29:10.696 [2024-07-25 15:25:02.780541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.696 [2024-07-25 15:25:02.780570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc98000b90 with addr=10.0.0.2, port=4420 00:29:10.696 qpair failed and we were unable to recover it. 
00:29:10.696 [2024-07-25 15:25:02.781064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.696 [2024-07-25 15:25:02.781091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc98000b90 with addr=10.0.0.2, port=4420 00:29:10.696 qpair failed and we were unable to recover it. 00:29:10.696 [2024-07-25 15:25:02.781587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.696 [2024-07-25 15:25:02.781616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc98000b90 with addr=10.0.0.2, port=4420 00:29:10.696 qpair failed and we were unable to recover it. 00:29:10.696 [2024-07-25 15:25:02.782116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.696 [2024-07-25 15:25:02.782144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc98000b90 with addr=10.0.0.2, port=4420 00:29:10.696 qpair failed and we were unable to recover it. 00:29:10.696 [2024-07-25 15:25:02.782634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.696 [2024-07-25 15:25:02.782663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc98000b90 with addr=10.0.0.2, port=4420 00:29:10.696 qpair failed and we were unable to recover it. 00:29:10.696 [2024-07-25 15:25:02.783169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.696 [2024-07-25 15:25:02.783198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc98000b90 with addr=10.0.0.2, port=4420 00:29:10.696 qpair failed and we were unable to recover it. 
00:29:10.696 [2024-07-25 15:25:02.783692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.696 [2024-07-25 15:25:02.783730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc98000b90 with addr=10.0.0.2, port=4420 00:29:10.696 qpair failed and we were unable to recover it. 00:29:10.696 [2024-07-25 15:25:02.784444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.696 [2024-07-25 15:25:02.784531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc98000b90 with addr=10.0.0.2, port=4420 00:29:10.696 qpair failed and we were unable to recover it. 00:29:10.696 [2024-07-25 15:25:02.785076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.696 [2024-07-25 15:25:02.785111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc98000b90 with addr=10.0.0.2, port=4420 00:29:10.696 qpair failed and we were unable to recover it. 00:29:10.696 [2024-07-25 15:25:02.785674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.696 [2024-07-25 15:25:02.785706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc98000b90 with addr=10.0.0.2, port=4420 00:29:10.696 qpair failed and we were unable to recover it. 00:29:10.696 [2024-07-25 15:25:02.786212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.696 [2024-07-25 15:25:02.786244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc98000b90 with addr=10.0.0.2, port=4420 00:29:10.696 qpair failed and we were unable to recover it. 
00:29:10.696 [2024-07-25 15:25:02.786667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.696 [2024-07-25 15:25:02.786695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc98000b90 with addr=10.0.0.2, port=4420 00:29:10.696 qpair failed and we were unable to recover it. 00:29:10.696 [2024-07-25 15:25:02.787182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.696 [2024-07-25 15:25:02.787216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc98000b90 with addr=10.0.0.2, port=4420 00:29:10.696 qpair failed and we were unable to recover it. 00:29:10.696 [2024-07-25 15:25:02.787725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.696 [2024-07-25 15:25:02.787752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc98000b90 with addr=10.0.0.2, port=4420 00:29:10.696 qpair failed and we were unable to recover it. 00:29:10.696 [2024-07-25 15:25:02.788452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.696 [2024-07-25 15:25:02.788542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc98000b90 with addr=10.0.0.2, port=4420 00:29:10.696 qpair failed and we were unable to recover it. 00:29:10.696 [2024-07-25 15:25:02.789164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.696 [2024-07-25 15:25:02.789219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc98000b90 with addr=10.0.0.2, port=4420 00:29:10.696 qpair failed and we were unable to recover it. 
00:29:10.696 [2024-07-25 15:25:02.789635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.696 [2024-07-25 15:25:02.789665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc98000b90 with addr=10.0.0.2, port=4420 00:29:10.696 qpair failed and we were unable to recover it. 00:29:10.696 [2024-07-25 15:25:02.790077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.696 [2024-07-25 15:25:02.790110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc98000b90 with addr=10.0.0.2, port=4420 00:29:10.696 qpair failed and we were unable to recover it. 00:29:10.696 [2024-07-25 15:25:02.790677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.696 [2024-07-25 15:25:02.790707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc98000b90 with addr=10.0.0.2, port=4420 00:29:10.696 qpair failed and we were unable to recover it. 00:29:10.696 [2024-07-25 15:25:02.791189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.696 [2024-07-25 15:25:02.791226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc98000b90 with addr=10.0.0.2, port=4420 00:29:10.696 qpair failed and we were unable to recover it. 00:29:10.696 [2024-07-25 15:25:02.791761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.696 [2024-07-25 15:25:02.791789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc98000b90 with addr=10.0.0.2, port=4420 00:29:10.696 qpair failed and we were unable to recover it. 
00:29:10.696 [2024-07-25 15:25:02.792445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.696 [2024-07-25 15:25:02.792533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc98000b90 with addr=10.0.0.2, port=4420 00:29:10.696 qpair failed and we were unable to recover it. 00:29:10.696 [2024-07-25 15:25:02.793104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.696 [2024-07-25 15:25:02.793140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc98000b90 with addr=10.0.0.2, port=4420 00:29:10.696 qpair failed and we were unable to recover it. 00:29:10.696 [2024-07-25 15:25:02.793643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.696 [2024-07-25 15:25:02.793731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc98000b90 with addr=10.0.0.2, port=4420 00:29:10.696 qpair failed and we were unable to recover it. 00:29:10.696 [2024-07-25 15:25:02.794404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.696 [2024-07-25 15:25:02.794491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc98000b90 with addr=10.0.0.2, port=4420 00:29:10.696 qpair failed and we were unable to recover it. 00:29:10.696 [2024-07-25 15:25:02.795086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.696 [2024-07-25 15:25:02.795123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc98000b90 with addr=10.0.0.2, port=4420 00:29:10.696 qpair failed and we were unable to recover it. 
00:29:10.696 [2024-07-25 15:25:02.795386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.696 [2024-07-25 15:25:02.795417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc98000b90 with addr=10.0.0.2, port=4420 00:29:10.696 qpair failed and we were unable to recover it. 00:29:10.696 [2024-07-25 15:25:02.795708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.696 [2024-07-25 15:25:02.795736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc98000b90 with addr=10.0.0.2, port=4420 00:29:10.697 qpair failed and we were unable to recover it. 00:29:10.697 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 435762 Killed "${NVMF_APP[@]}" "$@" 00:29:10.697 [2024-07-25 15:25:02.796223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.697 [2024-07-25 15:25:02.796253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc98000b90 with addr=10.0.0.2, port=4420 00:29:10.697 qpair failed and we were unable to recover it. 00:29:10.697 [2024-07-25 15:25:02.796758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.697 [2024-07-25 15:25:02.796786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc98000b90 with addr=10.0.0.2, port=4420 00:29:10.697 qpair failed and we were unable to recover it. 
00:29:10.697 15:25:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:29:10.697 [2024-07-25 15:25:02.797185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.697 [2024-07-25 15:25:02.797224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc98000b90 with addr=10.0.0.2, port=4420 00:29:10.697 qpair failed and we were unable to recover it. 00:29:10.697 15:25:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:29:10.697 15:25:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:10.697 [2024-07-25 15:25:02.797732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.697 [2024-07-25 15:25:02.797762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc98000b90 with addr=10.0.0.2, port=4420 00:29:10.697 qpair failed and we were unable to recover it. 00:29:10.697 15:25:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:10.697 15:25:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:10.697 [2024-07-25 15:25:02.798264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.697 [2024-07-25 15:25:02.798293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc98000b90 with addr=10.0.0.2, port=4420 00:29:10.697 qpair failed and we were unable to recover it. 
00:29:10.697 [2024-07-25 15:25:02.798784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.697 [2024-07-25 15:25:02.798812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc98000b90 with addr=10.0.0.2, port=4420 00:29:10.697 qpair failed and we were unable to recover it. 00:29:10.697 [2024-07-25 15:25:02.799302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.697 [2024-07-25 15:25:02.799331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc98000b90 with addr=10.0.0.2, port=4420 00:29:10.697 qpair failed and we were unable to recover it. 00:29:10.697 [2024-07-25 15:25:02.799835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.697 [2024-07-25 15:25:02.799863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc98000b90 with addr=10.0.0.2, port=4420 00:29:10.697 qpair failed and we were unable to recover it. 00:29:10.697 [2024-07-25 15:25:02.800373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.697 [2024-07-25 15:25:02.800405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc98000b90 with addr=10.0.0.2, port=4420 00:29:10.697 qpair failed and we were unable to recover it. 00:29:10.697 [2024-07-25 15:25:02.800653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.697 [2024-07-25 15:25:02.800680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc98000b90 with addr=10.0.0.2, port=4420 00:29:10.697 qpair failed and we were unable to recover it. 
00:29:10.697 [2024-07-25 15:25:02.801159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.697 [2024-07-25 15:25:02.801187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc98000b90 with addr=10.0.0.2, port=4420 00:29:10.697 qpair failed and we were unable to recover it. 00:29:10.697 [2024-07-25 15:25:02.801691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.697 [2024-07-25 15:25:02.801719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc98000b90 with addr=10.0.0.2, port=4420 00:29:10.697 qpair failed and we were unable to recover it. 00:29:10.697 [2024-07-25 15:25:02.802218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.697 [2024-07-25 15:25:02.802248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc98000b90 with addr=10.0.0.2, port=4420 00:29:10.697 qpair failed and we were unable to recover it. 00:29:10.697 [2024-07-25 15:25:02.802759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.697 [2024-07-25 15:25:02.802788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc98000b90 with addr=10.0.0.2, port=4420 00:29:10.697 qpair failed and we were unable to recover it. 00:29:10.697 [2024-07-25 15:25:02.803447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.697 [2024-07-25 15:25:02.803535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc98000b90 with addr=10.0.0.2, port=4420 00:29:10.697 qpair failed and we were unable to recover it. 
00:29:10.697 [2024-07-25 15:25:02.803999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.697 [2024-07-25 15:25:02.804035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc98000b90 with addr=10.0.0.2, port=4420 00:29:10.697 qpair failed and we were unable to recover it. 00:29:10.697 [2024-07-25 15:25:02.804541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.697 [2024-07-25 15:25:02.804572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc98000b90 with addr=10.0.0.2, port=4420 00:29:10.697 qpair failed and we were unable to recover it. 00:29:10.697 [2024-07-25 15:25:02.805078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.697 [2024-07-25 15:25:02.805107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc98000b90 with addr=10.0.0.2, port=4420 00:29:10.697 qpair failed and we were unable to recover it. 00:29:10.697 15:25:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=436620 00:29:10.697 [2024-07-25 15:25:02.805669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.697 [2024-07-25 15:25:02.805699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc98000b90 with addr=10.0.0.2, port=4420 00:29:10.697 qpair failed and we were unable to recover it. 
00:29:10.697 15:25:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 436620 00:29:10.697 [2024-07-25 15:25:02.806104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.697 [2024-07-25 15:25:02.806131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc98000b90 with addr=10.0.0.2, port=4420 00:29:10.697 qpair failed and we were unable to recover it. 00:29:10.697 15:25:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:29:10.697 15:25:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 436620 ']' 00:29:10.697 [2024-07-25 15:25:02.806542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.697 [2024-07-25 15:25:02.806570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc98000b90 with addr=10.0.0.2, port=4420 00:29:10.697 qpair failed and we were unable to recover it. 00:29:10.697 15:25:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:10.697 15:25:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:10.697 [2024-07-25 15:25:02.807096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.697 [2024-07-25 15:25:02.807123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc98000b90 with addr=10.0.0.2, port=4420 00:29:10.697 qpair failed and we were unable to recover it. 
00:29:10.697 15:25:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:10.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:10.697 15:25:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:10.697 [2024-07-25 15:25:02.807484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.697 [2024-07-25 15:25:02.807513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc98000b90 with addr=10.0.0.2, port=4420 00:29:10.697 qpair failed and we were unable to recover it. 00:29:10.697 15:25:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:10.697 [2024-07-25 15:25:02.807802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.698 [2024-07-25 15:25:02.807831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc98000b90 with addr=10.0.0.2, port=4420 00:29:10.698 qpair failed and we were unable to recover it. 00:29:10.698 [2024-07-25 15:25:02.808343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.698 [2024-07-25 15:25:02.808378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc98000b90 with addr=10.0.0.2, port=4420 00:29:10.698 qpair failed and we were unable to recover it. 00:29:10.698 [2024-07-25 15:25:02.808797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.698 [2024-07-25 15:25:02.808824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc98000b90 with addr=10.0.0.2, port=4420 00:29:10.698 qpair failed and we were unable to recover it. 
00:29:10.698 [2024-07-25 15:25:02.809377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.698 [2024-07-25 15:25:02.809405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc98000b90 with addr=10.0.0.2, port=4420 00:29:10.698 qpair failed and we were unable to recover it. 00:29:10.698 [2024-07-25 15:25:02.809950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.698 [2024-07-25 15:25:02.809979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc98000b90 with addr=10.0.0.2, port=4420 00:29:10.698 qpair failed and we were unable to recover it. 00:29:10.698 [2024-07-25 15:25:02.810511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.698 [2024-07-25 15:25:02.810541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc98000b90 with addr=10.0.0.2, port=4420 00:29:10.698 qpair failed and we were unable to recover it. 00:29:10.698 [2024-07-25 15:25:02.811066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.698 [2024-07-25 15:25:02.811094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc98000b90 with addr=10.0.0.2, port=4420 00:29:10.698 qpair failed and we were unable to recover it. 00:29:10.698 [2024-07-25 15:25:02.811457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.698 [2024-07-25 15:25:02.811487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc98000b90 with addr=10.0.0.2, port=4420 00:29:10.698 qpair failed and we were unable to recover it. 
00:29:10.698 [2024-07-25 15:25:02.811966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.698 [2024-07-25 15:25:02.811995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc98000b90 with addr=10.0.0.2, port=4420 00:29:10.698 qpair failed and we were unable to recover it. 00:29:10.698 [2024-07-25 15:25:02.812510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.698 [2024-07-25 15:25:02.812539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc98000b90 with addr=10.0.0.2, port=4420 00:29:10.698 qpair failed and we were unable to recover it. 00:29:10.698 [2024-07-25 15:25:02.813028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.698 [2024-07-25 15:25:02.813056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc98000b90 with addr=10.0.0.2, port=4420 00:29:10.698 qpair failed and we were unable to recover it. 00:29:10.698 [2024-07-25 15:25:02.813576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.698 [2024-07-25 15:25:02.813605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc98000b90 with addr=10.0.0.2, port=4420 00:29:10.698 qpair failed and we were unable to recover it. 00:29:10.698 [2024-07-25 15:25:02.813992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.698 [2024-07-25 15:25:02.814025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc98000b90 with addr=10.0.0.2, port=4420 00:29:10.698 qpair failed and we were unable to recover it. 
00:29:10.700 [2024-07-25 15:25:02.856311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.700 [2024-07-25 15:25:02.856339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc98000b90 with addr=10.0.0.2, port=4420
00:29:10.700 qpair failed and we were unable to recover it.
00:29:10.700 [2024-07-25 15:25:02.856846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.700 [2024-07-25 15:25:02.856873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc98000b90 with addr=10.0.0.2, port=4420
00:29:10.700 qpair failed and we were unable to recover it.
00:29:10.700 [2024-07-25 15:25:02.857407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.700 [2024-07-25 15:25:02.857436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc98000b90 with addr=10.0.0.2, port=4420
00:29:10.700 qpair failed and we were unable to recover it.
00:29:10.700 [2024-07-25 15:25:02.857933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.700 [2024-07-25 15:25:02.857960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc98000b90 with addr=10.0.0.2, port=4420
00:29:10.700 qpair failed and we were unable to recover it.
00:29:10.700 [2024-07-25 15:25:02.858494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.700 [2024-07-25 15:25:02.858521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc98000b90 with addr=10.0.0.2, port=4420
00:29:10.700 qpair failed and we were unable to recover it.
00:29:10.700 [2024-07-25 15:25:02.858949] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
00:29:10.700 [2024-07-25 15:25:02.859000] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:29:10.700 [2024-07-25 15:25:02.859015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.700 [2024-07-25 15:25:02.859042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc98000b90 with addr=10.0.0.2, port=4420
00:29:10.700 qpair failed and we were unable to recover it.
00:29:10.700 [2024-07-25 15:25:02.859557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.700 [2024-07-25 15:25:02.859586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc98000b90 with addr=10.0.0.2, port=4420
00:29:10.700 qpair failed and we were unable to recover it.
00:29:10.700 [2024-07-25 15:25:02.860091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.701 [2024-07-25 15:25:02.860117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc98000b90 with addr=10.0.0.2, port=4420
00:29:10.701 qpair failed and we were unable to recover it.
00:29:10.701 [2024-07-25 15:25:02.860526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.701 [2024-07-25 15:25:02.860556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc98000b90 with addr=10.0.0.2, port=4420
00:29:10.701 qpair failed and we were unable to recover it.
00:29:10.701 [2024-07-25 15:25:02.860969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.701 [2024-07-25 15:25:02.860998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc98000b90 with addr=10.0.0.2, port=4420
00:29:10.701 qpair failed and we were unable to recover it.
00:29:10.701 [2024-07-25 15:25:02.861518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.701 [2024-07-25 15:25:02.861546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc98000b90 with addr=10.0.0.2, port=4420
00:29:10.701 qpair failed and we were unable to recover it.
00:29:10.701 [2024-07-25 15:25:02.861949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.701 [2024-07-25 15:25:02.861977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc98000b90 with addr=10.0.0.2, port=4420
00:29:10.701 qpair failed and we were unable to recover it.
00:29:10.701 [2024-07-25 15:25:02.862468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.701 [2024-07-25 15:25:02.862558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc98000b90 with addr=10.0.0.2, port=4420
00:29:10.701 qpair failed and we were unable to recover it.
00:29:10.701 [2024-07-25 15:25:02.863183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.701 [2024-07-25 15:25:02.863233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc98000b90 with addr=10.0.0.2, port=4420
00:29:10.701 qpair failed and we were unable to recover it.
00:29:10.971 [2024-07-25 15:25:02.863752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.971 [2024-07-25 15:25:02.863782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc98000b90 with addr=10.0.0.2, port=4420 00:29:10.971 qpair failed and we were unable to recover it. 00:29:10.971 [2024-07-25 15:25:02.864455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.971 [2024-07-25 15:25:02.864543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc98000b90 with addr=10.0.0.2, port=4420 00:29:10.971 qpair failed and we were unable to recover it. 00:29:10.971 [2024-07-25 15:25:02.865073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.971 [2024-07-25 15:25:02.865098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.971 qpair failed and we were unable to recover it. 00:29:10.971 [2024-07-25 15:25:02.865683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.971 [2024-07-25 15:25:02.865693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.971 qpair failed and we were unable to recover it. 00:29:10.971 [2024-07-25 15:25:02.866157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.971 [2024-07-25 15:25:02.866164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.971 qpair failed and we were unable to recover it. 
00:29:10.971 [2024-07-25 15:25:02.866540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.971 [2024-07-25 15:25:02.866547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.971 qpair failed and we were unable to recover it. 00:29:10.971 [2024-07-25 15:25:02.867025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.971 [2024-07-25 15:25:02.867031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.971 qpair failed and we were unable to recover it. 00:29:10.971 [2024-07-25 15:25:02.867502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.971 [2024-07-25 15:25:02.867509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.971 qpair failed and we were unable to recover it. 00:29:10.971 [2024-07-25 15:25:02.867952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.972 [2024-07-25 15:25:02.867959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.972 qpair failed and we were unable to recover it. 00:29:10.972 [2024-07-25 15:25:02.868400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.972 [2024-07-25 15:25:02.868407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.972 qpair failed and we were unable to recover it. 
00:29:10.972 [2024-07-25 15:25:02.868899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.972 [2024-07-25 15:25:02.868906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.972 qpair failed and we were unable to recover it. 00:29:10.972 [2024-07-25 15:25:02.869222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.972 [2024-07-25 15:25:02.869229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.972 qpair failed and we were unable to recover it. 00:29:10.972 [2024-07-25 15:25:02.869729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.972 [2024-07-25 15:25:02.869736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.972 qpair failed and we were unable to recover it. 00:29:10.972 [2024-07-25 15:25:02.870179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.972 [2024-07-25 15:25:02.870186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.972 qpair failed and we were unable to recover it. 00:29:10.972 [2024-07-25 15:25:02.870700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.972 [2024-07-25 15:25:02.870707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.972 qpair failed and we were unable to recover it. 
00:29:10.972 [2024-07-25 15:25:02.871166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.972 [2024-07-25 15:25:02.871173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.972 qpair failed and we were unable to recover it. 00:29:10.972 [2024-07-25 15:25:02.871772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.972 [2024-07-25 15:25:02.871801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.972 qpair failed and we were unable to recover it. 00:29:10.972 [2024-07-25 15:25:02.872174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.972 [2024-07-25 15:25:02.872183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.972 qpair failed and we were unable to recover it. 00:29:10.972 [2024-07-25 15:25:02.872826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.972 [2024-07-25 15:25:02.872854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.972 qpair failed and we were unable to recover it. 00:29:10.972 [2024-07-25 15:25:02.873445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.972 [2024-07-25 15:25:02.873473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.972 qpair failed and we were unable to recover it. 
00:29:10.972 [2024-07-25 15:25:02.873976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.972 [2024-07-25 15:25:02.873985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.972 qpair failed and we were unable to recover it. 00:29:10.972 [2024-07-25 15:25:02.874542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.972 [2024-07-25 15:25:02.874574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.972 qpair failed and we were unable to recover it. 00:29:10.972 [2024-07-25 15:25:02.874810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.972 [2024-07-25 15:25:02.874819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.972 qpair failed and we were unable to recover it. 00:29:10.972 [2024-07-25 15:25:02.875313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.972 [2024-07-25 15:25:02.875321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.972 qpair failed and we were unable to recover it. 00:29:10.972 [2024-07-25 15:25:02.875687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.972 [2024-07-25 15:25:02.875695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.972 qpair failed and we were unable to recover it. 
00:29:10.972 [2024-07-25 15:25:02.876155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.972 [2024-07-25 15:25:02.876163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.972 qpair failed and we were unable to recover it. 00:29:10.972 [2024-07-25 15:25:02.876635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.972 [2024-07-25 15:25:02.876642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.972 qpair failed and we were unable to recover it. 00:29:10.972 [2024-07-25 15:25:02.877090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.972 [2024-07-25 15:25:02.877097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.972 qpair failed and we were unable to recover it. 00:29:10.972 [2024-07-25 15:25:02.877558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.972 [2024-07-25 15:25:02.877565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.972 qpair failed and we were unable to recover it. 00:29:10.972 [2024-07-25 15:25:02.878050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.972 [2024-07-25 15:25:02.878057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.972 qpair failed and we were unable to recover it. 
00:29:10.972 [2024-07-25 15:25:02.878603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.972 [2024-07-25 15:25:02.878633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.972 qpair failed and we were unable to recover it. 00:29:10.972 [2024-07-25 15:25:02.879098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.972 [2024-07-25 15:25:02.879107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.972 qpair failed and we were unable to recover it. 00:29:10.972 [2024-07-25 15:25:02.879336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.972 [2024-07-25 15:25:02.879344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.972 qpair failed and we were unable to recover it. 00:29:10.972 [2024-07-25 15:25:02.879836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.972 [2024-07-25 15:25:02.879842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.972 qpair failed and we were unable to recover it. 00:29:10.972 [2024-07-25 15:25:02.880290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.972 [2024-07-25 15:25:02.880297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.972 qpair failed and we were unable to recover it. 
00:29:10.972 [2024-07-25 15:25:02.880760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.972 [2024-07-25 15:25:02.880767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.972 qpair failed and we were unable to recover it. 00:29:10.972 [2024-07-25 15:25:02.881088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.973 [2024-07-25 15:25:02.881095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.973 qpair failed and we were unable to recover it. 00:29:10.973 [2024-07-25 15:25:02.881620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.973 [2024-07-25 15:25:02.881627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.973 qpair failed and we were unable to recover it. 00:29:10.973 [2024-07-25 15:25:02.882110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.973 [2024-07-25 15:25:02.882116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.973 qpair failed and we were unable to recover it. 00:29:10.973 [2024-07-25 15:25:02.882577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.973 [2024-07-25 15:25:02.882584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.973 qpair failed and we were unable to recover it. 
00:29:10.973 [2024-07-25 15:25:02.883026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.973 [2024-07-25 15:25:02.883033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.973 qpair failed and we were unable to recover it. 00:29:10.973 [2024-07-25 15:25:02.883461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.973 [2024-07-25 15:25:02.883490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.973 qpair failed and we were unable to recover it. 00:29:10.973 [2024-07-25 15:25:02.883944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.973 [2024-07-25 15:25:02.883953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.973 qpair failed and we were unable to recover it. 00:29:10.973 [2024-07-25 15:25:02.884511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.973 [2024-07-25 15:25:02.884539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.973 qpair failed and we were unable to recover it. 00:29:10.973 [2024-07-25 15:25:02.884768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.973 [2024-07-25 15:25:02.884777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.973 qpair failed and we were unable to recover it. 
00:29:10.973 [2024-07-25 15:25:02.885282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.973 [2024-07-25 15:25:02.885290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.973 qpair failed and we were unable to recover it. 00:29:10.973 [2024-07-25 15:25:02.885421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.973 [2024-07-25 15:25:02.885428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.973 qpair failed and we were unable to recover it. 00:29:10.973 [2024-07-25 15:25:02.885874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.973 [2024-07-25 15:25:02.885880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.973 qpair failed and we were unable to recover it. 00:29:10.973 [2024-07-25 15:25:02.886331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.973 [2024-07-25 15:25:02.886339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.973 qpair failed and we were unable to recover it. 00:29:10.973 [2024-07-25 15:25:02.886590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.973 [2024-07-25 15:25:02.886597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.973 qpair failed and we were unable to recover it. 
00:29:10.973 [2024-07-25 15:25:02.887055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.973 [2024-07-25 15:25:02.887062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.973 qpair failed and we were unable to recover it. 00:29:10.973 [2024-07-25 15:25:02.887515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.973 [2024-07-25 15:25:02.887522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.973 qpair failed and we were unable to recover it. 00:29:10.973 [2024-07-25 15:25:02.888021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.973 [2024-07-25 15:25:02.888028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.973 qpair failed and we were unable to recover it. 00:29:10.973 [2024-07-25 15:25:02.888554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.973 [2024-07-25 15:25:02.888561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.973 qpair failed and we were unable to recover it. 00:29:10.973 [2024-07-25 15:25:02.889012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.973 [2024-07-25 15:25:02.889019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.973 qpair failed and we were unable to recover it. 
00:29:10.973 [2024-07-25 15:25:02.889558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.973 [2024-07-25 15:25:02.889587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.973 qpair failed and we were unable to recover it. 00:29:10.973 [2024-07-25 15:25:02.890055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.973 [2024-07-25 15:25:02.890064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.973 qpair failed and we were unable to recover it. 00:29:10.973 [2024-07-25 15:25:02.890444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.973 [2024-07-25 15:25:02.890473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.973 qpair failed and we were unable to recover it. 00:29:10.973 [2024-07-25 15:25:02.890932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.973 [2024-07-25 15:25:02.890941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.973 qpair failed and we were unable to recover it. 00:29:10.973 EAL: No free 2048 kB hugepages reported on node 1 00:29:10.973 [2024-07-25 15:25:02.891476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.973 [2024-07-25 15:25:02.891506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.973 qpair failed and we were unable to recover it. 
00:29:10.973 [2024-07-25 15:25:02.891995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.973 [2024-07-25 15:25:02.892004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.973 qpair failed and we were unable to recover it. 00:29:10.973 [2024-07-25 15:25:02.892549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.973 [2024-07-25 15:25:02.892581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.973 qpair failed and we were unable to recover it. 00:29:10.973 [2024-07-25 15:25:02.892950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.973 [2024-07-25 15:25:02.892958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.973 qpair failed and we were unable to recover it. 00:29:10.973 [2024-07-25 15:25:02.893507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.973 [2024-07-25 15:25:02.893535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.973 qpair failed and we were unable to recover it. 00:29:10.973 [2024-07-25 15:25:02.894004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.973 [2024-07-25 15:25:02.894014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.973 qpair failed and we were unable to recover it. 
00:29:10.973 [2024-07-25 15:25:02.894687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.973 [2024-07-25 15:25:02.894715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.973 qpair failed and we were unable to recover it. 00:29:10.973 [2024-07-25 15:25:02.895421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.974 [2024-07-25 15:25:02.895450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.974 qpair failed and we were unable to recover it. 00:29:10.974 [2024-07-25 15:25:02.895912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.974 [2024-07-25 15:25:02.895922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.974 qpair failed and we were unable to recover it. 00:29:10.974 [2024-07-25 15:25:02.896437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.974 [2024-07-25 15:25:02.896466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.974 qpair failed and we were unable to recover it. 00:29:10.974 [2024-07-25 15:25:02.896798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.974 [2024-07-25 15:25:02.896806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.974 qpair failed and we were unable to recover it. 
00:29:10.974 [2024-07-25 15:25:02.897266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.974 [2024-07-25 15:25:02.897273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.974 qpair failed and we were unable to recover it. 00:29:10.974 [2024-07-25 15:25:02.897717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.974 [2024-07-25 15:25:02.897724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.974 qpair failed and we were unable to recover it. 00:29:10.974 [2024-07-25 15:25:02.898169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.974 [2024-07-25 15:25:02.898175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.974 qpair failed and we were unable to recover it. 00:29:10.974 [2024-07-25 15:25:02.898602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.974 [2024-07-25 15:25:02.898609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.974 qpair failed and we were unable to recover it. 00:29:10.974 [2024-07-25 15:25:02.899052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.974 [2024-07-25 15:25:02.899059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.974 qpair failed and we were unable to recover it. 
00:29:10.974 [2024-07-25 15:25:02.899481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.974 [2024-07-25 15:25:02.899510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.974 qpair failed and we were unable to recover it. 00:29:10.974 [2024-07-25 15:25:02.899995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.974 [2024-07-25 15:25:02.900004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.974 qpair failed and we were unable to recover it. 00:29:10.974 [2024-07-25 15:25:02.900528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.974 [2024-07-25 15:25:02.900556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.974 qpair failed and we were unable to recover it. 00:29:10.974 [2024-07-25 15:25:02.900781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.974 [2024-07-25 15:25:02.900793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.974 qpair failed and we were unable to recover it. 00:29:10.974 [2024-07-25 15:25:02.901300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.974 [2024-07-25 15:25:02.901308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.974 qpair failed and we were unable to recover it. 
00:29:10.974 [2024-07-25 15:25:02.901762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.974 [2024-07-25 15:25:02.901769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.974 qpair failed and we were unable to recover it. 00:29:10.974 [2024-07-25 15:25:02.902215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.974 [2024-07-25 15:25:02.902222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.974 qpair failed and we were unable to recover it. 00:29:10.974 [2024-07-25 15:25:02.902693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.974 [2024-07-25 15:25:02.902708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.974 qpair failed and we were unable to recover it. 00:29:10.974 [2024-07-25 15:25:02.903237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.974 [2024-07-25 15:25:02.903246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.974 qpair failed and we were unable to recover it. 00:29:10.974 [2024-07-25 15:25:02.903632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.974 [2024-07-25 15:25:02.903638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.974 qpair failed and we were unable to recover it. 
00:29:10.974 [2024-07-25 15:25:02.904129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.974 [2024-07-25 15:25:02.904136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.974 qpair failed and we were unable to recover it. 00:29:10.974 [2024-07-25 15:25:02.904367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.974 [2024-07-25 15:25:02.904378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.974 qpair failed and we were unable to recover it. 00:29:10.974 [2024-07-25 15:25:02.904698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.974 [2024-07-25 15:25:02.904706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.974 qpair failed and we were unable to recover it. 00:29:10.974 [2024-07-25 15:25:02.905214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.974 [2024-07-25 15:25:02.905222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.974 qpair failed and we were unable to recover it. 00:29:10.974 [2024-07-25 15:25:02.905698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.974 [2024-07-25 15:25:02.905705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.974 qpair failed and we were unable to recover it. 
00:29:10.974 [2024-07-25 15:25:02.906165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.974 [2024-07-25 15:25:02.906173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.974 qpair failed and we were unable to recover it. 00:29:10.974 [2024-07-25 15:25:02.906633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.974 [2024-07-25 15:25:02.906641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.974 qpair failed and we were unable to recover it. 00:29:10.974 [2024-07-25 15:25:02.907090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.974 [2024-07-25 15:25:02.907097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.974 qpair failed and we were unable to recover it. 00:29:10.974 [2024-07-25 15:25:02.907649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.974 [2024-07-25 15:25:02.907657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.974 qpair failed and we were unable to recover it. 00:29:10.974 [2024-07-25 15:25:02.908111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.974 [2024-07-25 15:25:02.908119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.974 qpair failed and we were unable to recover it. 
00:29:10.974 [2024-07-25 15:25:02.908489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.974 [2024-07-25 15:25:02.908496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.974 qpair failed and we were unable to recover it. 00:29:10.974 [2024-07-25 15:25:02.908943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.974 [2024-07-25 15:25:02.908950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.974 qpair failed and we were unable to recover it. 00:29:10.974 [2024-07-25 15:25:02.909436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.974 [2024-07-25 15:25:02.909443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.974 qpair failed and we were unable to recover it. 00:29:10.974 [2024-07-25 15:25:02.909764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.974 [2024-07-25 15:25:02.909772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.974 qpair failed and we were unable to recover it. 00:29:10.975 [2024-07-25 15:25:02.910132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.975 [2024-07-25 15:25:02.910140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.975 qpair failed and we were unable to recover it. 
00:29:10.975 [2024-07-25 15:25:02.910482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.975 [2024-07-25 15:25:02.910489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.975 qpair failed and we were unable to recover it. 00:29:10.975 [2024-07-25 15:25:02.910840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.975 [2024-07-25 15:25:02.910849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.975 qpair failed and we were unable to recover it. 00:29:10.975 [2024-07-25 15:25:02.911307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.975 [2024-07-25 15:25:02.911314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.975 qpair failed and we were unable to recover it. 00:29:10.975 [2024-07-25 15:25:02.911655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.975 [2024-07-25 15:25:02.911663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.975 qpair failed and we were unable to recover it. 00:29:10.975 [2024-07-25 15:25:02.912157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.975 [2024-07-25 15:25:02.912165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.975 qpair failed and we were unable to recover it. 
00:29:10.975 [2024-07-25 15:25:02.912526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.975 [2024-07-25 15:25:02.912534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.975 qpair failed and we were unable to recover it. 00:29:10.975 [2024-07-25 15:25:02.912991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.975 [2024-07-25 15:25:02.912999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.975 qpair failed and we were unable to recover it. 00:29:10.975 [2024-07-25 15:25:02.913461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.975 [2024-07-25 15:25:02.913469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.975 qpair failed and we were unable to recover it. 00:29:10.975 [2024-07-25 15:25:02.913952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.975 [2024-07-25 15:25:02.913960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.975 qpair failed and we were unable to recover it. 00:29:10.975 [2024-07-25 15:25:02.914506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.975 [2024-07-25 15:25:02.914534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.975 qpair failed and we were unable to recover it. 
00:29:10.975 [2024-07-25 15:25:02.914993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.975 [2024-07-25 15:25:02.915002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.975 qpair failed and we were unable to recover it. 00:29:10.975 [2024-07-25 15:25:02.915562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.975 [2024-07-25 15:25:02.915590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.975 qpair failed and we were unable to recover it. 00:29:10.975 [2024-07-25 15:25:02.916055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.975 [2024-07-25 15:25:02.916064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.975 qpair failed and we were unable to recover it. 00:29:10.975 [2024-07-25 15:25:02.916622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.975 [2024-07-25 15:25:02.916651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.975 qpair failed and we were unable to recover it. 00:29:10.975 [2024-07-25 15:25:02.917172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.975 [2024-07-25 15:25:02.917182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.975 qpair failed and we were unable to recover it. 
00:29:10.975 [2024-07-25 15:25:02.917748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.975 [2024-07-25 15:25:02.917776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.975 qpair failed and we were unable to recover it. 00:29:10.975 [2024-07-25 15:25:02.918421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.975 [2024-07-25 15:25:02.918449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.975 qpair failed and we were unable to recover it. 00:29:10.975 [2024-07-25 15:25:02.918943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.975 [2024-07-25 15:25:02.918953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.975 qpair failed and we were unable to recover it. 00:29:10.975 [2024-07-25 15:25:02.919515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.975 [2024-07-25 15:25:02.919543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.975 qpair failed and we were unable to recover it. 00:29:10.975 [2024-07-25 15:25:02.920005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.975 [2024-07-25 15:25:02.920013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.975 qpair failed and we were unable to recover it. 
00:29:10.975 [2024-07-25 15:25:02.920603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.975 [2024-07-25 15:25:02.920632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.975 qpair failed and we were unable to recover it. 00:29:10.975 [2024-07-25 15:25:02.921165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.975 [2024-07-25 15:25:02.921173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.975 qpair failed and we were unable to recover it. 00:29:10.975 [2024-07-25 15:25:02.921755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.975 [2024-07-25 15:25:02.921783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.975 qpair failed and we were unable to recover it. 00:29:10.975 [2024-07-25 15:25:02.922416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.975 [2024-07-25 15:25:02.922444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.975 qpair failed and we were unable to recover it. 00:29:10.975 [2024-07-25 15:25:02.922905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.975 [2024-07-25 15:25:02.922914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.975 qpair failed and we were unable to recover it. 
00:29:10.975 [2024-07-25 15:25:02.923417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.975 [2024-07-25 15:25:02.923445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.975 qpair failed and we were unable to recover it. 00:29:10.975 [2024-07-25 15:25:02.923905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.975 [2024-07-25 15:25:02.923913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.975 qpair failed and we were unable to recover it. 00:29:10.975 [2024-07-25 15:25:02.924457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.975 [2024-07-25 15:25:02.924485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.975 qpair failed and we were unable to recover it. 00:29:10.975 [2024-07-25 15:25:02.924815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.975 [2024-07-25 15:25:02.924824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.975 qpair failed and we were unable to recover it. 00:29:10.975 [2024-07-25 15:25:02.925277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.975 [2024-07-25 15:25:02.925284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.975 qpair failed and we were unable to recover it. 
00:29:10.975 [2024-07-25 15:25:02.925757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.975 [2024-07-25 15:25:02.925764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.975 qpair failed and we were unable to recover it. 00:29:10.975 [2024-07-25 15:25:02.926208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.975 [2024-07-25 15:25:02.926216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.975 qpair failed and we were unable to recover it. 00:29:10.975 [2024-07-25 15:25:02.926662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.975 [2024-07-25 15:25:02.926669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.975 qpair failed and we were unable to recover it. 00:29:10.975 [2024-07-25 15:25:02.927117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.975 [2024-07-25 15:25:02.927123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.975 qpair failed and we were unable to recover it. 00:29:10.975 [2024-07-25 15:25:02.927488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.976 [2024-07-25 15:25:02.927494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.976 qpair failed and we were unable to recover it. 
00:29:10.976 [2024-07-25 15:25:02.927942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.976 [2024-07-25 15:25:02.927949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.976 qpair failed and we were unable to recover it. 00:29:10.976 [2024-07-25 15:25:02.928420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.976 [2024-07-25 15:25:02.928427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.976 qpair failed and we were unable to recover it. 00:29:10.976 [2024-07-25 15:25:02.928868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.976 [2024-07-25 15:25:02.928875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.976 qpair failed and we were unable to recover it. 00:29:10.976 [2024-07-25 15:25:02.929377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.976 [2024-07-25 15:25:02.929385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.976 qpair failed and we were unable to recover it. 00:29:10.976 [2024-07-25 15:25:02.929844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.976 [2024-07-25 15:25:02.929851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.976 qpair failed and we were unable to recover it. 
00:29:10.976 [2024-07-25 15:25:02.930312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.976 [2024-07-25 15:25:02.930319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.976 qpair failed and we were unable to recover it. 00:29:10.976 [2024-07-25 15:25:02.930772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.976 [2024-07-25 15:25:02.930780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.976 qpair failed and we were unable to recover it. 00:29:10.976 [2024-07-25 15:25:02.931223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.976 [2024-07-25 15:25:02.931230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.976 qpair failed and we were unable to recover it. 00:29:10.976 [2024-07-25 15:25:02.931571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.976 [2024-07-25 15:25:02.931577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.976 qpair failed and we were unable to recover it. 00:29:10.976 [2024-07-25 15:25:02.932030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.976 [2024-07-25 15:25:02.932037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.976 qpair failed and we were unable to recover it. 
00:29:10.976 [2024-07-25 15:25:02.932484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.976 [2024-07-25 15:25:02.932492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.976 qpair failed and we were unable to recover it. 00:29:10.976 [2024-07-25 15:25:02.932940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.976 [2024-07-25 15:25:02.932946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.976 qpair failed and we were unable to recover it. 00:29:10.976 [2024-07-25 15:25:02.933391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.976 [2024-07-25 15:25:02.933398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.976 qpair failed and we were unable to recover it. 00:29:10.976 [2024-07-25 15:25:02.933833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.976 [2024-07-25 15:25:02.933839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.976 qpair failed and we were unable to recover it. 00:29:10.976 [2024-07-25 15:25:02.934401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.976 [2024-07-25 15:25:02.934428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.976 qpair failed and we were unable to recover it. 
00:29:10.976 [2024-07-25 15:25:02.934887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.976 [2024-07-25 15:25:02.934896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.976 qpair failed and we were unable to recover it. 00:29:10.976 [2024-07-25 15:25:02.935115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.976 [2024-07-25 15:25:02.935122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.976 qpair failed and we were unable to recover it. 00:29:10.976 [2024-07-25 15:25:02.935337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.976 [2024-07-25 15:25:02.935344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.976 qpair failed and we were unable to recover it. 00:29:10.976 [2024-07-25 15:25:02.935677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.976 [2024-07-25 15:25:02.935683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.976 qpair failed and we were unable to recover it. 00:29:10.976 [2024-07-25 15:25:02.936132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.976 [2024-07-25 15:25:02.936138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.976 qpair failed and we were unable to recover it. 
00:29:10.976 [2024-07-25 15:25:02.936457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.976 [2024-07-25 15:25:02.936465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.976 qpair failed and we were unable to recover it. 00:29:10.976 [2024-07-25 15:25:02.936836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.976 [2024-07-25 15:25:02.936843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.976 qpair failed and we were unable to recover it. 00:29:10.976 [2024-07-25 15:25:02.937292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.976 [2024-07-25 15:25:02.937298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.976 qpair failed and we were unable to recover it. 00:29:10.976 [2024-07-25 15:25:02.937740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.976 [2024-07-25 15:25:02.937746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.976 qpair failed and we were unable to recover it. 00:29:10.976 [2024-07-25 15:25:02.938188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.976 [2024-07-25 15:25:02.938194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.976 qpair failed and we were unable to recover it. 
00:29:10.976 [2024-07-25 15:25:02.938635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.976 [2024-07-25 15:25:02.938642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.976 qpair failed and we were unable to recover it. 00:29:10.976 [2024-07-25 15:25:02.939083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.976 [2024-07-25 15:25:02.939091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.976 qpair failed and we were unable to recover it. 00:29:10.976 [2024-07-25 15:25:02.939577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.976 [2024-07-25 15:25:02.939584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.976 qpair failed and we were unable to recover it. 00:29:10.976 [2024-07-25 15:25:02.940046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.976 [2024-07-25 15:25:02.940053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.976 qpair failed and we were unable to recover it. 00:29:10.976 [2024-07-25 15:25:02.940607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.976 [2024-07-25 15:25:02.940635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.976 qpair failed and we were unable to recover it. 
00:29:10.976 [2024-07-25 15:25:02.941097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.976 [2024-07-25 15:25:02.941106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.976 qpair failed and we were unable to recover it. 00:29:10.976 [2024-07-25 15:25:02.941584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.976 [2024-07-25 15:25:02.941592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.976 qpair failed and we were unable to recover it. 00:29:10.976 [2024-07-25 15:25:02.941907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.976 [2024-07-25 15:25:02.941913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.976 qpair failed and we were unable to recover it. 00:29:10.976 [2024-07-25 15:25:02.942448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.976 [2024-07-25 15:25:02.942476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.977 qpair failed and we were unable to recover it. 00:29:10.977 [2024-07-25 15:25:02.942994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.977 [2024-07-25 15:25:02.943002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.977 qpair failed and we were unable to recover it. 
00:29:10.977 [2024-07-25 15:25:02.943542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.977 [2024-07-25 15:25:02.943569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.977 qpair failed and we were unable to recover it. 00:29:10.977 [2024-07-25 15:25:02.944031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.977 [2024-07-25 15:25:02.944040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.977 qpair failed and we were unable to recover it. 00:29:10.977 [2024-07-25 15:25:02.944579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.977 [2024-07-25 15:25:02.944607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.977 qpair failed and we were unable to recover it. 00:29:10.977 [2024-07-25 15:25:02.945082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.977 [2024-07-25 15:25:02.945091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.977 qpair failed and we were unable to recover it. 00:29:10.977 [2024-07-25 15:25:02.945605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.977 [2024-07-25 15:25:02.945613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.977 qpair failed and we were unable to recover it. 
00:29:10.977 [2024-07-25 15:25:02.946051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.977 [2024-07-25 15:25:02.946058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.977 qpair failed and we were unable to recover it.
00:29:10.977 [2024-07-25 15:25:02.946299] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:29:10.977 [2024-07-25 15:25:02.946442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.977 [2024-07-25 15:25:02.946467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.977 qpair failed and we were unable to recover it.
00:29:10.977 [2024-07-25 15:25:02.946949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.977 [2024-07-25 15:25:02.946958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.977 qpair failed and we were unable to recover it.
00:29:10.977 [2024-07-25 15:25:02.947117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.977 [2024-07-25 15:25:02.947125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.977 qpair failed and we were unable to recover it.
00:29:10.977 [2024-07-25 15:25:02.947595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.977 [2024-07-25 15:25:02.947602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.977 qpair failed and we were unable to recover it.
00:29:10.977 [2024-07-25 15:25:02.948090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.977 [2024-07-25 15:25:02.948097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.977 qpair failed and we were unable to recover it. 00:29:10.977 [2024-07-25 15:25:02.948564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.977 [2024-07-25 15:25:02.948571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.977 qpair failed and we were unable to recover it. 00:29:10.977 [2024-07-25 15:25:02.949012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.977 [2024-07-25 15:25:02.949019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.977 qpair failed and we were unable to recover it. 00:29:10.977 [2024-07-25 15:25:02.949453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.977 [2024-07-25 15:25:02.949480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.977 qpair failed and we were unable to recover it. 00:29:10.977 [2024-07-25 15:25:02.949951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.977 [2024-07-25 15:25:02.949960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.977 qpair failed and we were unable to recover it. 
00:29:10.977 [2024-07-25 15:25:02.950506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.977 [2024-07-25 15:25:02.950533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.977 qpair failed and we were unable to recover it. 00:29:10.977 [2024-07-25 15:25:02.951027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.977 [2024-07-25 15:25:02.951036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.977 qpair failed and we were unable to recover it. 00:29:10.977 [2024-07-25 15:25:02.951498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.977 [2024-07-25 15:25:02.951525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.977 qpair failed and we were unable to recover it. 00:29:10.977 [2024-07-25 15:25:02.951879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.977 [2024-07-25 15:25:02.951888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.977 qpair failed and we were unable to recover it. 00:29:10.977 [2024-07-25 15:25:02.952434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.977 [2024-07-25 15:25:02.952461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.977 qpair failed and we were unable to recover it. 
00:29:10.977 [2024-07-25 15:25:02.952986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.977 [2024-07-25 15:25:02.952994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.977 qpair failed and we were unable to recover it.
00:29:10.977 [2024-07-25 15:25:02.953539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.977 [2024-07-25 15:25:02.953567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.977 qpair failed and we were unable to recover it.
00:29:10.977 [2024-07-25 15:25:02.954115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.977 [2024-07-25 15:25:02.954125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.977 qpair failed and we were unable to recover it.
00:29:10.977 [2024-07-25 15:25:02.954573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.977 [2024-07-25 15:25:02.954580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.977 qpair failed and we were unable to recover it.
00:29:10.977 [2024-07-25 15:25:02.954989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.977 [2024-07-25 15:25:02.955000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.977 qpair failed and we were unable to recover it.
00:29:10.977 [2024-07-25 15:25:02.955589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.977 [2024-07-25 15:25:02.955616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.977 qpair failed and we were unable to recover it.
00:29:10.977 [2024-07-25 15:25:02.956081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.977 [2024-07-25 15:25:02.956090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.977 qpair failed and we were unable to recover it.
00:29:10.977 [2024-07-25 15:25:02.956579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.977 [2024-07-25 15:25:02.956586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.977 qpair failed and we were unable to recover it.
00:29:10.977 [2024-07-25 15:25:02.957073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.978 [2024-07-25 15:25:02.957079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.978 qpair failed and we were unable to recover it.
00:29:10.978 [2024-07-25 15:25:02.957618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.978 [2024-07-25 15:25:02.957646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.978 qpair failed and we were unable to recover it.
00:29:10.978 [2024-07-25 15:25:02.958108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.978 [2024-07-25 15:25:02.958118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.978 qpair failed and we were unable to recover it.
00:29:10.978 [2024-07-25 15:25:02.958538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.978 [2024-07-25 15:25:02.958566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.978 qpair failed and we were unable to recover it.
00:29:10.978 [2024-07-25 15:25:02.958936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.978 [2024-07-25 15:25:02.958946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.978 qpair failed and we were unable to recover it.
00:29:10.978 [2024-07-25 15:25:02.959469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.978 [2024-07-25 15:25:02.959496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.978 qpair failed and we were unable to recover it.
00:29:10.978 [2024-07-25 15:25:02.959897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.978 [2024-07-25 15:25:02.959906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.978 qpair failed and we were unable to recover it.
00:29:10.978 [2024-07-25 15:25:02.960422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.978 [2024-07-25 15:25:02.960449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.978 qpair failed and we were unable to recover it.
00:29:10.978 [2024-07-25 15:25:02.960822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.978 [2024-07-25 15:25:02.960831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.978 qpair failed and we were unable to recover it.
00:29:10.978 [2024-07-25 15:25:02.961287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.978 [2024-07-25 15:25:02.961294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.978 qpair failed and we were unable to recover it.
00:29:10.978 [2024-07-25 15:25:02.961795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.978 [2024-07-25 15:25:02.961802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.978 qpair failed and we were unable to recover it.
00:29:10.978 [2024-07-25 15:25:02.962259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.978 [2024-07-25 15:25:02.962267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.978 qpair failed and we were unable to recover it.
00:29:10.978 [2024-07-25 15:25:02.962750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.978 [2024-07-25 15:25:02.962756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.978 qpair failed and we were unable to recover it.
00:29:10.978 [2024-07-25 15:25:02.963199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.978 [2024-07-25 15:25:02.963209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.978 qpair failed and we were unable to recover it.
00:29:10.978 [2024-07-25 15:25:02.963678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.978 [2024-07-25 15:25:02.963685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.978 qpair failed and we were unable to recover it.
00:29:10.978 [2024-07-25 15:25:02.964183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.978 [2024-07-25 15:25:02.964190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.978 qpair failed and we were unable to recover it.
00:29:10.978 [2024-07-25 15:25:02.964724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.978 [2024-07-25 15:25:02.964751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.978 qpair failed and we were unable to recover it.
00:29:10.978 [2024-07-25 15:25:02.965217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.978 [2024-07-25 15:25:02.965227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.978 qpair failed and we were unable to recover it.
00:29:10.978 [2024-07-25 15:25:02.965739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.978 [2024-07-25 15:25:02.965746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.978 qpair failed and we were unable to recover it.
00:29:10.978 [2024-07-25 15:25:02.966184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.978 [2024-07-25 15:25:02.966191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.978 qpair failed and we were unable to recover it.
00:29:10.978 [2024-07-25 15:25:02.966627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.978 [2024-07-25 15:25:02.966634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.978 qpair failed and we were unable to recover it.
00:29:10.978 [2024-07-25 15:25:02.967086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.978 [2024-07-25 15:25:02.967093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.978 qpair failed and we were unable to recover it.
00:29:10.978 [2024-07-25 15:25:02.967654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.978 [2024-07-25 15:25:02.967682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.978 qpair failed and we were unable to recover it.
00:29:10.978 [2024-07-25 15:25:02.968139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.978 [2024-07-25 15:25:02.968148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.978 qpair failed and we were unable to recover it.
00:29:10.978 [2024-07-25 15:25:02.968621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.978 [2024-07-25 15:25:02.968648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.978 qpair failed and we were unable to recover it.
00:29:10.978 [2024-07-25 15:25:02.969184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.978 [2024-07-25 15:25:02.969193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.978 qpair failed and we were unable to recover it.
00:29:10.978 [2024-07-25 15:25:02.969704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.978 [2024-07-25 15:25:02.969731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.978 qpair failed and we were unable to recover it.
00:29:10.978 [2024-07-25 15:25:02.970187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.978 [2024-07-25 15:25:02.970196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.978 qpair failed and we were unable to recover it.
00:29:10.978 [2024-07-25 15:25:02.970649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.978 [2024-07-25 15:25:02.970676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.978 qpair failed and we were unable to recover it.
00:29:10.978 [2024-07-25 15:25:02.971127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.978 [2024-07-25 15:25:02.971135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.978 qpair failed and we were unable to recover it.
00:29:10.978 [2024-07-25 15:25:02.971749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.978 [2024-07-25 15:25:02.971777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.978 qpair failed and we were unable to recover it.
00:29:10.978 [2024-07-25 15:25:02.972414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.978 [2024-07-25 15:25:02.972442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.978 qpair failed and we were unable to recover it.
00:29:10.978 [2024-07-25 15:25:02.972815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.978 [2024-07-25 15:25:02.972823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.978 qpair failed and we were unable to recover it.
00:29:10.978 [2024-07-25 15:25:02.973418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.978 [2024-07-25 15:25:02.973446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.978 qpair failed and we were unable to recover it.
00:29:10.978 [2024-07-25 15:25:02.973811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.978 [2024-07-25 15:25:02.973821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.978 qpair failed and we were unable to recover it.
00:29:10.979 [2024-07-25 15:25:02.974313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.979 [2024-07-25 15:25:02.974322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.979 qpair failed and we were unable to recover it.
00:29:10.979 [2024-07-25 15:25:02.974658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.979 [2024-07-25 15:25:02.974669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.979 qpair failed and we were unable to recover it.
00:29:10.979 [2024-07-25 15:25:02.975148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.979 [2024-07-25 15:25:02.975154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.979 qpair failed and we were unable to recover it.
00:29:10.979 [2024-07-25 15:25:02.975606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.979 [2024-07-25 15:25:02.975613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.979 qpair failed and we were unable to recover it.
00:29:10.979 [2024-07-25 15:25:02.975824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.979 [2024-07-25 15:25:02.975837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.979 qpair failed and we were unable to recover it.
00:29:10.979 [2024-07-25 15:25:02.976288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.979 [2024-07-25 15:25:02.976295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.979 qpair failed and we were unable to recover it.
00:29:10.979 [2024-07-25 15:25:02.976736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.979 [2024-07-25 15:25:02.976742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.979 qpair failed and we were unable to recover it.
00:29:10.979 [2024-07-25 15:25:02.977221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.979 [2024-07-25 15:25:02.977228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.979 qpair failed and we were unable to recover it.
00:29:10.979 [2024-07-25 15:25:02.977671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.979 [2024-07-25 15:25:02.977678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.979 qpair failed and we were unable to recover it.
00:29:10.979 [2024-07-25 15:25:02.978156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.979 [2024-07-25 15:25:02.978163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.979 qpair failed and we were unable to recover it.
00:29:10.979 [2024-07-25 15:25:02.978623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.979 [2024-07-25 15:25:02.978631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.979 qpair failed and we were unable to recover it.
00:29:10.979 [2024-07-25 15:25:02.979065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.979 [2024-07-25 15:25:02.979073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.979 qpair failed and we were unable to recover it.
00:29:10.979 [2024-07-25 15:25:02.979401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.979 [2024-07-25 15:25:02.979409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.979 qpair failed and we were unable to recover it.
00:29:10.979 [2024-07-25 15:25:02.979620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.979 [2024-07-25 15:25:02.979629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.979 qpair failed and we were unable to recover it.
00:29:10.979 [2024-07-25 15:25:02.980084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.979 [2024-07-25 15:25:02.980091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.979 qpair failed and we were unable to recover it.
00:29:10.979 [2024-07-25 15:25:02.980302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.979 [2024-07-25 15:25:02.980312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.979 qpair failed and we were unable to recover it.
00:29:10.979 [2024-07-25 15:25:02.980753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.980 [2024-07-25 15:25:02.980760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.980 qpair failed and we were unable to recover it.
00:29:10.980 [2024-07-25 15:25:02.981113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.980 [2024-07-25 15:25:02.981120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.980 qpair failed and we were unable to recover it.
00:29:10.980 [2024-07-25 15:25:02.981520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.980 [2024-07-25 15:25:02.981527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.980 qpair failed and we were unable to recover it.
00:29:10.980 [2024-07-25 15:25:02.981998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.980 [2024-07-25 15:25:02.982005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.980 qpair failed and we were unable to recover it.
00:29:10.980 [2024-07-25 15:25:02.982449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.980 [2024-07-25 15:25:02.982456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.980 qpair failed and we were unable to recover it.
00:29:10.980 [2024-07-25 15:25:02.982932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.980 [2024-07-25 15:25:02.982939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.980 qpair failed and we were unable to recover it.
00:29:10.980 [2024-07-25 15:25:02.983300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.980 [2024-07-25 15:25:02.983308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.980 qpair failed and we were unable to recover it.
00:29:10.980 [2024-07-25 15:25:02.983776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.980 [2024-07-25 15:25:02.983783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.980 qpair failed and we were unable to recover it.
00:29:10.980 [2024-07-25 15:25:02.984225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.980 [2024-07-25 15:25:02.984233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.980 qpair failed and we were unable to recover it.
00:29:10.980 [2024-07-25 15:25:02.984762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.980 [2024-07-25 15:25:02.984771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.980 qpair failed and we were unable to recover it.
00:29:10.980 [2024-07-25 15:25:02.985122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.980 [2024-07-25 15:25:02.985130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.980 qpair failed and we were unable to recover it.
00:29:10.980 [2024-07-25 15:25:02.985581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.980 [2024-07-25 15:25:02.985588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.980 qpair failed and we were unable to recover it.
00:29:10.980 [2024-07-25 15:25:02.986027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.980 [2024-07-25 15:25:02.986034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.980 qpair failed and we were unable to recover it.
00:29:10.980 [2024-07-25 15:25:02.986486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.980 [2024-07-25 15:25:02.986493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.980 qpair failed and we were unable to recover it.
00:29:10.980 [2024-07-25 15:25:02.986969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.980 [2024-07-25 15:25:02.986975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.980 qpair failed and we were unable to recover it.
00:29:10.980 [2024-07-25 15:25:02.987508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.980 [2024-07-25 15:25:02.987536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.980 qpair failed and we were unable to recover it.
00:29:10.980 [2024-07-25 15:25:02.987935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.980 [2024-07-25 15:25:02.987944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.980 qpair failed and we were unable to recover it.
00:29:10.980 [2024-07-25 15:25:02.988517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.980 [2024-07-25 15:25:02.988544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.980 qpair failed and we were unable to recover it.
00:29:10.980 [2024-07-25 15:25:02.988992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.980 [2024-07-25 15:25:02.989000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.980 qpair failed and we were unable to recover it.
00:29:10.980 [2024-07-25 15:25:02.989490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.980 [2024-07-25 15:25:02.989518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.980 qpair failed and we were unable to recover it.
00:29:10.980 [2024-07-25 15:25:02.989971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.980 [2024-07-25 15:25:02.989980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.980 qpair failed and we were unable to recover it.
00:29:10.980 [2024-07-25 15:25:02.990555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.980 [2024-07-25 15:25:02.990582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.980 qpair failed and we were unable to recover it.
00:29:10.980 [2024-07-25 15:25:02.991033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.980 [2024-07-25 15:25:02.991041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.980 qpair failed and we were unable to recover it.
00:29:10.980 [2024-07-25 15:25:02.991583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.980 [2024-07-25 15:25:02.991611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.980 qpair failed and we were unable to recover it.
00:29:10.980 [2024-07-25 15:25:02.991849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.980 [2024-07-25 15:25:02.991858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.980 qpair failed and we were unable to recover it.
00:29:10.980 [2024-07-25 15:25:02.992268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.980 [2024-07-25 15:25:02.992279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.980 qpair failed and we were unable to recover it.
00:29:10.980 [2024-07-25 15:25:02.992776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.980 [2024-07-25 15:25:02.992782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.980 qpair failed and we were unable to recover it.
00:29:10.980 [2024-07-25 15:25:02.993225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.980 [2024-07-25 15:25:02.993232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.980 qpair failed and we were unable to recover it.
00:29:10.980 [2024-07-25 15:25:02.993667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.980 [2024-07-25 15:25:02.993674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.980 qpair failed and we were unable to recover it.
00:29:10.980 [2024-07-25 15:25:02.994062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.980 [2024-07-25 15:25:02.994068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.980 qpair failed and we were unable to recover it.
00:29:10.980 [2024-07-25 15:25:02.994576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.980 [2024-07-25 15:25:02.994583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.980 qpair failed and we were unable to recover it.
00:29:10.980 [2024-07-25 15:25:02.994953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.980 [2024-07-25 15:25:02.994960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.980 qpair failed and we were unable to recover it.
00:29:10.980 [2024-07-25 15:25:02.995500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.980 [2024-07-25 15:25:02.995528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.980 qpair failed and we were unable to recover it.
00:29:10.980 [2024-07-25 15:25:02.995825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.980 [2024-07-25 15:25:02.995833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.980 qpair failed and we were unable to recover it.
00:29:10.980 [2024-07-25 15:25:02.996246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.980 [2024-07-25 15:25:02.996254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.980 qpair failed and we were unable to recover it.
00:29:10.980 [2024-07-25 15:25:02.996734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.980 [2024-07-25 15:25:02.996741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.980 qpair failed and we were unable to recover it.
00:29:10.981 [2024-07-25 15:25:02.997678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.981 [2024-07-25 15:25:02.997695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.981 qpair failed and we were unable to recover it. 00:29:10.981 [2024-07-25 15:25:02.998049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.981 [2024-07-25 15:25:02.998056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.981 qpair failed and we were unable to recover it. 00:29:10.981 [2024-07-25 15:25:02.998619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.981 [2024-07-25 15:25:02.998646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.981 qpair failed and we were unable to recover it. 00:29:10.981 [2024-07-25 15:25:02.998996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.981 [2024-07-25 15:25:02.999005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.981 qpair failed and we were unable to recover it. 00:29:10.981 [2024-07-25 15:25:02.999561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.981 [2024-07-25 15:25:02.999589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.981 qpair failed and we were unable to recover it. 
00:29:10.981 [2024-07-25 15:25:03.000062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.981 [2024-07-25 15:25:03.000071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.981 qpair failed and we were unable to recover it. 00:29:10.981 [2024-07-25 15:25:03.000475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.981 [2024-07-25 15:25:03.000503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.981 qpair failed and we were unable to recover it. 00:29:10.981 [2024-07-25 15:25:03.000951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.981 [2024-07-25 15:25:03.000960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.981 qpair failed and we were unable to recover it. 00:29:10.981 [2024-07-25 15:25:03.001496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.981 [2024-07-25 15:25:03.001523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.981 qpair failed and we were unable to recover it. 00:29:10.981 [2024-07-25 15:25:03.002010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.981 [2024-07-25 15:25:03.002019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.981 qpair failed and we were unable to recover it. 
00:29:10.981 [2024-07-25 15:25:03.002469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.981 [2024-07-25 15:25:03.002497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.981 qpair failed and we were unable to recover it. 00:29:10.981 [2024-07-25 15:25:03.002943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.981 [2024-07-25 15:25:03.002951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.981 qpair failed and we were unable to recover it. 00:29:10.981 [2024-07-25 15:25:03.003484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.981 [2024-07-25 15:25:03.003513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.981 qpair failed and we were unable to recover it. 00:29:10.981 [2024-07-25 15:25:03.004001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.981 [2024-07-25 15:25:03.004011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.981 qpair failed and we were unable to recover it. 00:29:10.981 [2024-07-25 15:25:03.004574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.981 [2024-07-25 15:25:03.004602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.981 qpair failed and we were unable to recover it. 
00:29:10.981 [2024-07-25 15:25:03.005058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.981 [2024-07-25 15:25:03.005067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.981 qpair failed and we were unable to recover it. 00:29:10.981 [2024-07-25 15:25:03.005599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.981 [2024-07-25 15:25:03.005626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.981 qpair failed and we were unable to recover it. 00:29:10.981 [2024-07-25 15:25:03.006103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.981 [2024-07-25 15:25:03.006111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.981 qpair failed and we were unable to recover it. 00:29:10.981 [2024-07-25 15:25:03.006525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.981 [2024-07-25 15:25:03.006553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.981 qpair failed and we were unable to recover it. 00:29:10.981 [2024-07-25 15:25:03.007000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.981 [2024-07-25 15:25:03.007009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.981 qpair failed and we were unable to recover it. 
00:29:10.981 [2024-07-25 15:25:03.007559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.981 [2024-07-25 15:25:03.007587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.981 qpair failed and we were unable to recover it. 00:29:10.981 [2024-07-25 15:25:03.008039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.981 [2024-07-25 15:25:03.008048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.981 qpair failed and we were unable to recover it. 00:29:10.981 [2024-07-25 15:25:03.008590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.981 [2024-07-25 15:25:03.008618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.981 qpair failed and we were unable to recover it. 00:29:10.981 [2024-07-25 15:25:03.009086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.981 [2024-07-25 15:25:03.009095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.981 qpair failed and we were unable to recover it. 00:29:10.981 [2024-07-25 15:25:03.009515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.981 [2024-07-25 15:25:03.009523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.981 qpair failed and we were unable to recover it. 
00:29:10.981 [2024-07-25 15:25:03.009969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.981 [2024-07-25 15:25:03.009975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.981 qpair failed and we were unable to recover it. 00:29:10.981 [2024-07-25 15:25:03.010521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.981 [2024-07-25 15:25:03.010549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.981 qpair failed and we were unable to recover it. 00:29:10.981 [2024-07-25 15:25:03.011001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.981 [2024-07-25 15:25:03.011010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.981 qpair failed and we were unable to recover it. 00:29:10.981 [2024-07-25 15:25:03.011546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.981 [2024-07-25 15:25:03.011574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.981 qpair failed and we were unable to recover it. 00:29:10.981 [2024-07-25 15:25:03.011613] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:10.981 [2024-07-25 15:25:03.011640] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:10.981 [2024-07-25 15:25:03.011648] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:10.981 [2024-07-25 15:25:03.011654] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:29:10.981 [2024-07-25 15:25:03.011659] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:10.981 [2024-07-25 15:25:03.011800] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:29:10.981 [2024-07-25 15:25:03.011941] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:29:10.981 [2024-07-25 15:25:03.012085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.981 [2024-07-25 15:25:03.012094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.981 [2024-07-25 15:25:03.012094] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:29:10.981 qpair failed and we were unable to recover it. 00:29:10.981 [2024-07-25 15:25:03.012095] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:29:10.981 [2024-07-25 15:25:03.012553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.981 [2024-07-25 15:25:03.012561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.981 qpair failed and we were unable to recover it. 00:29:10.981 [2024-07-25 15:25:03.013028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.981 [2024-07-25 15:25:03.013035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.981 qpair failed and we were unable to recover it. 00:29:10.981 [2024-07-25 15:25:03.013576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.982 [2024-07-25 15:25:03.013603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.982 qpair failed and we were unable to recover it. 
00:29:10.982 [2024-07-25 15:25:03.014052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.982 [2024-07-25 15:25:03.014060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.982 qpair failed and we were unable to recover it. 00:29:10.982 [2024-07-25 15:25:03.014597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.982 [2024-07-25 15:25:03.014624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.982 qpair failed and we were unable to recover it. 00:29:10.982 [2024-07-25 15:25:03.014989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.982 [2024-07-25 15:25:03.014998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.982 qpair failed and we were unable to recover it. 00:29:10.982 [2024-07-25 15:25:03.015555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.982 [2024-07-25 15:25:03.015583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.982 qpair failed and we were unable to recover it. 00:29:10.982 [2024-07-25 15:25:03.016125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.982 [2024-07-25 15:25:03.016133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.982 qpair failed and we were unable to recover it. 
00:29:10.982 [2024-07-25 15:25:03.016612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.982 [2024-07-25 15:25:03.016640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.982 qpair failed and we were unable to recover it. 00:29:10.982 [2024-07-25 15:25:03.017182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.982 [2024-07-25 15:25:03.017191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.982 qpair failed and we were unable to recover it. 00:29:10.982 [2024-07-25 15:25:03.017536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.982 [2024-07-25 15:25:03.017544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.982 qpair failed and we were unable to recover it. 00:29:10.982 [2024-07-25 15:25:03.018010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.982 [2024-07-25 15:25:03.018017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.982 qpair failed and we were unable to recover it. 00:29:10.982 [2024-07-25 15:25:03.018570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.982 [2024-07-25 15:25:03.018598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.982 qpair failed and we were unable to recover it. 
00:29:10.982 [2024-07-25 15:25:03.019053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.982 [2024-07-25 15:25:03.019062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.982 qpair failed and we were unable to recover it. 00:29:10.982 [2024-07-25 15:25:03.019599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.982 [2024-07-25 15:25:03.019627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.982 qpair failed and we were unable to recover it. 00:29:10.982 [2024-07-25 15:25:03.020080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.982 [2024-07-25 15:25:03.020089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.982 qpair failed and we were unable to recover it. 00:29:10.982 [2024-07-25 15:25:03.020528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.982 [2024-07-25 15:25:03.020556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.982 qpair failed and we were unable to recover it. 00:29:10.982 [2024-07-25 15:25:03.021071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.982 [2024-07-25 15:25:03.021079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.982 qpair failed and we were unable to recover it. 
00:29:10.982 [2024-07-25 15:25:03.021514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.982 [2024-07-25 15:25:03.021542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.982 qpair failed and we were unable to recover it. 00:29:10.982 [2024-07-25 15:25:03.022019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.982 [2024-07-25 15:25:03.022029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.982 qpair failed and we were unable to recover it. 00:29:10.982 [2024-07-25 15:25:03.022587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.982 [2024-07-25 15:25:03.022615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.982 qpair failed and we were unable to recover it. 00:29:10.982 [2024-07-25 15:25:03.023083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.982 [2024-07-25 15:25:03.023092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.982 qpair failed and we were unable to recover it. 00:29:10.982 [2024-07-25 15:25:03.023571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.982 [2024-07-25 15:25:03.023579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.982 qpair failed and we were unable to recover it. 
00:29:10.982 [2024-07-25 15:25:03.024020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.982 [2024-07-25 15:25:03.024027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.982 qpair failed and we were unable to recover it. 00:29:10.982 [2024-07-25 15:25:03.024597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.982 [2024-07-25 15:25:03.024625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.982 qpair failed and we were unable to recover it. 00:29:10.982 [2024-07-25 15:25:03.024987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.982 [2024-07-25 15:25:03.024995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.982 qpair failed and we were unable to recover it. 00:29:10.982 [2024-07-25 15:25:03.025532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.982 [2024-07-25 15:25:03.025560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.982 qpair failed and we were unable to recover it. 00:29:10.982 [2024-07-25 15:25:03.026018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.982 [2024-07-25 15:25:03.026026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.982 qpair failed and we were unable to recover it. 
00:29:10.982 [2024-07-25 15:25:03.026561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.982 [2024-07-25 15:25:03.026589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.982 qpair failed and we were unable to recover it. 00:29:10.982 [2024-07-25 15:25:03.027171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.982 [2024-07-25 15:25:03.027179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.982 qpair failed and we were unable to recover it. 00:29:10.982 [2024-07-25 15:25:03.027720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.982 [2024-07-25 15:25:03.027748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.982 qpair failed and we were unable to recover it. 00:29:10.982 [2024-07-25 15:25:03.028414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.982 [2024-07-25 15:25:03.028442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.982 qpair failed and we were unable to recover it. 00:29:10.982 [2024-07-25 15:25:03.028895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.982 [2024-07-25 15:25:03.028904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.982 qpair failed and we were unable to recover it. 
00:29:10.982 [2024-07-25 15:25:03.029422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.983 [2024-07-25 15:25:03.029450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.983 qpair failed and we were unable to recover it. 00:29:10.983 [2024-07-25 15:25:03.029812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.983 [2024-07-25 15:25:03.029821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.983 qpair failed and we were unable to recover it. 00:29:10.983 [2024-07-25 15:25:03.030226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.983 [2024-07-25 15:25:03.030233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.983 qpair failed and we were unable to recover it. 00:29:10.983 [2024-07-25 15:25:03.030681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.983 [2024-07-25 15:25:03.030691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.983 qpair failed and we were unable to recover it. 00:29:10.983 [2024-07-25 15:25:03.031180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.983 [2024-07-25 15:25:03.031187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.983 qpair failed and we were unable to recover it. 
00:29:10.983 [2024-07-25 15:25:03.031634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.983 [2024-07-25 15:25:03.031641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.983 qpair failed and we were unable to recover it. 00:29:10.983 [2024-07-25 15:25:03.032087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.983 [2024-07-25 15:25:03.032094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.983 qpair failed and we were unable to recover it. 00:29:10.983 [2024-07-25 15:25:03.032559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.983 [2024-07-25 15:25:03.032567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.983 qpair failed and we were unable to recover it. 00:29:10.983 [2024-07-25 15:25:03.033002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.983 [2024-07-25 15:25:03.033009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.983 qpair failed and we were unable to recover it. 00:29:10.983 [2024-07-25 15:25:03.033120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.983 [2024-07-25 15:25:03.033132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.983 qpair failed and we were unable to recover it. 
00:29:10.983 [2024-07-25 15:25:03.033576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.983 [2024-07-25 15:25:03.033584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.983 qpair failed and we were unable to recover it. 00:29:10.983 [2024-07-25 15:25:03.033932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.983 [2024-07-25 15:25:03.033938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.983 qpair failed and we were unable to recover it. 00:29:10.983 [2024-07-25 15:25:03.034380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.983 [2024-07-25 15:25:03.034387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.983 qpair failed and we were unable to recover it. 00:29:10.983 [2024-07-25 15:25:03.034741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.983 [2024-07-25 15:25:03.034748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.983 qpair failed and we were unable to recover it. 00:29:10.983 [2024-07-25 15:25:03.035189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.983 [2024-07-25 15:25:03.035196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.983 qpair failed and we were unable to recover it. 
00:29:10.983 [2024-07-25 15:25:03.035644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.983 [2024-07-25 15:25:03.035651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.983 qpair failed and we were unable to recover it. 00:29:10.983 [2024-07-25 15:25:03.036105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.983 [2024-07-25 15:25:03.036112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.983 qpair failed and we were unable to recover it. 00:29:10.983 [2024-07-25 15:25:03.036602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.983 [2024-07-25 15:25:03.036609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.983 qpair failed and we were unable to recover it. 00:29:10.983 [2024-07-25 15:25:03.037079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.983 [2024-07-25 15:25:03.037085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.983 qpair failed and we were unable to recover it. 00:29:10.983 [2024-07-25 15:25:03.037601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.983 [2024-07-25 15:25:03.037608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.983 qpair failed and we were unable to recover it. 
00:29:10.983 [2024-07-25 15:25:03.037963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.983 [2024-07-25 15:25:03.037969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.983 qpair failed and we were unable to recover it.
[The same three-line failure repeats continuously from 15:25:03.038 through 15:25:03.086: connect() fails with errno = 111 in posix_sock_create, nvme_tcp_qpair_connect_sock then reports a sock connection error for tqpair=0x7fdc90000b90 at addr=10.0.0.2, port=4420, and each attempt ends with "qpair failed and we were unable to recover it." Only the timestamps differ between repetitions.]
00:29:10.986 [2024-07-25 15:25:03.086731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.986 [2024-07-25 15:25:03.086738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.986 qpair failed and we were unable to recover it. 00:29:10.986 [2024-07-25 15:25:03.087086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.986 [2024-07-25 15:25:03.087093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.986 qpair failed and we were unable to recover it. 00:29:10.986 [2024-07-25 15:25:03.087508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.986 [2024-07-25 15:25:03.087515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.986 qpair failed and we were unable to recover it. 00:29:10.986 [2024-07-25 15:25:03.087951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.986 [2024-07-25 15:25:03.087958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.986 qpair failed and we were unable to recover it. 00:29:10.986 [2024-07-25 15:25:03.088414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.986 [2024-07-25 15:25:03.088421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.986 qpair failed and we were unable to recover it. 
00:29:10.986 [2024-07-25 15:25:03.088784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.986 [2024-07-25 15:25:03.088790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.986 qpair failed and we were unable to recover it. 00:29:10.986 [2024-07-25 15:25:03.089235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.986 [2024-07-25 15:25:03.089242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.986 qpair failed and we were unable to recover it. 00:29:10.986 [2024-07-25 15:25:03.089691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.986 [2024-07-25 15:25:03.089698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.987 qpair failed and we were unable to recover it. 00:29:10.987 [2024-07-25 15:25:03.090059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.987 [2024-07-25 15:25:03.090065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.987 qpair failed and we were unable to recover it. 00:29:10.987 [2024-07-25 15:25:03.090505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.987 [2024-07-25 15:25:03.090512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.987 qpair failed and we were unable to recover it. 
00:29:10.987 [2024-07-25 15:25:03.090964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.987 [2024-07-25 15:25:03.090971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.987 qpair failed and we were unable to recover it. 00:29:10.987 [2024-07-25 15:25:03.091499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.987 [2024-07-25 15:25:03.091527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.987 qpair failed and we were unable to recover it. 00:29:10.987 [2024-07-25 15:25:03.091890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.987 [2024-07-25 15:25:03.091899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.987 qpair failed and we were unable to recover it. 00:29:10.987 [2024-07-25 15:25:03.092347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.987 [2024-07-25 15:25:03.092355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.987 qpair failed and we were unable to recover it. 00:29:10.987 [2024-07-25 15:25:03.092603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.987 [2024-07-25 15:25:03.092610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.987 qpair failed and we were unable to recover it. 
00:29:10.987 [2024-07-25 15:25:03.093094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.987 [2024-07-25 15:25:03.093101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.987 qpair failed and we were unable to recover it. 00:29:10.987 [2024-07-25 15:25:03.093551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.987 [2024-07-25 15:25:03.093558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.987 qpair failed and we were unable to recover it. 00:29:10.987 [2024-07-25 15:25:03.093992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.987 [2024-07-25 15:25:03.093999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.987 qpair failed and we were unable to recover it. 00:29:10.987 [2024-07-25 15:25:03.094442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.987 [2024-07-25 15:25:03.094450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.987 qpair failed and we were unable to recover it. 00:29:10.987 [2024-07-25 15:25:03.094889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.987 [2024-07-25 15:25:03.094896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.987 qpair failed and we were unable to recover it. 
00:29:10.987 [2024-07-25 15:25:03.095149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.987 [2024-07-25 15:25:03.095155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.987 qpair failed and we were unable to recover it. 00:29:10.987 [2024-07-25 15:25:03.095608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.987 [2024-07-25 15:25:03.095615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.987 qpair failed and we were unable to recover it. 00:29:10.987 [2024-07-25 15:25:03.096055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.987 [2024-07-25 15:25:03.096061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.987 qpair failed and we were unable to recover it. 00:29:10.987 [2024-07-25 15:25:03.096633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.987 [2024-07-25 15:25:03.096661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.987 qpair failed and we were unable to recover it. 00:29:10.987 [2024-07-25 15:25:03.097116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.987 [2024-07-25 15:25:03.097125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.987 qpair failed and we were unable to recover it. 
00:29:10.987 [2024-07-25 15:25:03.097510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.987 [2024-07-25 15:25:03.097537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.987 qpair failed and we were unable to recover it. 00:29:10.987 [2024-07-25 15:25:03.097991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.987 [2024-07-25 15:25:03.097999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.987 qpair failed and we were unable to recover it. 00:29:10.987 [2024-07-25 15:25:03.098542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.987 [2024-07-25 15:25:03.098570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.987 qpair failed and we were unable to recover it. 00:29:10.987 [2024-07-25 15:25:03.099025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.987 [2024-07-25 15:25:03.099038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.987 qpair failed and we were unable to recover it. 00:29:10.987 [2024-07-25 15:25:03.099571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.987 [2024-07-25 15:25:03.099600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.987 qpair failed and we were unable to recover it. 
00:29:10.987 [2024-07-25 15:25:03.100080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.987 [2024-07-25 15:25:03.100089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.987 qpair failed and we were unable to recover it. 00:29:10.987 [2024-07-25 15:25:03.100313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.987 [2024-07-25 15:25:03.100320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.987 qpair failed and we were unable to recover it. 00:29:10.987 [2024-07-25 15:25:03.100818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.987 [2024-07-25 15:25:03.100824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.987 qpair failed and we were unable to recover it. 00:29:10.987 [2024-07-25 15:25:03.101137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.987 [2024-07-25 15:25:03.101145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.987 qpair failed and we were unable to recover it. 00:29:10.987 [2024-07-25 15:25:03.101587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.987 [2024-07-25 15:25:03.101594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.987 qpair failed and we were unable to recover it. 
00:29:10.987 [2024-07-25 15:25:03.101815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.987 [2024-07-25 15:25:03.101821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.987 qpair failed and we were unable to recover it. 00:29:10.987 [2024-07-25 15:25:03.102232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.987 [2024-07-25 15:25:03.102239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.987 qpair failed and we were unable to recover it. 00:29:10.987 [2024-07-25 15:25:03.102693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.987 [2024-07-25 15:25:03.102699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.987 qpair failed and we were unable to recover it. 00:29:10.987 [2024-07-25 15:25:03.103132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.987 [2024-07-25 15:25:03.103139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.987 qpair failed and we were unable to recover it. 00:29:10.988 [2024-07-25 15:25:03.103634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.988 [2024-07-25 15:25:03.103641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.988 qpair failed and we were unable to recover it. 
00:29:10.988 [2024-07-25 15:25:03.103943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.988 [2024-07-25 15:25:03.103950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.988 qpair failed and we were unable to recover it. 00:29:10.988 [2024-07-25 15:25:03.104388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.988 [2024-07-25 15:25:03.104394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.988 qpair failed and we were unable to recover it. 00:29:10.988 [2024-07-25 15:25:03.104843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.988 [2024-07-25 15:25:03.104850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.988 qpair failed and we were unable to recover it. 00:29:10.988 [2024-07-25 15:25:03.105288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.988 [2024-07-25 15:25:03.105295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.988 qpair failed and we were unable to recover it. 00:29:10.988 [2024-07-25 15:25:03.105645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.988 [2024-07-25 15:25:03.105651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.988 qpair failed and we were unable to recover it. 
00:29:10.988 [2024-07-25 15:25:03.106084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.988 [2024-07-25 15:25:03.106090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.988 qpair failed and we were unable to recover it. 00:29:10.988 [2024-07-25 15:25:03.106559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.988 [2024-07-25 15:25:03.106565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.988 qpair failed and we were unable to recover it. 00:29:10.988 [2024-07-25 15:25:03.107002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.988 [2024-07-25 15:25:03.107009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.988 qpair failed and we were unable to recover it. 00:29:10.988 [2024-07-25 15:25:03.107455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.988 [2024-07-25 15:25:03.107462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.988 qpair failed and we were unable to recover it. 00:29:10.988 [2024-07-25 15:25:03.107832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.988 [2024-07-25 15:25:03.107838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.988 qpair failed and we were unable to recover it. 
00:29:10.988 [2024-07-25 15:25:03.108277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.988 [2024-07-25 15:25:03.108284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.988 qpair failed and we were unable to recover it. 00:29:10.988 [2024-07-25 15:25:03.108750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.988 [2024-07-25 15:25:03.108756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.988 qpair failed and we were unable to recover it. 00:29:10.988 [2024-07-25 15:25:03.109076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.988 [2024-07-25 15:25:03.109083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.988 qpair failed and we were unable to recover it. 00:29:10.988 [2024-07-25 15:25:03.109560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.988 [2024-07-25 15:25:03.109568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.988 qpair failed and we were unable to recover it. 00:29:10.988 [2024-07-25 15:25:03.110067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.988 [2024-07-25 15:25:03.110075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.988 qpair failed and we were unable to recover it. 
00:29:10.988 [2024-07-25 15:25:03.110520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.988 [2024-07-25 15:25:03.110548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.988 qpair failed and we were unable to recover it. 00:29:10.988 [2024-07-25 15:25:03.110893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.988 [2024-07-25 15:25:03.110901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.988 qpair failed and we were unable to recover it. 00:29:10.988 [2024-07-25 15:25:03.111471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.988 [2024-07-25 15:25:03.111499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.988 qpair failed and we were unable to recover it. 00:29:10.988 [2024-07-25 15:25:03.111955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.988 [2024-07-25 15:25:03.111964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.988 qpair failed and we were unable to recover it. 00:29:10.988 [2024-07-25 15:25:03.112415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.988 [2024-07-25 15:25:03.112443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.988 qpair failed and we were unable to recover it. 
00:29:10.988 [2024-07-25 15:25:03.112899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.988 [2024-07-25 15:25:03.112908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.988 qpair failed and we were unable to recover it. 00:29:10.988 [2024-07-25 15:25:03.113136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.988 [2024-07-25 15:25:03.113143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.988 qpair failed and we were unable to recover it. 00:29:10.988 [2024-07-25 15:25:03.113327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.988 [2024-07-25 15:25:03.113339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.988 qpair failed and we were unable to recover it. 00:29:10.988 [2024-07-25 15:25:03.113904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.988 [2024-07-25 15:25:03.113912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.988 qpair failed and we were unable to recover it. 00:29:10.988 [2024-07-25 15:25:03.114358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.988 [2024-07-25 15:25:03.114365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.988 qpair failed and we were unable to recover it. 
00:29:10.988 [2024-07-25 15:25:03.114832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.988 [2024-07-25 15:25:03.114839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.988 qpair failed and we were unable to recover it. 00:29:10.988 [2024-07-25 15:25:03.115193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.988 [2024-07-25 15:25:03.115199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.988 qpair failed and we were unable to recover it. 00:29:10.988 [2024-07-25 15:25:03.115640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.988 [2024-07-25 15:25:03.115647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.988 qpair failed and we were unable to recover it. 00:29:10.988 [2024-07-25 15:25:03.116127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.988 [2024-07-25 15:25:03.116137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.988 qpair failed and we were unable to recover it. 00:29:10.988 [2024-07-25 15:25:03.116247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.988 [2024-07-25 15:25:03.116253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.988 qpair failed and we were unable to recover it. 
00:29:10.988 [2024-07-25 15:25:03.116475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.988 [2024-07-25 15:25:03.116481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.988 qpair failed and we were unable to recover it. 00:29:10.988 [2024-07-25 15:25:03.116837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.988 [2024-07-25 15:25:03.116845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.988 qpair failed and we were unable to recover it. 00:29:10.988 [2024-07-25 15:25:03.116934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.988 [2024-07-25 15:25:03.116943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.988 qpair failed and we were unable to recover it. 00:29:10.988 [2024-07-25 15:25:03.117438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.988 [2024-07-25 15:25:03.117446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.988 qpair failed and we were unable to recover it. 00:29:10.988 [2024-07-25 15:25:03.117888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.989 [2024-07-25 15:25:03.117895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:10.989 qpair failed and we were unable to recover it. 
00:29:10.989 [2024-07-25 15:25:03.118372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.989 [2024-07-25 15:25:03.118380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:10.989 qpair failed and we were unable to recover it.
[... the same triplet — posix_sock_create connect() failed with errno = 111, nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420, "qpair failed and we were unable to recover it." — repeats continuously from 15:25:03.118 through 15:25:03.171; repeats elided ...]
00:29:11.262 [2024-07-25 15:25:03.171057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.262 [2024-07-25 15:25:03.171064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.262 qpair failed and we were unable to recover it.
00:29:11.262 [2024-07-25 15:25:03.171503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.262 [2024-07-25 15:25:03.171509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.262 qpair failed and we were unable to recover it. 00:29:11.262 [2024-07-25 15:25:03.171982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.262 [2024-07-25 15:25:03.171989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.262 qpair failed and we were unable to recover it. 00:29:11.262 [2024-07-25 15:25:03.172424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.262 [2024-07-25 15:25:03.172431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.262 qpair failed and we were unable to recover it. 00:29:11.262 [2024-07-25 15:25:03.172672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.262 [2024-07-25 15:25:03.172679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.262 qpair failed and we were unable to recover it. 00:29:11.262 [2024-07-25 15:25:03.173162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.262 [2024-07-25 15:25:03.173169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.262 qpair failed and we were unable to recover it. 
00:29:11.262 [2024-07-25 15:25:03.173605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.262 [2024-07-25 15:25:03.173612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.262 qpair failed and we were unable to recover it. 00:29:11.262 [2024-07-25 15:25:03.173970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.262 [2024-07-25 15:25:03.173977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.262 qpair failed and we were unable to recover it. 00:29:11.262 [2024-07-25 15:25:03.174548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.262 [2024-07-25 15:25:03.174577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.262 qpair failed and we were unable to recover it. 00:29:11.262 [2024-07-25 15:25:03.175064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.262 [2024-07-25 15:25:03.175073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.262 qpair failed and we were unable to recover it. 00:29:11.262 [2024-07-25 15:25:03.175666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.262 [2024-07-25 15:25:03.175694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.262 qpair failed and we were unable to recover it. 
00:29:11.262 [2024-07-25 15:25:03.176148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.262 [2024-07-25 15:25:03.176156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.263 qpair failed and we were unable to recover it. 00:29:11.263 [2024-07-25 15:25:03.176699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.263 [2024-07-25 15:25:03.176726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.263 qpair failed and we were unable to recover it. 00:29:11.263 [2024-07-25 15:25:03.177180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.263 [2024-07-25 15:25:03.177189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.263 qpair failed and we were unable to recover it. 00:29:11.263 [2024-07-25 15:25:03.177530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.263 [2024-07-25 15:25:03.177557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.263 qpair failed and we were unable to recover it. 00:29:11.263 [2024-07-25 15:25:03.177957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.263 [2024-07-25 15:25:03.177965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.263 qpair failed and we were unable to recover it. 
00:29:11.263 [2024-07-25 15:25:03.178504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.263 [2024-07-25 15:25:03.178533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.263 qpair failed and we were unable to recover it. 00:29:11.263 [2024-07-25 15:25:03.178978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.263 [2024-07-25 15:25:03.178987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.263 qpair failed and we were unable to recover it. 00:29:11.263 [2024-07-25 15:25:03.179402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.263 [2024-07-25 15:25:03.179430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.263 qpair failed and we were unable to recover it. 00:29:11.263 [2024-07-25 15:25:03.179885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.263 [2024-07-25 15:25:03.179894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.263 qpair failed and we were unable to recover it. 00:29:11.263 [2024-07-25 15:25:03.180412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.263 [2024-07-25 15:25:03.180440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.263 qpair failed and we were unable to recover it. 
00:29:11.263 [2024-07-25 15:25:03.180972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.263 [2024-07-25 15:25:03.180980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.263 qpair failed and we were unable to recover it. 00:29:11.263 [2024-07-25 15:25:03.181506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.263 [2024-07-25 15:25:03.181534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.263 qpair failed and we were unable to recover it. 00:29:11.263 [2024-07-25 15:25:03.181993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.263 [2024-07-25 15:25:03.182002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.263 qpair failed and we were unable to recover it. 00:29:11.263 [2024-07-25 15:25:03.182413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.263 [2024-07-25 15:25:03.182444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.263 qpair failed and we were unable to recover it. 00:29:11.263 [2024-07-25 15:25:03.182973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.263 [2024-07-25 15:25:03.182981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.263 qpair failed and we were unable to recover it. 
00:29:11.263 [2024-07-25 15:25:03.183189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.263 [2024-07-25 15:25:03.183208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.263 qpair failed and we were unable to recover it. 00:29:11.263 [2024-07-25 15:25:03.183667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.263 [2024-07-25 15:25:03.183674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.263 qpair failed and we were unable to recover it. 00:29:11.263 [2024-07-25 15:25:03.184119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.263 [2024-07-25 15:25:03.184125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.263 qpair failed and we were unable to recover it. 00:29:11.263 [2024-07-25 15:25:03.184576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.263 [2024-07-25 15:25:03.184583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.263 qpair failed and we were unable to recover it. 00:29:11.263 [2024-07-25 15:25:03.185065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.263 [2024-07-25 15:25:03.185072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.263 qpair failed and we were unable to recover it. 
00:29:11.263 [2024-07-25 15:25:03.185491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.263 [2024-07-25 15:25:03.185519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.263 qpair failed and we were unable to recover it. 00:29:11.263 [2024-07-25 15:25:03.186010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.263 [2024-07-25 15:25:03.186019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.263 qpair failed and we were unable to recover it. 00:29:11.263 [2024-07-25 15:25:03.186622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.263 [2024-07-25 15:25:03.186649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.263 qpair failed and we were unable to recover it. 00:29:11.263 [2024-07-25 15:25:03.186901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.263 [2024-07-25 15:25:03.186910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.263 qpair failed and we were unable to recover it. 00:29:11.263 [2024-07-25 15:25:03.187460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.263 [2024-07-25 15:25:03.187488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.263 qpair failed and we were unable to recover it. 
00:29:11.263 [2024-07-25 15:25:03.187717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.263 [2024-07-25 15:25:03.187725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.263 qpair failed and we were unable to recover it. 00:29:11.263 [2024-07-25 15:25:03.188155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.263 [2024-07-25 15:25:03.188161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.263 qpair failed and we were unable to recover it. 00:29:11.263 [2024-07-25 15:25:03.188603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.263 [2024-07-25 15:25:03.188611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.263 qpair failed and we were unable to recover it. 00:29:11.263 [2024-07-25 15:25:03.189051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.263 [2024-07-25 15:25:03.189057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.263 qpair failed and we were unable to recover it. 00:29:11.263 [2024-07-25 15:25:03.189465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.263 [2024-07-25 15:25:03.189492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.263 qpair failed and we were unable to recover it. 
00:29:11.263 [2024-07-25 15:25:03.189766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.263 [2024-07-25 15:25:03.189775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.263 qpair failed and we were unable to recover it. 00:29:11.263 [2024-07-25 15:25:03.190233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.263 [2024-07-25 15:25:03.190241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.263 qpair failed and we were unable to recover it. 00:29:11.263 [2024-07-25 15:25:03.190709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.263 [2024-07-25 15:25:03.190716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.263 qpair failed and we were unable to recover it. 00:29:11.263 [2024-07-25 15:25:03.191155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.263 [2024-07-25 15:25:03.191162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.263 qpair failed and we were unable to recover it. 00:29:11.263 [2024-07-25 15:25:03.191530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.263 [2024-07-25 15:25:03.191537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.263 qpair failed and we were unable to recover it. 
00:29:11.263 [2024-07-25 15:25:03.192011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.263 [2024-07-25 15:25:03.192017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.263 qpair failed and we were unable to recover it. 00:29:11.264 [2024-07-25 15:25:03.192461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.264 [2024-07-25 15:25:03.192488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.264 qpair failed and we were unable to recover it. 00:29:11.264 [2024-07-25 15:25:03.192740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.264 [2024-07-25 15:25:03.192748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.264 qpair failed and we were unable to recover it. 00:29:11.264 [2024-07-25 15:25:03.193237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.264 [2024-07-25 15:25:03.193244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.264 qpair failed and we were unable to recover it. 00:29:11.264 [2024-07-25 15:25:03.193686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.264 [2024-07-25 15:25:03.193693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.264 qpair failed and we were unable to recover it. 
00:29:11.264 [2024-07-25 15:25:03.194131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.264 [2024-07-25 15:25:03.194137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.264 qpair failed and we were unable to recover it. 00:29:11.264 [2024-07-25 15:25:03.194596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.264 [2024-07-25 15:25:03.194603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.264 qpair failed and we were unable to recover it. 00:29:11.264 [2024-07-25 15:25:03.194926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.264 [2024-07-25 15:25:03.194933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.264 qpair failed and we were unable to recover it. 00:29:11.264 [2024-07-25 15:25:03.195289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.264 [2024-07-25 15:25:03.195296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.264 qpair failed and we were unable to recover it. 00:29:11.264 [2024-07-25 15:25:03.195757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.264 [2024-07-25 15:25:03.195763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.264 qpair failed and we were unable to recover it. 
00:29:11.264 [2024-07-25 15:25:03.196282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.264 [2024-07-25 15:25:03.196288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.264 qpair failed and we were unable to recover it. 00:29:11.264 [2024-07-25 15:25:03.196748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.264 [2024-07-25 15:25:03.196755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.264 qpair failed and we were unable to recover it. 00:29:11.264 [2024-07-25 15:25:03.197187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.264 [2024-07-25 15:25:03.197194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.264 qpair failed and we were unable to recover it. 00:29:11.264 [2024-07-25 15:25:03.197512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.264 [2024-07-25 15:25:03.197518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.264 qpair failed and we were unable to recover it. 00:29:11.264 [2024-07-25 15:25:03.197961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.264 [2024-07-25 15:25:03.197968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.264 qpair failed and we were unable to recover it. 
00:29:11.264 [2024-07-25 15:25:03.198445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.264 [2024-07-25 15:25:03.198452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.264 qpair failed and we were unable to recover it. 00:29:11.264 [2024-07-25 15:25:03.198941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.264 [2024-07-25 15:25:03.198947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.264 qpair failed and we were unable to recover it. 00:29:11.264 [2024-07-25 15:25:03.199411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.264 [2024-07-25 15:25:03.199439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.264 qpair failed and we were unable to recover it. 00:29:11.264 [2024-07-25 15:25:03.199923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.264 [2024-07-25 15:25:03.199931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.264 qpair failed and we were unable to recover it. 00:29:11.264 [2024-07-25 15:25:03.200179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.264 [2024-07-25 15:25:03.200186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.264 qpair failed and we were unable to recover it. 
00:29:11.264 [2024-07-25 15:25:03.200639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.264 [2024-07-25 15:25:03.200649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.264 qpair failed and we were unable to recover it. 00:29:11.264 [2024-07-25 15:25:03.200988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.264 [2024-07-25 15:25:03.200994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.264 qpair failed and we were unable to recover it. 00:29:11.264 [2024-07-25 15:25:03.201525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.264 [2024-07-25 15:25:03.201552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.264 qpair failed and we were unable to recover it. 00:29:11.264 [2024-07-25 15:25:03.201807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.264 [2024-07-25 15:25:03.201815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.264 qpair failed and we were unable to recover it. 00:29:11.264 [2024-07-25 15:25:03.202250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.264 [2024-07-25 15:25:03.202258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.264 qpair failed and we were unable to recover it. 
00:29:11.264 [2024-07-25 15:25:03.202717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.264 [2024-07-25 15:25:03.202724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.264 qpair failed and we were unable to recover it. 00:29:11.264 [2024-07-25 15:25:03.203037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.264 [2024-07-25 15:25:03.203045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.264 qpair failed and we were unable to recover it. 00:29:11.264 [2024-07-25 15:25:03.203535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.264 [2024-07-25 15:25:03.203541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.264 qpair failed and we were unable to recover it. 00:29:11.264 [2024-07-25 15:25:03.203770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.264 [2024-07-25 15:25:03.203776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.264 qpair failed and we were unable to recover it. 00:29:11.264 [2024-07-25 15:25:03.204239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.264 [2024-07-25 15:25:03.204246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.264 qpair failed and we were unable to recover it. 
00:29:11.264 [2024-07-25 15:25:03.204690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.264 [2024-07-25 15:25:03.204696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.264 qpair failed and we were unable to recover it.
00:29:11.264 [2024-07-25 15:25:03.205129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.264 [2024-07-25 15:25:03.205137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.264 qpair failed and we were unable to recover it.
00:29:11.264 [2024-07-25 15:25:03.205596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.264 [2024-07-25 15:25:03.205602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.264 qpair failed and we were unable to recover it.
00:29:11.264 [2024-07-25 15:25:03.205845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.264 [2024-07-25 15:25:03.205851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.264 qpair failed and we were unable to recover it.
00:29:11.264 [2024-07-25 15:25:03.206093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.264 [2024-07-25 15:25:03.206100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.264 qpair failed and we were unable to recover it.
00:29:11.264 [2024-07-25 15:25:03.206465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.264 [2024-07-25 15:25:03.206472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.265 qpair failed and we were unable to recover it.
00:29:11.265 [2024-07-25 15:25:03.206902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.265 [2024-07-25 15:25:03.206909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.265 qpair failed and we were unable to recover it.
00:29:11.265 [2024-07-25 15:25:03.207425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.265 [2024-07-25 15:25:03.207432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.265 qpair failed and we were unable to recover it.
00:29:11.265 [2024-07-25 15:25:03.207816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.265 [2024-07-25 15:25:03.207822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.265 qpair failed and we were unable to recover it.
00:29:11.265 [2024-07-25 15:25:03.208272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.265 [2024-07-25 15:25:03.208278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.265 qpair failed and we were unable to recover it.
00:29:11.265 [2024-07-25 15:25:03.208640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.265 [2024-07-25 15:25:03.208646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.265 qpair failed and we were unable to recover it.
00:29:11.265 [2024-07-25 15:25:03.209102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.265 [2024-07-25 15:25:03.209108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.265 qpair failed and we were unable to recover it.
00:29:11.265 [2024-07-25 15:25:03.209422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.265 [2024-07-25 15:25:03.209429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.265 qpair failed and we were unable to recover it.
00:29:11.265 [2024-07-25 15:25:03.209912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.265 [2024-07-25 15:25:03.209919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.265 qpair failed and we were unable to recover it.
00:29:11.265 [2024-07-25 15:25:03.210351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.265 [2024-07-25 15:25:03.210358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.265 qpair failed and we were unable to recover it.
00:29:11.265 [2024-07-25 15:25:03.210836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.265 [2024-07-25 15:25:03.210842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.265 qpair failed and we were unable to recover it.
00:29:11.265 [2024-07-25 15:25:03.211276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.265 [2024-07-25 15:25:03.211282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.265 qpair failed and we were unable to recover it.
00:29:11.265 [2024-07-25 15:25:03.211754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.265 [2024-07-25 15:25:03.211760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.265 qpair failed and we were unable to recover it.
00:29:11.265 [2024-07-25 15:25:03.212235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.265 [2024-07-25 15:25:03.212243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.265 qpair failed and we were unable to recover it.
00:29:11.265 [2024-07-25 15:25:03.212697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.265 [2024-07-25 15:25:03.212704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.265 qpair failed and we were unable to recover it.
00:29:11.265 [2024-07-25 15:25:03.213138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.265 [2024-07-25 15:25:03.213145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.265 qpair failed and we were unable to recover it.
00:29:11.265 [2024-07-25 15:25:03.213592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.265 [2024-07-25 15:25:03.213598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.265 qpair failed and we were unable to recover it.
00:29:11.265 [2024-07-25 15:25:03.214032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.265 [2024-07-25 15:25:03.214038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.265 qpair failed and we were unable to recover it.
00:29:11.265 [2024-07-25 15:25:03.214252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.265 [2024-07-25 15:25:03.214265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.265 qpair failed and we were unable to recover it.
00:29:11.265 [2024-07-25 15:25:03.214689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.265 [2024-07-25 15:25:03.214696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.265 qpair failed and we were unable to recover it.
00:29:11.265 [2024-07-25 15:25:03.215177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.265 [2024-07-25 15:25:03.215183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.265 qpair failed and we were unable to recover it.
00:29:11.265 [2024-07-25 15:25:03.215700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.265 [2024-07-25 15:25:03.215707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.265 qpair failed and we were unable to recover it.
00:29:11.265 [2024-07-25 15:25:03.215926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.265 [2024-07-25 15:25:03.215933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.265 qpair failed and we were unable to recover it.
00:29:11.265 [2024-07-25 15:25:03.216211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.265 [2024-07-25 15:25:03.216221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.265 qpair failed and we were unable to recover it.
00:29:11.265 [2024-07-25 15:25:03.216704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.265 [2024-07-25 15:25:03.216710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.265 qpair failed and we were unable to recover it.
00:29:11.265 [2024-07-25 15:25:03.217125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.265 [2024-07-25 15:25:03.217134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.265 qpair failed and we were unable to recover it.
00:29:11.265 [2024-07-25 15:25:03.217578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.265 [2024-07-25 15:25:03.217585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.265 qpair failed and we were unable to recover it.
00:29:11.265 [2024-07-25 15:25:03.218017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.265 [2024-07-25 15:25:03.218023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.265 qpair failed and we were unable to recover it.
00:29:11.265 [2024-07-25 15:25:03.218561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.265 [2024-07-25 15:25:03.218589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.265 qpair failed and we were unable to recover it.
00:29:11.265 [2024-07-25 15:25:03.219055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.265 [2024-07-25 15:25:03.219064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.265 qpair failed and we were unable to recover it.
00:29:11.265 [2024-07-25 15:25:03.219432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.265 [2024-07-25 15:25:03.219459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.265 qpair failed and we were unable to recover it.
00:29:11.265 [2024-07-25 15:25:03.219941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.265 [2024-07-25 15:25:03.219950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.265 qpair failed and we were unable to recover it.
00:29:11.265 [2024-07-25 15:25:03.220528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.265 [2024-07-25 15:25:03.220555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.265 qpair failed and we were unable to recover it.
00:29:11.265 [2024-07-25 15:25:03.221010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.265 [2024-07-25 15:25:03.221019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.265 qpair failed and we were unable to recover it.
00:29:11.265 [2024-07-25 15:25:03.221429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.265 [2024-07-25 15:25:03.221457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.265 qpair failed and we were unable to recover it.
00:29:11.265 [2024-07-25 15:25:03.221684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.265 [2024-07-25 15:25:03.221692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.266 qpair failed and we were unable to recover it.
00:29:11.266 [2024-07-25 15:25:03.222175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.266 [2024-07-25 15:25:03.222182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.266 qpair failed and we were unable to recover it.
00:29:11.266 [2024-07-25 15:25:03.222621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.266 [2024-07-25 15:25:03.222628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.266 qpair failed and we were unable to recover it.
00:29:11.266 [2024-07-25 15:25:03.222984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.266 [2024-07-25 15:25:03.222991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.266 qpair failed and we were unable to recover it.
00:29:11.266 [2024-07-25 15:25:03.223566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.266 [2024-07-25 15:25:03.223594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.266 qpair failed and we were unable to recover it.
00:29:11.266 [2024-07-25 15:25:03.223961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.266 [2024-07-25 15:25:03.223970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.266 qpair failed and we were unable to recover it.
00:29:11.266 [2024-07-25 15:25:03.224523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.266 [2024-07-25 15:25:03.224550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.266 qpair failed and we were unable to recover it.
00:29:11.266 [2024-07-25 15:25:03.225012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.266 [2024-07-25 15:25:03.225020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.266 qpair failed and we were unable to recover it.
00:29:11.266 [2024-07-25 15:25:03.225558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.266 [2024-07-25 15:25:03.225586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.266 qpair failed and we were unable to recover it.
00:29:11.266 [2024-07-25 15:25:03.226073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.266 [2024-07-25 15:25:03.226081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.266 qpair failed and we were unable to recover it.
00:29:11.266 [2024-07-25 15:25:03.226708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.266 [2024-07-25 15:25:03.226735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.266 qpair failed and we were unable to recover it.
00:29:11.266 [2024-07-25 15:25:03.227196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.266 [2024-07-25 15:25:03.227216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.266 qpair failed and we were unable to recover it.
00:29:11.266 [2024-07-25 15:25:03.227775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.266 [2024-07-25 15:25:03.227802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.266 qpair failed and we were unable to recover it.
00:29:11.266 [2024-07-25 15:25:03.228404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.266 [2024-07-25 15:25:03.228431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.266 qpair failed and we were unable to recover it.
00:29:11.266 [2024-07-25 15:25:03.228883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.266 [2024-07-25 15:25:03.228891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.266 qpair failed and we were unable to recover it.
00:29:11.266 [2024-07-25 15:25:03.229447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.266 [2024-07-25 15:25:03.229474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.266 qpair failed and we were unable to recover it.
00:29:11.266 [2024-07-25 15:25:03.230007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.266 [2024-07-25 15:25:03.230015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.266 qpair failed and we were unable to recover it.
00:29:11.266 [2024-07-25 15:25:03.230567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.266 [2024-07-25 15:25:03.230594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.266 qpair failed and we were unable to recover it.
00:29:11.266 [2024-07-25 15:25:03.231052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.266 [2024-07-25 15:25:03.231061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.266 qpair failed and we were unable to recover it.
00:29:11.266 [2024-07-25 15:25:03.231626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.266 [2024-07-25 15:25:03.231654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.266 qpair failed and we were unable to recover it.
00:29:11.266 [2024-07-25 15:25:03.232108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.266 [2024-07-25 15:25:03.232117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.266 qpair failed and we were unable to recover it.
00:29:11.266 [2024-07-25 15:25:03.232674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.266 [2024-07-25 15:25:03.232701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.266 qpair failed and we were unable to recover it.
00:29:11.266 [2024-07-25 15:25:03.233158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.266 [2024-07-25 15:25:03.233167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.266 qpair failed and we were unable to recover it.
00:29:11.266 [2024-07-25 15:25:03.233738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.266 [2024-07-25 15:25:03.233765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.266 qpair failed and we were unable to recover it.
00:29:11.266 [2024-07-25 15:25:03.234222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.266 [2024-07-25 15:25:03.234240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.266 qpair failed and we were unable to recover it.
00:29:11.266 [2024-07-25 15:25:03.234770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.266 [2024-07-25 15:25:03.234777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.266 qpair failed and we were unable to recover it.
00:29:11.266 [2024-07-25 15:25:03.235412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.266 [2024-07-25 15:25:03.235440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.266 qpair failed and we were unable to recover it.
00:29:11.266 [2024-07-25 15:25:03.235898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.266 [2024-07-25 15:25:03.235906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.266 qpair failed and we were unable to recover it.
00:29:11.266 [2024-07-25 15:25:03.236131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.266 [2024-07-25 15:25:03.236138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.266 qpair failed and we were unable to recover it.
00:29:11.266 [2024-07-25 15:25:03.236643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.267 [2024-07-25 15:25:03.236650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.267 qpair failed and we were unable to recover it.
00:29:11.267 [2024-07-25 15:25:03.236954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.267 [2024-07-25 15:25:03.236963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.267 qpair failed and we were unable to recover it.
00:29:11.267 [2024-07-25 15:25:03.237495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.267 [2024-07-25 15:25:03.237523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.267 qpair failed and we were unable to recover it.
00:29:11.267 [2024-07-25 15:25:03.238008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.267 [2024-07-25 15:25:03.238016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.267 qpair failed and we were unable to recover it.
00:29:11.267 [2024-07-25 15:25:03.238243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.267 [2024-07-25 15:25:03.238257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.267 qpair failed and we were unable to recover it.
00:29:11.267 [2024-07-25 15:25:03.238709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.267 [2024-07-25 15:25:03.238716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.267 qpair failed and we were unable to recover it.
00:29:11.267 [2024-07-25 15:25:03.239149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.267 [2024-07-25 15:25:03.239156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.267 qpair failed and we were unable to recover it.
00:29:11.267 [2024-07-25 15:25:03.239398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.267 [2024-07-25 15:25:03.239405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.267 qpair failed and we were unable to recover it.
00:29:11.267 [2024-07-25 15:25:03.239803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.267 [2024-07-25 15:25:03.239810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.267 qpair failed and we were unable to recover it.
00:29:11.267 [2024-07-25 15:25:03.240095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.267 [2024-07-25 15:25:03.240102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.267 qpair failed and we were unable to recover it.
00:29:11.267 [2024-07-25 15:25:03.240561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.267 [2024-07-25 15:25:03.240568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.267 qpair failed and we were unable to recover it.
00:29:11.267 [2024-07-25 15:25:03.241006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.267 [2024-07-25 15:25:03.241012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.267 qpair failed and we were unable to recover it.
00:29:11.267 [2024-07-25 15:25:03.241447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.267 [2024-07-25 15:25:03.241454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.267 qpair failed and we were unable to recover it.
00:29:11.267 [2024-07-25 15:25:03.241906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.267 [2024-07-25 15:25:03.241912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.267 qpair failed and we were unable to recover it.
00:29:11.267 [2024-07-25 15:25:03.242383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.267 [2024-07-25 15:25:03.242410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.267 qpair failed and we were unable to recover it.
00:29:11.267 [2024-07-25 15:25:03.242923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.267 [2024-07-25 15:25:03.242932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.267 qpair failed and we were unable to recover it.
00:29:11.267 [2024-07-25 15:25:03.243519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.267 [2024-07-25 15:25:03.243548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.267 qpair failed and we were unable to recover it.
00:29:11.267 [2024-07-25 15:25:03.243913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.267 [2024-07-25 15:25:03.243922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.267 qpair failed and we were unable to recover it.
00:29:11.267 [2024-07-25 15:25:03.244378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.267 [2024-07-25 15:25:03.244385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.267 qpair failed and we were unable to recover it.
00:29:11.267 [2024-07-25 15:25:03.244879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.267 [2024-07-25 15:25:03.244886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.267 qpair failed and we were unable to recover it.
00:29:11.267 [2024-07-25 15:25:03.245439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.267 [2024-07-25 15:25:03.245466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.267 qpair failed and we were unable to recover it.
00:29:11.267 [2024-07-25 15:25:03.245918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.267 [2024-07-25 15:25:03.245927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.267 qpair failed and we were unable to recover it.
00:29:11.267 [2024-07-25 15:25:03.246399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.267 [2024-07-25 15:25:03.246427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.267 qpair failed and we were unable to recover it.
00:29:11.267 [2024-07-25 15:25:03.246793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.267 [2024-07-25 15:25:03.246802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.267 qpair failed and we were unable to recover it.
00:29:11.267 [2024-07-25 15:25:03.247280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.267 [2024-07-25 15:25:03.247287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.267 qpair failed and we were unable to recover it.
00:29:11.267 [2024-07-25 15:25:03.247704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.267 [2024-07-25 15:25:03.247711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.267 qpair failed and we were unable to recover it.
00:29:11.267 [2024-07-25 15:25:03.248042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.267 [2024-07-25 15:25:03.248049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.267 qpair failed and we were unable to recover it.
00:29:11.267 [2024-07-25 15:25:03.248503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.267 [2024-07-25 15:25:03.248509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.267 qpair failed and we were unable to recover it.
00:29:11.267 [2024-07-25 15:25:03.248922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.267 [2024-07-25 15:25:03.248929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.267 qpair failed and we were unable to recover it.
00:29:11.267 [2024-07-25 15:25:03.249151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.267 [2024-07-25 15:25:03.249159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.267 qpair failed and we were unable to recover it.
00:29:11.267 [2024-07-25 15:25:03.249380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.267 [2024-07-25 15:25:03.249387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.267 qpair failed and we were unable to recover it.
00:29:11.267 [2024-07-25 15:25:03.249623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.267 [2024-07-25 15:25:03.249631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.267 qpair failed and we were unable to recover it.
00:29:11.267 [2024-07-25 15:25:03.250092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.267 [2024-07-25 15:25:03.250098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.267 qpair failed and we were unable to recover it.
00:29:11.267 [2024-07-25 15:25:03.250562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.267 [2024-07-25 15:25:03.250569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.267 qpair failed and we were unable to recover it.
00:29:11.267 [2024-07-25 15:25:03.250820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.267 [2024-07-25 15:25:03.250827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.267 qpair failed and we were unable to recover it.
00:29:11.267 [2024-07-25 15:25:03.251298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.267 [2024-07-25 15:25:03.251305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.268 qpair failed and we were unable to recover it.
00:29:11.268 [2024-07-25 15:25:03.251771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.268 [2024-07-25 15:25:03.251778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.268 qpair failed and we were unable to recover it.
00:29:11.268 [2024-07-25 15:25:03.252209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.268 [2024-07-25 15:25:03.252217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.268 qpair failed and we were unable to recover it.
00:29:11.268 [2024-07-25 15:25:03.252662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.268 [2024-07-25 15:25:03.252669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.268 qpair failed and we were unable to recover it.
00:29:11.268 [2024-07-25 15:25:03.253104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.268 [2024-07-25 15:25:03.253110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.268 qpair failed and we were unable to recover it.
00:29:11.268 [2024-07-25 15:25:03.253415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.268 [2024-07-25 15:25:03.253422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.268 qpair failed and we were unable to recover it.
00:29:11.268 [2024-07-25 15:25:03.253875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.268 [2024-07-25 15:25:03.253884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.268 qpair failed and we were unable to recover it.
00:29:11.268 [2024-07-25 15:25:03.254364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.268 [2024-07-25 15:25:03.254371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.268 qpair failed and we were unable to recover it.
00:29:11.268 [2024-07-25 15:25:03.254851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.268 [2024-07-25 15:25:03.254857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.268 qpair failed and we were unable to recover it.
00:29:11.268 [2024-07-25 15:25:03.255306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.268 [2024-07-25 15:25:03.255313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.268 qpair failed and we were unable to recover it.
00:29:11.268 [2024-07-25 15:25:03.255577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.268 [2024-07-25 15:25:03.255583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.268 qpair failed and we were unable to recover it.
00:29:11.268 [2024-07-25 15:25:03.255827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.268 [2024-07-25 15:25:03.255833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.268 qpair failed and we were unable to recover it. 00:29:11.268 [2024-07-25 15:25:03.256168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.268 [2024-07-25 15:25:03.256175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.268 qpair failed and we were unable to recover it. 00:29:11.268 [2024-07-25 15:25:03.256413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.268 [2024-07-25 15:25:03.256420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.268 qpair failed and we were unable to recover it. 00:29:11.268 [2024-07-25 15:25:03.256917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.268 [2024-07-25 15:25:03.256924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.268 qpair failed and we were unable to recover it. 00:29:11.268 [2024-07-25 15:25:03.257356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.268 [2024-07-25 15:25:03.257362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.268 qpair failed and we were unable to recover it. 
00:29:11.268 [2024-07-25 15:25:03.257593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.268 [2024-07-25 15:25:03.257599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.268 qpair failed and we were unable to recover it. 00:29:11.268 [2024-07-25 15:25:03.258036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.268 [2024-07-25 15:25:03.258043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.268 qpair failed and we were unable to recover it. 00:29:11.268 [2024-07-25 15:25:03.258481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.268 [2024-07-25 15:25:03.258488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.268 qpair failed and we were unable to recover it. 00:29:11.268 [2024-07-25 15:25:03.258710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.268 [2024-07-25 15:25:03.258716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.268 qpair failed and we were unable to recover it. 00:29:11.268 [2024-07-25 15:25:03.259169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.268 [2024-07-25 15:25:03.259175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.268 qpair failed and we were unable to recover it. 
00:29:11.268 [2024-07-25 15:25:03.259420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.268 [2024-07-25 15:25:03.259428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.268 qpair failed and we were unable to recover it. 00:29:11.268 [2024-07-25 15:25:03.259863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.268 [2024-07-25 15:25:03.259871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.268 qpair failed and we were unable to recover it. 00:29:11.268 [2024-07-25 15:25:03.260323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.268 [2024-07-25 15:25:03.260330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.268 qpair failed and we were unable to recover it. 00:29:11.268 [2024-07-25 15:25:03.260793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.268 [2024-07-25 15:25:03.260800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.268 qpair failed and we were unable to recover it. 00:29:11.268 [2024-07-25 15:25:03.261276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.268 [2024-07-25 15:25:03.261283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.268 qpair failed and we were unable to recover it. 
00:29:11.268 [2024-07-25 15:25:03.261727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.268 [2024-07-25 15:25:03.261734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.268 qpair failed and we were unable to recover it. 00:29:11.268 [2024-07-25 15:25:03.262213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.268 [2024-07-25 15:25:03.262220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.268 qpair failed and we were unable to recover it. 00:29:11.268 [2024-07-25 15:25:03.262653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.268 [2024-07-25 15:25:03.262660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.268 qpair failed and we were unable to recover it. 00:29:11.268 [2024-07-25 15:25:03.263027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.268 [2024-07-25 15:25:03.263033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.268 qpair failed and we were unable to recover it. 00:29:11.268 [2024-07-25 15:25:03.263496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.268 [2024-07-25 15:25:03.263503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.268 qpair failed and we were unable to recover it. 
00:29:11.268 [2024-07-25 15:25:03.263949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.268 [2024-07-25 15:25:03.263956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.268 qpair failed and we were unable to recover it. 00:29:11.268 [2024-07-25 15:25:03.264264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.268 [2024-07-25 15:25:03.264271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.268 qpair failed and we were unable to recover it. 00:29:11.268 [2024-07-25 15:25:03.264714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.268 [2024-07-25 15:25:03.264721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.268 qpair failed and we were unable to recover it. 00:29:11.268 [2024-07-25 15:25:03.265153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.268 [2024-07-25 15:25:03.265160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.268 qpair failed and we were unable to recover it. 00:29:11.268 [2024-07-25 15:25:03.265520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.269 [2024-07-25 15:25:03.265527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.269 qpair failed and we were unable to recover it. 
00:29:11.269 [2024-07-25 15:25:03.265955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.269 [2024-07-25 15:25:03.265962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.269 qpair failed and we were unable to recover it. 00:29:11.269 [2024-07-25 15:25:03.266405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.269 [2024-07-25 15:25:03.266412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.269 qpair failed and we were unable to recover it. 00:29:11.269 [2024-07-25 15:25:03.266851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.269 [2024-07-25 15:25:03.266858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.269 qpair failed and we were unable to recover it. 00:29:11.269 [2024-07-25 15:25:03.267418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.269 [2024-07-25 15:25:03.267445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.269 qpair failed and we were unable to recover it. 00:29:11.269 [2024-07-25 15:25:03.267934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.269 [2024-07-25 15:25:03.267943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.269 qpair failed and we were unable to recover it. 
00:29:11.269 [2024-07-25 15:25:03.268481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.269 [2024-07-25 15:25:03.268508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.269 qpair failed and we were unable to recover it. 00:29:11.269 [2024-07-25 15:25:03.269047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.269 [2024-07-25 15:25:03.269055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.269 qpair failed and we were unable to recover it. 00:29:11.269 [2024-07-25 15:25:03.269500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.269 [2024-07-25 15:25:03.269528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.269 qpair failed and we were unable to recover it. 00:29:11.269 [2024-07-25 15:25:03.269980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.269 [2024-07-25 15:25:03.269989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.269 qpair failed and we were unable to recover it. 00:29:11.269 [2024-07-25 15:25:03.270566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.269 [2024-07-25 15:25:03.270593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.269 qpair failed and we were unable to recover it. 
00:29:11.269 [2024-07-25 15:25:03.270948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.269 [2024-07-25 15:25:03.270960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.269 qpair failed and we were unable to recover it. 00:29:11.269 [2024-07-25 15:25:03.271211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.269 [2024-07-25 15:25:03.271219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.269 qpair failed and we were unable to recover it. 00:29:11.269 [2024-07-25 15:25:03.271733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.269 [2024-07-25 15:25:03.271740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.269 qpair failed and we were unable to recover it. 00:29:11.269 [2024-07-25 15:25:03.272176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.269 [2024-07-25 15:25:03.272183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.269 qpair failed and we were unable to recover it. 00:29:11.269 [2024-07-25 15:25:03.272751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.269 [2024-07-25 15:25:03.272780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.269 qpair failed and we were unable to recover it. 
00:29:11.269 [2024-07-25 15:25:03.273413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.269 [2024-07-25 15:25:03.273440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.269 qpair failed and we were unable to recover it. 00:29:11.269 [2024-07-25 15:25:03.273966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.269 [2024-07-25 15:25:03.273975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.269 qpair failed and we were unable to recover it. 00:29:11.269 [2024-07-25 15:25:03.274553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.269 [2024-07-25 15:25:03.274581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.269 qpair failed and we were unable to recover it. 00:29:11.269 [2024-07-25 15:25:03.274836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.269 [2024-07-25 15:25:03.274844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.269 qpair failed and we were unable to recover it. 00:29:11.269 [2024-07-25 15:25:03.275395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.269 [2024-07-25 15:25:03.275422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.269 qpair failed and we were unable to recover it. 
00:29:11.269 [2024-07-25 15:25:03.275904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.269 [2024-07-25 15:25:03.275913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.269 qpair failed and we were unable to recover it. 00:29:11.269 [2024-07-25 15:25:03.276447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.269 [2024-07-25 15:25:03.276475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.269 qpair failed and we were unable to recover it. 00:29:11.269 [2024-07-25 15:25:03.276931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.269 [2024-07-25 15:25:03.276940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.269 qpair failed and we were unable to recover it. 00:29:11.269 [2024-07-25 15:25:03.277483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.269 [2024-07-25 15:25:03.277511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.269 qpair failed and we were unable to recover it. 00:29:11.269 [2024-07-25 15:25:03.278003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.269 [2024-07-25 15:25:03.278012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.269 qpair failed and we were unable to recover it. 
00:29:11.269 [2024-07-25 15:25:03.278426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.269 [2024-07-25 15:25:03.278453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.269 qpair failed and we were unable to recover it. 00:29:11.269 [2024-07-25 15:25:03.278906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.269 [2024-07-25 15:25:03.278915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.269 qpair failed and we were unable to recover it. 00:29:11.269 [2024-07-25 15:25:03.279545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.269 [2024-07-25 15:25:03.279573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.269 qpair failed and we were unable to recover it. 00:29:11.269 [2024-07-25 15:25:03.279823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.269 [2024-07-25 15:25:03.279831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.269 qpair failed and we were unable to recover it. 00:29:11.269 [2024-07-25 15:25:03.280234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.269 [2024-07-25 15:25:03.280241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.269 qpair failed and we were unable to recover it. 
00:29:11.269 [2024-07-25 15:25:03.280688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.269 [2024-07-25 15:25:03.280695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.269 qpair failed and we were unable to recover it. 00:29:11.269 [2024-07-25 15:25:03.280935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.269 [2024-07-25 15:25:03.280942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.269 qpair failed and we were unable to recover it. 00:29:11.269 [2024-07-25 15:25:03.281252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.269 [2024-07-25 15:25:03.281259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.269 qpair failed and we were unable to recover it. 00:29:11.269 [2024-07-25 15:25:03.281713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.269 [2024-07-25 15:25:03.281720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.269 qpair failed and we were unable to recover it. 00:29:11.269 [2024-07-25 15:25:03.282153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.269 [2024-07-25 15:25:03.282160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.269 qpair failed and we were unable to recover it. 
00:29:11.270 [2024-07-25 15:25:03.282521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.270 [2024-07-25 15:25:03.282528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.270 qpair failed and we were unable to recover it. 00:29:11.270 [2024-07-25 15:25:03.282964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.270 [2024-07-25 15:25:03.282971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.270 qpair failed and we were unable to recover it. 00:29:11.270 [2024-07-25 15:25:03.283458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.270 [2024-07-25 15:25:03.283465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.270 qpair failed and we were unable to recover it. 00:29:11.270 [2024-07-25 15:25:03.283898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.270 [2024-07-25 15:25:03.283905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.270 qpair failed and we were unable to recover it. 00:29:11.270 [2024-07-25 15:25:03.284154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.270 [2024-07-25 15:25:03.284160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.270 qpair failed and we were unable to recover it. 
00:29:11.270 [2024-07-25 15:25:03.284642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.270 [2024-07-25 15:25:03.284649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.270 qpair failed and we were unable to recover it. 00:29:11.270 [2024-07-25 15:25:03.285085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.270 [2024-07-25 15:25:03.285093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.270 qpair failed and we were unable to recover it. 00:29:11.270 [2024-07-25 15:25:03.285566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.270 [2024-07-25 15:25:03.285573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.270 qpair failed and we were unable to recover it. 00:29:11.270 [2024-07-25 15:25:03.286027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.270 [2024-07-25 15:25:03.286033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.270 qpair failed and we were unable to recover it. 00:29:11.270 [2024-07-25 15:25:03.286557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.270 [2024-07-25 15:25:03.286585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.270 qpair failed and we were unable to recover it. 
00:29:11.270 [2024-07-25 15:25:03.287070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.270 [2024-07-25 15:25:03.287078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.270 qpair failed and we were unable to recover it. 00:29:11.270 [2024-07-25 15:25:03.287703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.270 [2024-07-25 15:25:03.287731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.270 qpair failed and we were unable to recover it. 00:29:11.270 [2024-07-25 15:25:03.288195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.270 [2024-07-25 15:25:03.288208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.270 qpair failed and we were unable to recover it. 00:29:11.270 [2024-07-25 15:25:03.288748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.270 [2024-07-25 15:25:03.288776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.270 qpair failed and we were unable to recover it. 00:29:11.270 [2024-07-25 15:25:03.289003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.270 [2024-07-25 15:25:03.289014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.270 qpair failed and we were unable to recover it. 
00:29:11.270 [2024-07-25 15:25:03.289586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.270 [2024-07-25 15:25:03.289617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.270 qpair failed and we were unable to recover it. 00:29:11.270 [2024-07-25 15:25:03.290066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.270 [2024-07-25 15:25:03.290075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.270 qpair failed and we were unable to recover it. 00:29:11.270 [2024-07-25 15:25:03.290612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.270 [2024-07-25 15:25:03.290640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.270 qpair failed and we were unable to recover it. 00:29:11.270 [2024-07-25 15:25:03.290895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.270 [2024-07-25 15:25:03.290904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.270 qpair failed and we were unable to recover it. 00:29:11.270 [2024-07-25 15:25:03.291463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.270 [2024-07-25 15:25:03.291491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.270 qpair failed and we were unable to recover it. 
00:29:11.273 [2024-07-25 15:25:03.338360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.273 [2024-07-25 15:25:03.338367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.273 qpair failed and we were unable to recover it. 00:29:11.273 [2024-07-25 15:25:03.338584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.273 [2024-07-25 15:25:03.338594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.273 qpair failed and we were unable to recover it. 00:29:11.273 [2024-07-25 15:25:03.338706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.273 [2024-07-25 15:25:03.338712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.273 qpair failed and we were unable to recover it. 00:29:11.273 [2024-07-25 15:25:03.339166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.273 [2024-07-25 15:25:03.339172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.273 qpair failed and we were unable to recover it. 00:29:11.274 [2024-07-25 15:25:03.339667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.274 [2024-07-25 15:25:03.339673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.274 qpair failed and we were unable to recover it. 
00:29:11.274 [2024-07-25 15:25:03.340118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.274 [2024-07-25 15:25:03.340127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.274 qpair failed and we were unable to recover it. 00:29:11.274 [2024-07-25 15:25:03.340636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.274 [2024-07-25 15:25:03.340643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.274 qpair failed and we were unable to recover it. 00:29:11.274 [2024-07-25 15:25:03.340886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.274 [2024-07-25 15:25:03.340892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.274 qpair failed and we were unable to recover it. 00:29:11.274 [2024-07-25 15:25:03.341377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.274 [2024-07-25 15:25:03.341383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.274 qpair failed and we were unable to recover it. 00:29:11.274 [2024-07-25 15:25:03.341861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.274 [2024-07-25 15:25:03.341868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.274 qpair failed and we were unable to recover it. 
00:29:11.274 [2024-07-25 15:25:03.342308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.274 [2024-07-25 15:25:03.342314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.274 qpair failed and we were unable to recover it. 00:29:11.274 [2024-07-25 15:25:03.342763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.274 [2024-07-25 15:25:03.342770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.274 qpair failed and we were unable to recover it. 00:29:11.274 [2024-07-25 15:25:03.343215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.274 [2024-07-25 15:25:03.343222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.274 qpair failed and we were unable to recover it. 00:29:11.274 [2024-07-25 15:25:03.343655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.274 [2024-07-25 15:25:03.343662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.274 qpair failed and we were unable to recover it. 00:29:11.274 [2024-07-25 15:25:03.344103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.274 [2024-07-25 15:25:03.344109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.274 qpair failed and we were unable to recover it. 
00:29:11.274 [2024-07-25 15:25:03.344568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.274 [2024-07-25 15:25:03.344575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.274 qpair failed and we were unable to recover it. 00:29:11.274 [2024-07-25 15:25:03.345037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.274 [2024-07-25 15:25:03.345045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.274 qpair failed and we were unable to recover it. 00:29:11.274 [2024-07-25 15:25:03.345528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.274 [2024-07-25 15:25:03.345535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.274 qpair failed and we were unable to recover it. 00:29:11.274 [2024-07-25 15:25:03.345988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.274 [2024-07-25 15:25:03.345994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.274 qpair failed and we were unable to recover it. 00:29:11.274 [2024-07-25 15:25:03.346565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.274 [2024-07-25 15:25:03.346593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.274 qpair failed and we were unable to recover it. 
00:29:11.274 [2024-07-25 15:25:03.346820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.274 [2024-07-25 15:25:03.346828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.274 qpair failed and we were unable to recover it. 00:29:11.274 [2024-07-25 15:25:03.347298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.274 [2024-07-25 15:25:03.347305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.274 qpair failed and we were unable to recover it. 00:29:11.274 [2024-07-25 15:25:03.347513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.274 [2024-07-25 15:25:03.347520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.274 qpair failed and we were unable to recover it. 00:29:11.274 [2024-07-25 15:25:03.348017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.274 [2024-07-25 15:25:03.348023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.274 qpair failed and we were unable to recover it. 00:29:11.274 [2024-07-25 15:25:03.348462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.274 [2024-07-25 15:25:03.348469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.274 qpair failed and we were unable to recover it. 
00:29:11.274 [2024-07-25 15:25:03.348984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.274 [2024-07-25 15:25:03.348991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.274 qpair failed and we were unable to recover it. 00:29:11.274 [2024-07-25 15:25:03.349518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.274 [2024-07-25 15:25:03.349546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.274 qpair failed and we were unable to recover it. 00:29:11.274 [2024-07-25 15:25:03.349999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.274 [2024-07-25 15:25:03.350008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.274 qpair failed and we were unable to recover it. 00:29:11.274 [2024-07-25 15:25:03.350373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.274 [2024-07-25 15:25:03.350401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.274 qpair failed and we were unable to recover it. 00:29:11.274 [2024-07-25 15:25:03.350720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.274 [2024-07-25 15:25:03.350729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.274 qpair failed and we were unable to recover it. 
00:29:11.274 [2024-07-25 15:25:03.351049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.274 [2024-07-25 15:25:03.351056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.274 qpair failed and we were unable to recover it. 00:29:11.274 [2024-07-25 15:25:03.351277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.274 [2024-07-25 15:25:03.351285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.274 qpair failed and we were unable to recover it. 00:29:11.274 [2024-07-25 15:25:03.351677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.274 [2024-07-25 15:25:03.351685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.274 qpair failed and we were unable to recover it. 00:29:11.274 [2024-07-25 15:25:03.351995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.274 [2024-07-25 15:25:03.352002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.274 qpair failed and we were unable to recover it. 00:29:11.274 [2024-07-25 15:25:03.352451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.274 [2024-07-25 15:25:03.352458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.274 qpair failed and we were unable to recover it. 
00:29:11.274 [2024-07-25 15:25:03.352892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.274 [2024-07-25 15:25:03.352898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.274 qpair failed and we were unable to recover it. 00:29:11.274 [2024-07-25 15:25:03.353334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.274 [2024-07-25 15:25:03.353341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.274 qpair failed and we were unable to recover it. 00:29:11.274 [2024-07-25 15:25:03.353687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.274 [2024-07-25 15:25:03.353694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.274 qpair failed and we were unable to recover it. 00:29:11.274 [2024-07-25 15:25:03.354157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.274 [2024-07-25 15:25:03.354163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.274 qpair failed and we were unable to recover it. 00:29:11.274 [2024-07-25 15:25:03.354612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.275 [2024-07-25 15:25:03.354619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.275 qpair failed and we were unable to recover it. 
00:29:11.275 [2024-07-25 15:25:03.355097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.275 [2024-07-25 15:25:03.355103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.275 qpair failed and we were unable to recover it. 00:29:11.275 [2024-07-25 15:25:03.355564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.275 [2024-07-25 15:25:03.355570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.275 qpair failed and we were unable to recover it. 00:29:11.275 [2024-07-25 15:25:03.355928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.275 [2024-07-25 15:25:03.355935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.275 qpair failed and we were unable to recover it. 00:29:11.275 [2024-07-25 15:25:03.356480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.275 [2024-07-25 15:25:03.356507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.275 qpair failed and we were unable to recover it. 00:29:11.275 [2024-07-25 15:25:03.356963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.275 [2024-07-25 15:25:03.356971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.275 qpair failed and we were unable to recover it. 
00:29:11.275 [2024-07-25 15:25:03.357507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.275 [2024-07-25 15:25:03.357537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.275 qpair failed and we were unable to recover it. 00:29:11.275 [2024-07-25 15:25:03.357988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.275 [2024-07-25 15:25:03.357996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.275 qpair failed and we were unable to recover it. 00:29:11.275 [2024-07-25 15:25:03.358475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.275 [2024-07-25 15:25:03.358503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.275 qpair failed and we were unable to recover it. 00:29:11.275 [2024-07-25 15:25:03.358962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.275 [2024-07-25 15:25:03.358970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.275 qpair failed and we were unable to recover it. 00:29:11.275 [2024-07-25 15:25:03.359384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.275 [2024-07-25 15:25:03.359411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.275 qpair failed and we were unable to recover it. 
00:29:11.275 [2024-07-25 15:25:03.359766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.275 [2024-07-25 15:25:03.359774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.275 qpair failed and we were unable to recover it. 00:29:11.275 [2024-07-25 15:25:03.360243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.275 [2024-07-25 15:25:03.360250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.275 qpair failed and we were unable to recover it. 00:29:11.275 [2024-07-25 15:25:03.360731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.275 [2024-07-25 15:25:03.360737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.275 qpair failed and we were unable to recover it. 00:29:11.275 [2024-07-25 15:25:03.361214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.275 [2024-07-25 15:25:03.361220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.275 qpair failed and we were unable to recover it. 00:29:11.275 [2024-07-25 15:25:03.361647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.275 [2024-07-25 15:25:03.361654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.275 qpair failed and we were unable to recover it. 
00:29:11.275 [2024-07-25 15:25:03.362093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.275 [2024-07-25 15:25:03.362100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.275 qpair failed and we were unable to recover it. 00:29:11.275 [2024-07-25 15:25:03.362466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.275 [2024-07-25 15:25:03.362473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.275 qpair failed and we were unable to recover it. 00:29:11.275 [2024-07-25 15:25:03.362921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.275 [2024-07-25 15:25:03.362927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.275 qpair failed and we were unable to recover it. 00:29:11.275 [2024-07-25 15:25:03.363363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.275 [2024-07-25 15:25:03.363370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.275 qpair failed and we were unable to recover it. 00:29:11.275 [2024-07-25 15:25:03.363733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.275 [2024-07-25 15:25:03.363740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.275 qpair failed and we were unable to recover it. 
00:29:11.275 [2024-07-25 15:25:03.364084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.275 [2024-07-25 15:25:03.364090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.275 qpair failed and we were unable to recover it. 00:29:11.275 [2024-07-25 15:25:03.364492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.275 [2024-07-25 15:25:03.364498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.275 qpair failed and we were unable to recover it. 00:29:11.275 [2024-07-25 15:25:03.364979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.275 [2024-07-25 15:25:03.364985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.275 qpair failed and we were unable to recover it. 00:29:11.275 [2024-07-25 15:25:03.365299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.275 [2024-07-25 15:25:03.365305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.275 qpair failed and we were unable to recover it. 00:29:11.275 [2024-07-25 15:25:03.365747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.275 [2024-07-25 15:25:03.365754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.275 qpair failed and we were unable to recover it. 
00:29:11.275 [2024-07-25 15:25:03.366097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.275 [2024-07-25 15:25:03.366104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.275 qpair failed and we were unable to recover it. 00:29:11.275 [2024-07-25 15:25:03.366557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.275 [2024-07-25 15:25:03.366564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.275 qpair failed and we were unable to recover it. 00:29:11.275 [2024-07-25 15:25:03.367043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.275 [2024-07-25 15:25:03.367050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.275 qpair failed and we were unable to recover it. 00:29:11.275 [2024-07-25 15:25:03.367278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.275 [2024-07-25 15:25:03.367285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.275 qpair failed and we were unable to recover it. 00:29:11.275 [2024-07-25 15:25:03.367738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.275 [2024-07-25 15:25:03.367744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.275 qpair failed and we were unable to recover it. 
00:29:11.275 [2024-07-25 15:25:03.368179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.275 [2024-07-25 15:25:03.368185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.275 qpair failed and we were unable to recover it. 00:29:11.275 [2024-07-25 15:25:03.368621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.275 [2024-07-25 15:25:03.368627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.275 qpair failed and we were unable to recover it. 00:29:11.275 [2024-07-25 15:25:03.369103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.275 [2024-07-25 15:25:03.369110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.275 qpair failed and we were unable to recover it. 00:29:11.275 [2024-07-25 15:25:03.369362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.275 [2024-07-25 15:25:03.369369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.276 qpair failed and we were unable to recover it. 00:29:11.276 [2024-07-25 15:25:03.369890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.276 [2024-07-25 15:25:03.369896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.276 qpair failed and we were unable to recover it. 
00:29:11.276 [2024-07-25 15:25:03.370328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.276 [2024-07-25 15:25:03.370335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.276 qpair failed and we were unable to recover it.
[... identical connect() errno = 111 / nvme_tcp_qpair_connect_sock / "qpair failed and we were unable to recover it." records repeat for tqpair=0x7fdc90000b90 (10.0.0.2:4420) through 15:25:03.417706 ...]
00:29:11.279 [2024-07-25 15:25:03.417921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.279 [2024-07-25 15:25:03.417928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.279 qpair failed and we were unable to recover it. 00:29:11.279 [2024-07-25 15:25:03.418397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.279 [2024-07-25 15:25:03.418404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.279 qpair failed and we were unable to recover it. 00:29:11.279 [2024-07-25 15:25:03.418840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.279 [2024-07-25 15:25:03.418849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.279 qpair failed and we were unable to recover it. 00:29:11.279 [2024-07-25 15:25:03.419070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.279 [2024-07-25 15:25:03.419077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.279 qpair failed and we were unable to recover it. 00:29:11.279 [2024-07-25 15:25:03.419301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.279 [2024-07-25 15:25:03.419308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.279 qpair failed and we were unable to recover it. 
00:29:11.279 [2024-07-25 15:25:03.419774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.279 [2024-07-25 15:25:03.419781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.279 qpair failed and we were unable to recover it. 00:29:11.279 [2024-07-25 15:25:03.420091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.279 [2024-07-25 15:25:03.420097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.279 qpair failed and we were unable to recover it. 00:29:11.279 [2024-07-25 15:25:03.420323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.279 [2024-07-25 15:25:03.420330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.279 qpair failed and we were unable to recover it. 00:29:11.279 [2024-07-25 15:25:03.420837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.279 [2024-07-25 15:25:03.420844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.279 qpair failed and we were unable to recover it. 00:29:11.279 [2024-07-25 15:25:03.421044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.279 [2024-07-25 15:25:03.421050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.279 qpair failed and we were unable to recover it. 
00:29:11.279 [2024-07-25 15:25:03.421507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.279 [2024-07-25 15:25:03.421514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.279 qpair failed and we were unable to recover it. 00:29:11.279 [2024-07-25 15:25:03.421948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.279 [2024-07-25 15:25:03.421955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.279 qpair failed and we were unable to recover it. 00:29:11.279 [2024-07-25 15:25:03.422432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.279 [2024-07-25 15:25:03.422439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.279 qpair failed and we were unable to recover it. 00:29:11.279 [2024-07-25 15:25:03.422873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.279 [2024-07-25 15:25:03.422879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.279 qpair failed and we were unable to recover it. 00:29:11.279 [2024-07-25 15:25:03.423442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.279 [2024-07-25 15:25:03.423469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.279 qpair failed and we were unable to recover it. 
00:29:11.279 [2024-07-25 15:25:03.423949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.279 [2024-07-25 15:25:03.423958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.279 qpair failed and we were unable to recover it. 00:29:11.279 [2024-07-25 15:25:03.424491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.279 [2024-07-25 15:25:03.424519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.279 qpair failed and we were unable to recover it. 00:29:11.279 [2024-07-25 15:25:03.424972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.280 [2024-07-25 15:25:03.424980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.280 qpair failed and we were unable to recover it. 00:29:11.280 [2024-07-25 15:25:03.425513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.280 [2024-07-25 15:25:03.425540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.280 qpair failed and we were unable to recover it. 00:29:11.280 [2024-07-25 15:25:03.426069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.280 [2024-07-25 15:25:03.426077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.280 qpair failed and we were unable to recover it. 
00:29:11.280 [2024-07-25 15:25:03.426485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.280 [2024-07-25 15:25:03.426512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.280 qpair failed and we were unable to recover it. 00:29:11.280 [2024-07-25 15:25:03.426968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.280 [2024-07-25 15:25:03.426976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.280 qpair failed and we were unable to recover it. 00:29:11.280 [2024-07-25 15:25:03.427203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.280 [2024-07-25 15:25:03.427211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.280 qpair failed and we were unable to recover it. 00:29:11.280 [2024-07-25 15:25:03.427756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.280 [2024-07-25 15:25:03.427784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.280 qpair failed and we were unable to recover it. 00:29:11.280 [2024-07-25 15:25:03.428400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.280 [2024-07-25 15:25:03.428427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.280 qpair failed and we were unable to recover it. 
00:29:11.280 [2024-07-25 15:25:03.428795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.280 [2024-07-25 15:25:03.428803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.280 qpair failed and we were unable to recover it. 00:29:11.280 [2024-07-25 15:25:03.429016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.280 [2024-07-25 15:25:03.429026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.280 qpair failed and we were unable to recover it. 00:29:11.280 [2024-07-25 15:25:03.429476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.280 [2024-07-25 15:25:03.429483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.280 qpair failed and we were unable to recover it. 00:29:11.280 [2024-07-25 15:25:03.429964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.280 [2024-07-25 15:25:03.429971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.280 qpair failed and we were unable to recover it. 00:29:11.280 [2024-07-25 15:25:03.430540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.280 [2024-07-25 15:25:03.430567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.280 qpair failed and we were unable to recover it. 
00:29:11.280 [2024-07-25 15:25:03.430822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.280 [2024-07-25 15:25:03.430831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.280 qpair failed and we were unable to recover it. 00:29:11.280 [2024-07-25 15:25:03.431323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.280 [2024-07-25 15:25:03.431330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.280 qpair failed and we were unable to recover it. 00:29:11.280 [2024-07-25 15:25:03.431570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.280 [2024-07-25 15:25:03.431581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.280 qpair failed and we were unable to recover it. 00:29:11.280 [2024-07-25 15:25:03.432046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.280 [2024-07-25 15:25:03.432053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.280 qpair failed and we were unable to recover it. 00:29:11.280 [2024-07-25 15:25:03.432414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.280 [2024-07-25 15:25:03.432421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.280 qpair failed and we were unable to recover it. 
00:29:11.280 [2024-07-25 15:25:03.432880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.280 [2024-07-25 15:25:03.432886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.280 qpair failed and we were unable to recover it. 00:29:11.280 [2024-07-25 15:25:03.432995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.280 [2024-07-25 15:25:03.433002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.280 qpair failed and we were unable to recover it. 00:29:11.280 [2024-07-25 15:25:03.433488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.280 [2024-07-25 15:25:03.433495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.280 qpair failed and we were unable to recover it. 00:29:11.280 [2024-07-25 15:25:03.433931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.280 [2024-07-25 15:25:03.433938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.280 qpair failed and we were unable to recover it. 00:29:11.280 [2024-07-25 15:25:03.434448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.280 [2024-07-25 15:25:03.434454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.280 qpair failed and we were unable to recover it. 
00:29:11.280 [2024-07-25 15:25:03.434688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.280 [2024-07-25 15:25:03.434694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.280 qpair failed and we were unable to recover it. 00:29:11.280 [2024-07-25 15:25:03.435139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.280 [2024-07-25 15:25:03.435145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.280 qpair failed and we were unable to recover it. 00:29:11.280 [2024-07-25 15:25:03.435603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.280 [2024-07-25 15:25:03.435614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.280 qpair failed and we were unable to recover it. 00:29:11.280 [2024-07-25 15:25:03.436134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.280 [2024-07-25 15:25:03.436141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.280 qpair failed and we were unable to recover it. 00:29:11.280 [2024-07-25 15:25:03.436584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.280 [2024-07-25 15:25:03.436591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.280 qpair failed and we were unable to recover it. 
00:29:11.280 [2024-07-25 15:25:03.437026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.280 [2024-07-25 15:25:03.437032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.280 qpair failed and we were unable to recover it. 00:29:11.280 [2024-07-25 15:25:03.437407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.280 [2024-07-25 15:25:03.437434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.280 qpair failed and we were unable to recover it. 00:29:11.280 [2024-07-25 15:25:03.437891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.280 [2024-07-25 15:25:03.437899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.280 qpair failed and we were unable to recover it. 00:29:11.280 [2024-07-25 15:25:03.438424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.280 [2024-07-25 15:25:03.438452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.280 qpair failed and we were unable to recover it. 00:29:11.280 [2024-07-25 15:25:03.438707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.280 [2024-07-25 15:25:03.438719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.280 qpair failed and we were unable to recover it. 
00:29:11.280 [2024-07-25 15:25:03.438949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.280 [2024-07-25 15:25:03.438959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.280 qpair failed and we were unable to recover it. 00:29:11.280 [2024-07-25 15:25:03.439286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.280 [2024-07-25 15:25:03.439302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.280 qpair failed and we were unable to recover it. 00:29:11.280 [2024-07-25 15:25:03.439761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.281 [2024-07-25 15:25:03.439768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.281 qpair failed and we were unable to recover it. 00:29:11.281 [2024-07-25 15:25:03.440213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.281 [2024-07-25 15:25:03.440220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.281 qpair failed and we were unable to recover it. 00:29:11.281 [2024-07-25 15:25:03.440422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.281 [2024-07-25 15:25:03.440430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.281 qpair failed and we were unable to recover it. 
00:29:11.281 [2024-07-25 15:25:03.440903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.281 [2024-07-25 15:25:03.440909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.281 qpair failed and we were unable to recover it. 00:29:11.281 [2024-07-25 15:25:03.441349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.281 [2024-07-25 15:25:03.441357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.281 qpair failed and we were unable to recover it. 00:29:11.281 [2024-07-25 15:25:03.441558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.281 [2024-07-25 15:25:03.441565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.281 qpair failed and we were unable to recover it. 00:29:11.281 [2024-07-25 15:25:03.442063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.281 [2024-07-25 15:25:03.442069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.281 qpair failed and we were unable to recover it. 00:29:11.550 [2024-07-25 15:25:03.442558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.550 [2024-07-25 15:25:03.442567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.550 qpair failed and we were unable to recover it. 
00:29:11.550 [2024-07-25 15:25:03.443091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.550 [2024-07-25 15:25:03.443098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.550 qpair failed and we were unable to recover it. 00:29:11.550 [2024-07-25 15:25:03.443531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.550 [2024-07-25 15:25:03.443538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.550 qpair failed and we were unable to recover it. 00:29:11.550 [2024-07-25 15:25:03.443971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.550 [2024-07-25 15:25:03.443977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.550 qpair failed and we were unable to recover it. 00:29:11.550 [2024-07-25 15:25:03.444423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.550 [2024-07-25 15:25:03.444430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.550 qpair failed and we were unable to recover it. 00:29:11.550 [2024-07-25 15:25:03.444742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.550 [2024-07-25 15:25:03.444748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.550 qpair failed and we were unable to recover it. 
00:29:11.550 [2024-07-25 15:25:03.445204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.550 [2024-07-25 15:25:03.445211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.550 qpair failed and we were unable to recover it. 00:29:11.550 [2024-07-25 15:25:03.445454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.550 [2024-07-25 15:25:03.445460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.550 qpair failed and we were unable to recover it. 00:29:11.550 [2024-07-25 15:25:03.445913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.550 [2024-07-25 15:25:03.445919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.550 qpair failed and we were unable to recover it. 00:29:11.550 [2024-07-25 15:25:03.446481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.550 [2024-07-25 15:25:03.446509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.551 qpair failed and we were unable to recover it. 00:29:11.551 [2024-07-25 15:25:03.446961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.551 [2024-07-25 15:25:03.446969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.551 qpair failed and we were unable to recover it. 
00:29:11.551 [2024-07-25 15:25:03.447516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.551 [2024-07-25 15:25:03.447544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.551 qpair failed and we were unable to recover it. 00:29:11.551 [2024-07-25 15:25:03.447995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.551 [2024-07-25 15:25:03.448004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.551 qpair failed and we were unable to recover it. 00:29:11.551 [2024-07-25 15:25:03.448584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.551 [2024-07-25 15:25:03.448611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.551 qpair failed and we were unable to recover it. 00:29:11.551 [2024-07-25 15:25:03.448865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.551 [2024-07-25 15:25:03.448873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.551 qpair failed and we were unable to recover it. 00:29:11.551 [2024-07-25 15:25:03.449219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.551 [2024-07-25 15:25:03.449226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.551 qpair failed and we were unable to recover it. 
00:29:11.551 [2024-07-25 15:25:03.449648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.551 [2024-07-25 15:25:03.449655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.551 qpair failed and we were unable to recover it. 00:29:11.551 [2024-07-25 15:25:03.449874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.551 [2024-07-25 15:25:03.449881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.551 qpair failed and we were unable to recover it. 00:29:11.551 [2024-07-25 15:25:03.450149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.551 [2024-07-25 15:25:03.450156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.551 qpair failed and we were unable to recover it. 00:29:11.551 [2024-07-25 15:25:03.450474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.551 [2024-07-25 15:25:03.450480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.551 qpair failed and we were unable to recover it. 00:29:11.551 [2024-07-25 15:25:03.450722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.551 [2024-07-25 15:25:03.450729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.551 qpair failed and we were unable to recover it. 
00:29:11.551 [2024-07-25 15:25:03.451210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.551 [2024-07-25 15:25:03.451216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.551 qpair failed and we were unable to recover it. 00:29:11.551 [2024-07-25 15:25:03.451670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.551 [2024-07-25 15:25:03.451676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.551 qpair failed and we were unable to recover it. 00:29:11.551 [2024-07-25 15:25:03.452111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.551 [2024-07-25 15:25:03.452121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.551 qpair failed and we were unable to recover it. 00:29:11.551 [2024-07-25 15:25:03.452577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.551 [2024-07-25 15:25:03.452584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.551 qpair failed and we were unable to recover it. 00:29:11.551 [2024-07-25 15:25:03.452933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.551 [2024-07-25 15:25:03.452940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.551 qpair failed and we were unable to recover it. 
00:29:11.551 [2024-07-25 15:25:03.453386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.551 [2024-07-25 15:25:03.453393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.551 qpair failed and we were unable to recover it. 00:29:11.551 [2024-07-25 15:25:03.453833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.551 [2024-07-25 15:25:03.453840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.551 qpair failed and we were unable to recover it. 00:29:11.551 [2024-07-25 15:25:03.454274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.551 [2024-07-25 15:25:03.454280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.551 qpair failed and we were unable to recover it. 00:29:11.551 [2024-07-25 15:25:03.454662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.551 [2024-07-25 15:25:03.454669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.551 qpair failed and we were unable to recover it. 00:29:11.551 [2024-07-25 15:25:03.455120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.551 [2024-07-25 15:25:03.455127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.551 qpair failed and we were unable to recover it. 
00:29:11.551 [2024-07-25 15:25:03.455490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.551 [2024-07-25 15:25:03.455497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.551 qpair failed and we were unable to recover it. 00:29:11.551 [2024-07-25 15:25:03.456008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.551 [2024-07-25 15:25:03.456014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.551 qpair failed and we were unable to recover it. 00:29:11.551 [2024-07-25 15:25:03.456492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.551 [2024-07-25 15:25:03.456499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.551 qpair failed and we were unable to recover it. 00:29:11.551 [2024-07-25 15:25:03.456934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.551 [2024-07-25 15:25:03.456940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.551 qpair failed and we were unable to recover it. 00:29:11.551 [2024-07-25 15:25:03.457473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.551 [2024-07-25 15:25:03.457500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.551 qpair failed and we were unable to recover it. 
00:29:11.551 [2024-07-25 15:25:03.457714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.551 [2024-07-25 15:25:03.457722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.551 qpair failed and we were unable to recover it. 00:29:11.551 [2024-07-25 15:25:03.458146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.551 [2024-07-25 15:25:03.458154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.551 qpair failed and we were unable to recover it. 00:29:11.551 [2024-07-25 15:25:03.458583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.551 [2024-07-25 15:25:03.458590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.551 qpair failed and we were unable to recover it. 00:29:11.551 [2024-07-25 15:25:03.459022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.551 [2024-07-25 15:25:03.459029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.551 qpair failed and we were unable to recover it. 00:29:11.551 [2024-07-25 15:25:03.459555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.551 [2024-07-25 15:25:03.459582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.551 qpair failed and we were unable to recover it. 
00:29:11.551 [2024-07-25 15:25:03.460113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.551 [2024-07-25 15:25:03.460122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.551 qpair failed and we were unable to recover it. 00:29:11.551 [2024-07-25 15:25:03.460478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.551 [2024-07-25 15:25:03.460495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.551 qpair failed and we were unable to recover it. 00:29:11.551 [2024-07-25 15:25:03.460959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.551 [2024-07-25 15:25:03.460965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.551 qpair failed and we were unable to recover it. 00:29:11.551 [2024-07-25 15:25:03.461522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.552 [2024-07-25 15:25:03.461550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.552 qpair failed and we were unable to recover it. 00:29:11.552 [2024-07-25 15:25:03.461972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.552 [2024-07-25 15:25:03.461981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.552 qpair failed and we were unable to recover it. 
00:29:11.552 [2024-07-25 15:25:03.462581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.552 [2024-07-25 15:25:03.462608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.552 qpair failed and we were unable to recover it. 00:29:11.552 [2024-07-25 15:25:03.463066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.552 [2024-07-25 15:25:03.463075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.552 qpair failed and we were unable to recover it. 00:29:11.552 [2024-07-25 15:25:03.463532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.552 [2024-07-25 15:25:03.463560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.552 qpair failed and we were unable to recover it. 00:29:11.552 [2024-07-25 15:25:03.463924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.552 [2024-07-25 15:25:03.463933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.552 qpair failed and we were unable to recover it. 00:29:11.552 [2024-07-25 15:25:03.464531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.552 [2024-07-25 15:25:03.464559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.552 qpair failed and we were unable to recover it. 
00:29:11.552 [2024-07-25 15:25:03.464911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.552 [2024-07-25 15:25:03.464919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.552 qpair failed and we were unable to recover it. 00:29:11.552 [2024-07-25 15:25:03.465480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.552 [2024-07-25 15:25:03.465508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.552 qpair failed and we were unable to recover it. 00:29:11.552 [2024-07-25 15:25:03.465983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.552 [2024-07-25 15:25:03.465991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.552 qpair failed and we were unable to recover it. 00:29:11.552 [2024-07-25 15:25:03.466599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.552 [2024-07-25 15:25:03.466626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.552 qpair failed and we were unable to recover it. 00:29:11.552 [2024-07-25 15:25:03.467080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.552 [2024-07-25 15:25:03.467089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.552 qpair failed and we were unable to recover it. 
00:29:11.552 [2024-07-25 15:25:03.467554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.552 [2024-07-25 15:25:03.467562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.552 qpair failed and we were unable to recover it. 00:29:11.552 [2024-07-25 15:25:03.467872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.552 [2024-07-25 15:25:03.467878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.552 qpair failed and we were unable to recover it. 00:29:11.552 [2024-07-25 15:25:03.468229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.552 [2024-07-25 15:25:03.468236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.552 qpair failed and we were unable to recover it. 00:29:11.552 [2024-07-25 15:25:03.468325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.552 [2024-07-25 15:25:03.468332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.552 qpair failed and we were unable to recover it. 00:29:11.552 [2024-07-25 15:25:03.468770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.552 [2024-07-25 15:25:03.468776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.552 qpair failed and we were unable to recover it. 
00:29:11.552 [2024-07-25 15:25:03.469210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.552 [2024-07-25 15:25:03.469218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.552 qpair failed and we were unable to recover it. 00:29:11.552 [2024-07-25 15:25:03.469647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.552 [2024-07-25 15:25:03.469654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.552 qpair failed and we were unable to recover it. 00:29:11.552 [2024-07-25 15:25:03.470130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.552 [2024-07-25 15:25:03.470140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.552 qpair failed and we were unable to recover it. 00:29:11.552 [2024-07-25 15:25:03.470282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.552 [2024-07-25 15:25:03.470289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.552 qpair failed and we were unable to recover it. 00:29:11.552 [2024-07-25 15:25:03.470808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.552 [2024-07-25 15:25:03.470815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.552 qpair failed and we were unable to recover it. 
00:29:11.552 [2024-07-25 15:25:03.471249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.552 [2024-07-25 15:25:03.471255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.552 qpair failed and we were unable to recover it. 00:29:11.552 [2024-07-25 15:25:03.471697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.552 [2024-07-25 15:25:03.471703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.552 qpair failed and we were unable to recover it. 00:29:11.552 [2024-07-25 15:25:03.471923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.552 [2024-07-25 15:25:03.471929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.552 qpair failed and we were unable to recover it. 00:29:11.552 [2024-07-25 15:25:03.472129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.552 [2024-07-25 15:25:03.472137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.552 qpair failed and we were unable to recover it. 00:29:11.552 [2024-07-25 15:25:03.472580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.552 [2024-07-25 15:25:03.472587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.552 qpair failed and we were unable to recover it. 
00:29:11.552 [2024-07-25 15:25:03.473028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.552 [2024-07-25 15:25:03.473034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.552 qpair failed and we were unable to recover it. 00:29:11.552 [2024-07-25 15:25:03.473381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.552 [2024-07-25 15:25:03.473387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.552 qpair failed and we were unable to recover it. 00:29:11.552 [2024-07-25 15:25:03.473702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.552 [2024-07-25 15:25:03.473708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.552 qpair failed and we were unable to recover it. 00:29:11.552 [2024-07-25 15:25:03.474186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.552 [2024-07-25 15:25:03.474192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.552 qpair failed and we were unable to recover it. 00:29:11.552 [2024-07-25 15:25:03.474726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.552 [2024-07-25 15:25:03.474732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.552 qpair failed and we were unable to recover it. 
00:29:11.552 [2024-07-25 15:25:03.475052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.552 [2024-07-25 15:25:03.475058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.552 qpair failed and we were unable to recover it. 00:29:11.552 [2024-07-25 15:25:03.475637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.552 [2024-07-25 15:25:03.475664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.552 qpair failed and we were unable to recover it. 00:29:11.552 [2024-07-25 15:25:03.476179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.552 [2024-07-25 15:25:03.476187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.552 qpair failed and we were unable to recover it. 00:29:11.552 [2024-07-25 15:25:03.476608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.552 [2024-07-25 15:25:03.476635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.552 qpair failed and we were unable to recover it. 00:29:11.553 [2024-07-25 15:25:03.477090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.553 [2024-07-25 15:25:03.477099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.553 qpair failed and we were unable to recover it. 
00:29:11.553 [2024-07-25 15:25:03.477343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.553 [2024-07-25 15:25:03.477351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.553 qpair failed and we were unable to recover it. 00:29:11.553 [2024-07-25 15:25:03.477695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.553 [2024-07-25 15:25:03.477703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.553 qpair failed and we were unable to recover it. 00:29:11.553 [2024-07-25 15:25:03.478139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.553 [2024-07-25 15:25:03.478146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.553 qpair failed and we were unable to recover it. 00:29:11.553 [2024-07-25 15:25:03.478659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.553 [2024-07-25 15:25:03.478666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.553 qpair failed and we were unable to recover it. 00:29:11.553 [2024-07-25 15:25:03.479124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.553 [2024-07-25 15:25:03.479130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.553 qpair failed and we were unable to recover it. 
00:29:11.553 [2024-07-25 15:25:03.479648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.553 [2024-07-25 15:25:03.479655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.553 qpair failed and we were unable to recover it. 00:29:11.553 [2024-07-25 15:25:03.480141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.553 [2024-07-25 15:25:03.480147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.553 qpair failed and we were unable to recover it. 00:29:11.553 [2024-07-25 15:25:03.480484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.553 [2024-07-25 15:25:03.480512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.553 qpair failed and we were unable to recover it. 00:29:11.553 [2024-07-25 15:25:03.480844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.553 [2024-07-25 15:25:03.480853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.553 qpair failed and we were unable to recover it. 00:29:11.553 [2024-07-25 15:25:03.481334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.553 [2024-07-25 15:25:03.481342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.553 qpair failed and we were unable to recover it. 
00:29:11.553 [2024-07-25 15:25:03.481839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.553 [2024-07-25 15:25:03.481846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.553 qpair failed and we were unable to recover it. 00:29:11.553 [2024-07-25 15:25:03.482282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.553 [2024-07-25 15:25:03.482288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.553 qpair failed and we were unable to recover it. 00:29:11.553 [2024-07-25 15:25:03.482602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.553 [2024-07-25 15:25:03.482609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.553 qpair failed and we were unable to recover it. 00:29:11.553 [2024-07-25 15:25:03.483050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.553 [2024-07-25 15:25:03.483057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.553 qpair failed and we were unable to recover it. 00:29:11.553 [2024-07-25 15:25:03.483495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.553 [2024-07-25 15:25:03.483502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.553 qpair failed and we were unable to recover it. 
00:29:11.553 [2024-07-25 15:25:03.483821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.553 [2024-07-25 15:25:03.483828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.553 qpair failed and we were unable to recover it. 00:29:11.553 [2024-07-25 15:25:03.484280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.553 [2024-07-25 15:25:03.484287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.553 qpair failed and we were unable to recover it. 00:29:11.553 [2024-07-25 15:25:03.484776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.553 [2024-07-25 15:25:03.484783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.553 qpair failed and we were unable to recover it. 00:29:11.553 [2024-07-25 15:25:03.485219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.553 [2024-07-25 15:25:03.485226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.553 qpair failed and we were unable to recover it. 00:29:11.553 [2024-07-25 15:25:03.485731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.553 [2024-07-25 15:25:03.485737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.553 qpair failed and we were unable to recover it. 
00:29:11.553 [2024-07-25 15:25:03.485946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.553 [2024-07-25 15:25:03.485955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.553 qpair failed and we were unable to recover it. 00:29:11.553 [2024-07-25 15:25:03.486420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.553 [2024-07-25 15:25:03.486428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.553 qpair failed and we were unable to recover it. 00:29:11.553 [2024-07-25 15:25:03.486867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.553 [2024-07-25 15:25:03.486876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.553 qpair failed and we were unable to recover it. 00:29:11.553 [2024-07-25 15:25:03.487101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.553 [2024-07-25 15:25:03.487107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.553 qpair failed and we were unable to recover it. 00:29:11.553 [2024-07-25 15:25:03.487586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.553 [2024-07-25 15:25:03.487592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.553 qpair failed and we were unable to recover it. 
00:29:11.553 [2024-07-25 15:25:03.487812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.553 [2024-07-25 15:25:03.487819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.553 qpair failed and we were unable to recover it. 00:29:11.553 [2024-07-25 15:25:03.488273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.553 [2024-07-25 15:25:03.488280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.553 qpair failed and we were unable to recover it. 00:29:11.553 [2024-07-25 15:25:03.488525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.553 [2024-07-25 15:25:03.488532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.553 qpair failed and we were unable to recover it. 00:29:11.553 [2024-07-25 15:25:03.488989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.553 [2024-07-25 15:25:03.488996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.553 qpair failed and we were unable to recover it. 00:29:11.553 [2024-07-25 15:25:03.489450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.553 [2024-07-25 15:25:03.489457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.553 qpair failed and we were unable to recover it. 
00:29:11.553 [2024-07-25 15:25:03.489891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.553 [2024-07-25 15:25:03.489898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.553 qpair failed and we were unable to recover it. 00:29:11.553 [2024-07-25 15:25:03.490212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.553 [2024-07-25 15:25:03.490220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.553 qpair failed and we were unable to recover it. 00:29:11.554 [2024-07-25 15:25:03.490542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.554 [2024-07-25 15:25:03.490549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.554 qpair failed and we were unable to recover it. 00:29:11.554 [2024-07-25 15:25:03.490984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.554 [2024-07-25 15:25:03.490991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.554 qpair failed and we were unable to recover it. 00:29:11.554 [2024-07-25 15:25:03.491515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.554 [2024-07-25 15:25:03.491543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.554 qpair failed and we were unable to recover it. 
00:29:11.554 [2024-07-25 15:25:03.491796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.554 [2024-07-25 15:25:03.491805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.554 qpair failed and we were unable to recover it. 00:29:11.554 [2024-07-25 15:25:03.492269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.554 [2024-07-25 15:25:03.492277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.554 qpair failed and we were unable to recover it. 00:29:11.554 [2024-07-25 15:25:03.492733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.554 [2024-07-25 15:25:03.492740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.554 qpair failed and we were unable to recover it. 00:29:11.554 [2024-07-25 15:25:03.493181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.554 [2024-07-25 15:25:03.493188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.554 qpair failed and we were unable to recover it. 00:29:11.554 [2024-07-25 15:25:03.493579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.554 [2024-07-25 15:25:03.493587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.554 qpair failed and we were unable to recover it. 
00:29:11.554 [2024-07-25 15:25:03.494021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.554 [2024-07-25 15:25:03.494029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.554 qpair failed and we were unable to recover it. 00:29:11.554 [2024-07-25 15:25:03.494599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.554 [2024-07-25 15:25:03.494626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.554 qpair failed and we were unable to recover it. 00:29:11.554 [2024-07-25 15:25:03.495113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.554 [2024-07-25 15:25:03.495122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.554 qpair failed and we were unable to recover it. 00:29:11.554 [2024-07-25 15:25:03.495579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.554 [2024-07-25 15:25:03.495587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.554 qpair failed and we were unable to recover it. 00:29:11.554 [2024-07-25 15:25:03.496062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.554 [2024-07-25 15:25:03.496069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.554 qpair failed and we were unable to recover it. 
00:29:11.554 [2024-07-25 15:25:03.496641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.554 [2024-07-25 15:25:03.496669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.554 qpair failed and we were unable to recover it. 00:29:11.554 [2024-07-25 15:25:03.497155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.554 [2024-07-25 15:25:03.497164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.554 qpair failed and we were unable to recover it. 00:29:11.554 [2024-07-25 15:25:03.497718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.554 [2024-07-25 15:25:03.497746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.554 qpair failed and we were unable to recover it. 00:29:11.554 [2024-07-25 15:25:03.498214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.554 [2024-07-25 15:25:03.498223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.554 qpair failed and we were unable to recover it. 00:29:11.554 [2024-07-25 15:25:03.498752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.554 [2024-07-25 15:25:03.498779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.554 qpair failed and we were unable to recover it. 
00:29:11.554 [2024-07-25 15:25:03.499425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.554 [2024-07-25 15:25:03.499453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.554 qpair failed and we were unable to recover it. 00:29:11.554 [2024-07-25 15:25:03.499907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.554 [2024-07-25 15:25:03.499916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.554 qpair failed and we were unable to recover it. 00:29:11.554 [2024-07-25 15:25:03.500450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.554 [2024-07-25 15:25:03.500477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.554 qpair failed and we were unable to recover it. 00:29:11.554 [2024-07-25 15:25:03.500965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.554 [2024-07-25 15:25:03.500974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.554 qpair failed and we were unable to recover it. 00:29:11.554 [2024-07-25 15:25:03.501463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.554 [2024-07-25 15:25:03.501491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.554 qpair failed and we were unable to recover it. 
00:29:11.554 [2024-07-25 15:25:03.501945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.554 [2024-07-25 15:25:03.501954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.554 qpair failed and we were unable to recover it. 00:29:11.554 [2024-07-25 15:25:03.502530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.554 [2024-07-25 15:25:03.502558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.554 qpair failed and we were unable to recover it. 00:29:11.554 [2024-07-25 15:25:03.502810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.554 [2024-07-25 15:25:03.502819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.554 qpair failed and we were unable to recover it. 00:29:11.554 [2024-07-25 15:25:03.503131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.554 [2024-07-25 15:25:03.503138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.554 qpair failed and we were unable to recover it. 00:29:11.554 [2024-07-25 15:25:03.503397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.554 [2024-07-25 15:25:03.503404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.554 qpair failed and we were unable to recover it. 
00:29:11.554 [2024-07-25 15:25:03.503740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.554 [2024-07-25 15:25:03.503746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.554 qpair failed and we were unable to recover it. 00:29:11.554 [2024-07-25 15:25:03.504007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.554 [2024-07-25 15:25:03.504013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.554 qpair failed and we were unable to recover it. 00:29:11.554 [2024-07-25 15:25:03.504470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.554 [2024-07-25 15:25:03.504480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.554 qpair failed and we were unable to recover it. 00:29:11.554 [2024-07-25 15:25:03.504917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.554 [2024-07-25 15:25:03.504924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.554 qpair failed and we were unable to recover it. 00:29:11.554 [2024-07-25 15:25:03.505365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.554 [2024-07-25 15:25:03.505372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.554 qpair failed and we were unable to recover it. 
00:29:11.554 [2024-07-25 15:25:03.505806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.554 [2024-07-25 15:25:03.505812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.554 qpair failed and we were unable to recover it. 00:29:11.554 [2024-07-25 15:25:03.506246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.554 [2024-07-25 15:25:03.506253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.554 qpair failed and we were unable to recover it. 00:29:11.554 [2024-07-25 15:25:03.506695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.554 [2024-07-25 15:25:03.506702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.554 qpair failed and we were unable to recover it. 00:29:11.555 [2024-07-25 15:25:03.507136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.555 [2024-07-25 15:25:03.507143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.555 qpair failed and we were unable to recover it. 00:29:11.555 [2024-07-25 15:25:03.507514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.555 [2024-07-25 15:25:03.507521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.555 qpair failed and we were unable to recover it. 
00:29:11.555 [2024-07-25 15:25:03.507987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.555 [2024-07-25 15:25:03.507993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.555 qpair failed and we were unable to recover it. 00:29:11.555 [2024-07-25 15:25:03.508453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.555 [2024-07-25 15:25:03.508460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.555 qpair failed and we were unable to recover it. 00:29:11.555 [2024-07-25 15:25:03.508936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.555 [2024-07-25 15:25:03.508943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.555 qpair failed and we were unable to recover it. 00:29:11.555 [2024-07-25 15:25:03.509413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.555 [2024-07-25 15:25:03.509441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.555 qpair failed and we were unable to recover it. 00:29:11.555 [2024-07-25 15:25:03.509877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.555 [2024-07-25 15:25:03.509885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.555 qpair failed and we were unable to recover it. 
00:29:11.555 [2024-07-25 15:25:03.510366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.555 [2024-07-25 15:25:03.510394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.555 qpair failed and we were unable to recover it. 00:29:11.555 [2024-07-25 15:25:03.510915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.555 [2024-07-25 15:25:03.510924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.555 qpair failed and we were unable to recover it. 00:29:11.555 [2024-07-25 15:25:03.511398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.555 [2024-07-25 15:25:03.511405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.555 qpair failed and we were unable to recover it. 00:29:11.555 [2024-07-25 15:25:03.511716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.555 [2024-07-25 15:25:03.511724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.555 qpair failed and we were unable to recover it. 00:29:11.555 [2024-07-25 15:25:03.512072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.555 [2024-07-25 15:25:03.512079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.555 qpair failed and we were unable to recover it. 
00:29:11.555 [2024-07-25 15:25:03.512617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.555 [2024-07-25 15:25:03.512645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.555 qpair failed and we were unable to recover it. 00:29:11.555 [2024-07-25 15:25:03.513178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.555 [2024-07-25 15:25:03.513186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.555 qpair failed and we were unable to recover it. 00:29:11.555 [2024-07-25 15:25:03.513554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.555 [2024-07-25 15:25:03.513562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.555 qpair failed and we were unable to recover it. 00:29:11.555 [2024-07-25 15:25:03.514033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.555 [2024-07-25 15:25:03.514039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.555 qpair failed and we were unable to recover it. 00:29:11.555 [2024-07-25 15:25:03.514569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.555 [2024-07-25 15:25:03.514597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.555 qpair failed and we were unable to recover it. 
00:29:11.555 [2024-07-25 15:25:03.514963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.555 [2024-07-25 15:25:03.514971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.555 qpair failed and we were unable to recover it. 00:29:11.555 [2024-07-25 15:25:03.515562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.555 [2024-07-25 15:25:03.515590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.555 qpair failed and we were unable to recover it. 00:29:11.555 [2024-07-25 15:25:03.516042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.555 [2024-07-25 15:25:03.516051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.555 qpair failed and we were unable to recover it. 00:29:11.555 [2024-07-25 15:25:03.516394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.555 [2024-07-25 15:25:03.516421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.555 qpair failed and we were unable to recover it. 00:29:11.555 [2024-07-25 15:25:03.516784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.555 [2024-07-25 15:25:03.516793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.555 qpair failed and we were unable to recover it. 
00:29:11.555 [2024-07-25 15:25:03.517396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.555 [2024-07-25 15:25:03.517423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.555 qpair failed and we were unable to recover it. 00:29:11.555 [2024-07-25 15:25:03.517978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.555 [2024-07-25 15:25:03.517986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.555 qpair failed and we were unable to recover it. 00:29:11.555 [2024-07-25 15:25:03.518458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.555 [2024-07-25 15:25:03.518485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.555 qpair failed and we were unable to recover it. 00:29:11.555 [2024-07-25 15:25:03.518966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.555 [2024-07-25 15:25:03.518974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.555 qpair failed and we were unable to recover it. 00:29:11.555 [2024-07-25 15:25:03.519501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.555 [2024-07-25 15:25:03.519528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.555 qpair failed and we were unable to recover it. 
00:29:11.555 [2024-07-25 15:25:03.519980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.555 [2024-07-25 15:25:03.519989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.555 qpair failed and we were unable to recover it. 00:29:11.555 [2024-07-25 15:25:03.520416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.556 [2024-07-25 15:25:03.520443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.556 qpair failed and we were unable to recover it. 00:29:11.556 [2024-07-25 15:25:03.520675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.556 [2024-07-25 15:25:03.520683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.556 qpair failed and we were unable to recover it. 00:29:11.556 [2024-07-25 15:25:03.521116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.556 [2024-07-25 15:25:03.521123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.556 qpair failed and we were unable to recover it. 00:29:11.556 [2024-07-25 15:25:03.521575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.556 [2024-07-25 15:25:03.521582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.556 qpair failed and we were unable to recover it. 
00:29:11.556 [2024-07-25 15:25:03.521826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.556 [2024-07-25 15:25:03.521833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.556 qpair failed and we were unable to recover it. 00:29:11.556 [2024-07-25 15:25:03.522314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.556 [2024-07-25 15:25:03.522321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.556 qpair failed and we were unable to recover it. 00:29:11.556 [2024-07-25 15:25:03.522634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.556 [2024-07-25 15:25:03.522646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.556 qpair failed and we were unable to recover it. 00:29:11.556 [2024-07-25 15:25:03.523121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.556 [2024-07-25 15:25:03.523127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.556 qpair failed and we were unable to recover it. 00:29:11.556 [2024-07-25 15:25:03.523576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.556 [2024-07-25 15:25:03.523584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.556 qpair failed and we were unable to recover it. 
00:29:11.556 [2024-07-25 15:25:03.524017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.556 [2024-07-25 15:25:03.524024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.556 qpair failed and we were unable to recover it. 00:29:11.556 [2024-07-25 15:25:03.524276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.556 [2024-07-25 15:25:03.524283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.556 qpair failed and we were unable to recover it. 00:29:11.556 [2024-07-25 15:25:03.524724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.556 [2024-07-25 15:25:03.524731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.556 qpair failed and we were unable to recover it. 00:29:11.556 [2024-07-25 15:25:03.525165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.556 [2024-07-25 15:25:03.525171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.556 qpair failed and we were unable to recover it. 00:29:11.556 [2024-07-25 15:25:03.525482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.556 [2024-07-25 15:25:03.525489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.556 qpair failed and we were unable to recover it. 
00:29:11.556 [2024-07-25 15:25:03.525738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.556 [2024-07-25 15:25:03.525744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.556 qpair failed and we were unable to recover it. 00:29:11.556 [2024-07-25 15:25:03.526202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.556 [2024-07-25 15:25:03.526209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.556 qpair failed and we were unable to recover it. 00:29:11.556 [2024-07-25 15:25:03.526657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.556 [2024-07-25 15:25:03.526663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.556 qpair failed and we were unable to recover it. 00:29:11.556 [2024-07-25 15:25:03.527099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.556 [2024-07-25 15:25:03.527105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.556 qpair failed and we were unable to recover it. 00:29:11.556 [2024-07-25 15:25:03.527555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.556 [2024-07-25 15:25:03.527563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.556 qpair failed and we were unable to recover it. 
00:29:11.556 [2024-07-25 15:25:03.527879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.556 [2024-07-25 15:25:03.527886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.556 qpair failed and we were unable to recover it. 00:29:11.556 [2024-07-25 15:25:03.528363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.556 [2024-07-25 15:25:03.528370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.556 qpair failed and we were unable to recover it. 00:29:11.556 [2024-07-25 15:25:03.528848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.556 [2024-07-25 15:25:03.528854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.556 qpair failed and we were unable to recover it. 00:29:11.556 [2024-07-25 15:25:03.529291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.556 [2024-07-25 15:25:03.529298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.556 qpair failed and we were unable to recover it. 00:29:11.556 [2024-07-25 15:25:03.529741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.556 [2024-07-25 15:25:03.529748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.556 qpair failed and we were unable to recover it. 
00:29:11.556 [2024-07-25 15:25:03.530253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.556 [2024-07-25 15:25:03.530260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.556 qpair failed and we were unable to recover it. 00:29:11.556 [2024-07-25 15:25:03.530707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.556 [2024-07-25 15:25:03.530714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.556 qpair failed and we were unable to recover it. 00:29:11.556 [2024-07-25 15:25:03.531150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.556 [2024-07-25 15:25:03.531157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.556 qpair failed and we were unable to recover it. 00:29:11.556 [2024-07-25 15:25:03.531591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.556 [2024-07-25 15:25:03.531598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.556 qpair failed and we were unable to recover it. 00:29:11.556 [2024-07-25 15:25:03.532029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.556 [2024-07-25 15:25:03.532035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.556 qpair failed and we were unable to recover it. 
00:29:11.556 [2024-07-25 15:25:03.532615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.556 [2024-07-25 15:25:03.532643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.556 qpair failed and we were unable to recover it. 00:29:11.556 [2024-07-25 15:25:03.533124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.556 [2024-07-25 15:25:03.533133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.556 qpair failed and we were unable to recover it. 00:29:11.556 [2024-07-25 15:25:03.533530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.556 [2024-07-25 15:25:03.533537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.556 qpair failed and we were unable to recover it. 00:29:11.556 [2024-07-25 15:25:03.533979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.556 [2024-07-25 15:25:03.533985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.556 qpair failed and we were unable to recover it. 00:29:11.556 [2024-07-25 15:25:03.534525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.556 [2024-07-25 15:25:03.534553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.556 qpair failed and we were unable to recover it. 
00:29:11.556 [2024-07-25 15:25:03.534805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.556 [2024-07-25 15:25:03.534813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.556 qpair failed and we were unable to recover it. 00:29:11.556 [2024-07-25 15:25:03.535258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.557 [2024-07-25 15:25:03.535265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.557 qpair failed and we were unable to recover it. 00:29:11.557 [2024-07-25 15:25:03.535739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.557 [2024-07-25 15:25:03.535745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.557 qpair failed and we were unable to recover it. 00:29:11.557 [2024-07-25 15:25:03.536225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.557 [2024-07-25 15:25:03.536233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.557 qpair failed and we were unable to recover it. 00:29:11.557 [2024-07-25 15:25:03.536686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.557 [2024-07-25 15:25:03.536693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.557 qpair failed and we were unable to recover it. 
00:29:11.557 [2024-07-25 15:25:03.536776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.557 [2024-07-25 15:25:03.536788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.557 qpair failed and we were unable to recover it. 00:29:11.557 [2024-07-25 15:25:03.537215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.557 [2024-07-25 15:25:03.537223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.557 qpair failed and we were unable to recover it. 00:29:11.557 [2024-07-25 15:25:03.537339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.557 [2024-07-25 15:25:03.537345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.557 qpair failed and we were unable to recover it. 00:29:11.557 [2024-07-25 15:25:03.537742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.557 [2024-07-25 15:25:03.537748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.557 qpair failed and we were unable to recover it. 00:29:11.557 [2024-07-25 15:25:03.537955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.557 [2024-07-25 15:25:03.537961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.557 qpair failed and we were unable to recover it. 
00:29:11.557 [2024-07-25 15:25:03.538456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.557 [2024-07-25 15:25:03.538462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.557 qpair failed and we were unable to recover it. 00:29:11.557 [2024-07-25 15:25:03.538808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.557 [2024-07-25 15:25:03.538814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.557 qpair failed and we were unable to recover it. 00:29:11.557 [2024-07-25 15:25:03.539129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.557 [2024-07-25 15:25:03.539138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.557 qpair failed and we were unable to recover it. 00:29:11.557 [2024-07-25 15:25:03.539586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.557 [2024-07-25 15:25:03.539593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.557 qpair failed and we were unable to recover it. 00:29:11.557 [2024-07-25 15:25:03.540027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.557 [2024-07-25 15:25:03.540034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.557 qpair failed and we were unable to recover it. 
00:29:11.557 [2024-07-25 15:25:03.540391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.557 [2024-07-25 15:25:03.540398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.557 qpair failed and we were unable to recover it. 00:29:11.557 [2024-07-25 15:25:03.540888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.557 [2024-07-25 15:25:03.540894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.557 qpair failed and we were unable to recover it. 00:29:11.557 [2024-07-25 15:25:03.541133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.557 [2024-07-25 15:25:03.541139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.557 qpair failed and we were unable to recover it. 00:29:11.557 [2024-07-25 15:25:03.541367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.557 [2024-07-25 15:25:03.541374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.557 qpair failed and we were unable to recover it. 00:29:11.557 [2024-07-25 15:25:03.541840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.557 [2024-07-25 15:25:03.541847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.557 qpair failed and we were unable to recover it. 
00:29:11.557 [2024-07-25 15:25:03.542285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.557 [2024-07-25 15:25:03.542292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.557 qpair failed and we were unable to recover it. 00:29:11.557 [2024-07-25 15:25:03.542500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.557 [2024-07-25 15:25:03.542507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.557 qpair failed and we were unable to recover it. 00:29:11.557 [2024-07-25 15:25:03.542985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.557 [2024-07-25 15:25:03.542991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.557 qpair failed and we were unable to recover it. 00:29:11.557 [2024-07-25 15:25:03.543208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.557 [2024-07-25 15:25:03.543218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.557 qpair failed and we were unable to recover it. 00:29:11.557 [2024-07-25 15:25:03.543575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.557 [2024-07-25 15:25:03.543581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.557 qpair failed and we were unable to recover it. 
00:29:11.557 [2024-07-25 15:25:03.544021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.557 [2024-07-25 15:25:03.544028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.557 qpair failed and we were unable to recover it. 00:29:11.557 [2024-07-25 15:25:03.544388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.557 [2024-07-25 15:25:03.544416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.557 qpair failed and we were unable to recover it. 00:29:11.557 [2024-07-25 15:25:03.544671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.557 [2024-07-25 15:25:03.544683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.557 qpair failed and we were unable to recover it. 00:29:11.557 [2024-07-25 15:25:03.545170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.557 [2024-07-25 15:25:03.545177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.557 qpair failed and we were unable to recover it. 00:29:11.557 [2024-07-25 15:25:03.545666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.557 [2024-07-25 15:25:03.545673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.557 qpair failed and we were unable to recover it. 
00:29:11.557 [2024-07-25 15:25:03.545797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.557 [2024-07-25 15:25:03.545803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.557 qpair failed and we were unable to recover it. 00:29:11.557 [2024-07-25 15:25:03.546301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.557 [2024-07-25 15:25:03.546308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.557 qpair failed and we were unable to recover it. 00:29:11.557 [2024-07-25 15:25:03.546675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.557 [2024-07-25 15:25:03.546682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.557 qpair failed and we were unable to recover it. 00:29:11.557 [2024-07-25 15:25:03.547121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.557 [2024-07-25 15:25:03.547127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.557 qpair failed and we were unable to recover it. 00:29:11.557 [2024-07-25 15:25:03.547661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.557 [2024-07-25 15:25:03.547668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.557 qpair failed and we were unable to recover it. 
00:29:11.557 [2024-07-25 15:25:03.548114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.557 [2024-07-25 15:25:03.548121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.557 qpair failed and we were unable to recover it. 00:29:11.557 [2024-07-25 15:25:03.548339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.557 [2024-07-25 15:25:03.548346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.557 qpair failed and we were unable to recover it. 00:29:11.557 [2024-07-25 15:25:03.548847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.558 [2024-07-25 15:25:03.548853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.558 qpair failed and we were unable to recover it. 00:29:11.558 [2024-07-25 15:25:03.549286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.558 [2024-07-25 15:25:03.549292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.558 qpair failed and we were unable to recover it. 00:29:11.558 [2024-07-25 15:25:03.549726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.558 [2024-07-25 15:25:03.549733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.558 qpair failed and we were unable to recover it. 
00:29:11.558 [2024-07-25 15:25:03.550209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.558 [2024-07-25 15:25:03.550217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.558 qpair failed and we were unable to recover it. 00:29:11.558 [2024-07-25 15:25:03.550647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.558 [2024-07-25 15:25:03.550653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.558 qpair failed and we were unable to recover it. 00:29:11.558 [2024-07-25 15:25:03.551134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.558 [2024-07-25 15:25:03.551141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.558 qpair failed and we were unable to recover it. 00:29:11.558 [2024-07-25 15:25:03.551594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.558 [2024-07-25 15:25:03.551601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.558 qpair failed and we were unable to recover it. 00:29:11.558 [2024-07-25 15:25:03.552033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.558 [2024-07-25 15:25:03.552039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.558 qpair failed and we were unable to recover it. 
00:29:11.558 [2024-07-25 15:25:03.552482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.558 [2024-07-25 15:25:03.552490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.558 qpair failed and we were unable to recover it. 00:29:11.558 [2024-07-25 15:25:03.552937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.558 [2024-07-25 15:25:03.552944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.558 qpair failed and we were unable to recover it. 00:29:11.558 [2024-07-25 15:25:03.553472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.558 [2024-07-25 15:25:03.553500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.558 qpair failed and we were unable to recover it. 00:29:11.558 [2024-07-25 15:25:03.553954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.558 [2024-07-25 15:25:03.553963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.558 qpair failed and we were unable to recover it. 00:29:11.558 [2024-07-25 15:25:03.554219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.558 [2024-07-25 15:25:03.554235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.558 qpair failed and we were unable to recover it. 
00:29:11.558 [2024-07-25 15:25:03.554679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.558 [2024-07-25 15:25:03.554686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.558 qpair failed and we were unable to recover it. 00:29:11.558 [2024-07-25 15:25:03.555128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.558 [2024-07-25 15:25:03.555134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.558 qpair failed and we were unable to recover it. 00:29:11.558 [2024-07-25 15:25:03.555581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.558 [2024-07-25 15:25:03.555591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.558 qpair failed and we were unable to recover it. 00:29:11.558 [2024-07-25 15:25:03.555841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.558 [2024-07-25 15:25:03.555848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.558 qpair failed and we were unable to recover it. 00:29:11.558 [2024-07-25 15:25:03.556294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.558 [2024-07-25 15:25:03.556302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.558 qpair failed and we were unable to recover it. 
00:29:11.558 [2024-07-25 15:25:03.556737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.558 [2024-07-25 15:25:03.556744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.558 qpair failed and we were unable to recover it. 00:29:11.558 [2024-07-25 15:25:03.557181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.558 [2024-07-25 15:25:03.557189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.558 qpair failed and we were unable to recover it. 00:29:11.558 [2024-07-25 15:25:03.557639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.558 [2024-07-25 15:25:03.557646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.558 qpair failed and we were unable to recover it. 00:29:11.558 [2024-07-25 15:25:03.558006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.558 [2024-07-25 15:25:03.558014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.558 qpair failed and we were unable to recover it. 00:29:11.558 [2024-07-25 15:25:03.558479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.558 [2024-07-25 15:25:03.558506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.558 qpair failed and we were unable to recover it. 
00:29:11.558 [2024-07-25 15:25:03.558966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.558 [2024-07-25 15:25:03.558975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.558 qpair failed and we were unable to recover it. 00:29:11.558 [2024-07-25 15:25:03.559508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.558 [2024-07-25 15:25:03.559535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.558 qpair failed and we were unable to recover it. 00:29:11.558 [2024-07-25 15:25:03.560028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.558 [2024-07-25 15:25:03.560036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.558 qpair failed and we were unable to recover it. 00:29:11.558 [2024-07-25 15:25:03.560596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.558 [2024-07-25 15:25:03.560623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.558 qpair failed and we were unable to recover it. 00:29:11.558 [2024-07-25 15:25:03.561073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.558 [2024-07-25 15:25:03.561082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.558 qpair failed and we were unable to recover it. 
00:29:11.558 [2024-07-25 15:25:03.561694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.558 [2024-07-25 15:25:03.561721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.558 qpair failed and we were unable to recover it. 00:29:11.558 [2024-07-25 15:25:03.562178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.558 [2024-07-25 15:25:03.562186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.558 qpair failed and we were unable to recover it. 00:29:11.558 [2024-07-25 15:25:03.562718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.558 [2024-07-25 15:25:03.562745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.558 qpair failed and we were unable to recover it. 00:29:11.558 [2024-07-25 15:25:03.563198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.558 [2024-07-25 15:25:03.563210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.558 qpair failed and we were unable to recover it. 00:29:11.558 [2024-07-25 15:25:03.563767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.558 [2024-07-25 15:25:03.563794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.558 qpair failed and we were unable to recover it. 
00:29:11.558 [2024-07-25 15:25:03.564169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.558 [2024-07-25 15:25:03.564178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.558 qpair failed and we were unable to recover it. 00:29:11.558 [2024-07-25 15:25:03.564597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.558 [2024-07-25 15:25:03.564624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.558 qpair failed and we were unable to recover it. 00:29:11.558 [2024-07-25 15:25:03.565087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.558 [2024-07-25 15:25:03.565095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.559 qpair failed and we were unable to recover it. 00:29:11.559 [2024-07-25 15:25:03.565531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.559 [2024-07-25 15:25:03.565559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.559 qpair failed and we were unable to recover it. 00:29:11.559 [2024-07-25 15:25:03.566012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.559 [2024-07-25 15:25:03.566021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.559 qpair failed and we were unable to recover it. 
00:29:11.559 [2024-07-25 15:25:03.566551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.559 [2024-07-25 15:25:03.566579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.559 qpair failed and we were unable to recover it. 00:29:11.559 [2024-07-25 15:25:03.567034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.559 [2024-07-25 15:25:03.567042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.559 qpair failed and we were unable to recover it. 00:29:11.559 [2024-07-25 15:25:03.567574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.559 [2024-07-25 15:25:03.567602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.559 qpair failed and we were unable to recover it. 00:29:11.559 [2024-07-25 15:25:03.567974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.559 [2024-07-25 15:25:03.567983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.559 qpair failed and we were unable to recover it. 00:29:11.559 [2024-07-25 15:25:03.568431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.559 [2024-07-25 15:25:03.568459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.559 qpair failed and we were unable to recover it. 
00:29:11.559 [2024-07-25 15:25:03.568912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.559 [2024-07-25 15:25:03.568921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.559 qpair failed and we were unable to recover it. 00:29:11.559 [2024-07-25 15:25:03.569456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.559 [2024-07-25 15:25:03.569483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.559 qpair failed and we were unable to recover it. 00:29:11.559 [2024-07-25 15:25:03.569964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.559 [2024-07-25 15:25:03.569973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.559 qpair failed and we were unable to recover it. 00:29:11.559 [2024-07-25 15:25:03.570522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.559 [2024-07-25 15:25:03.570549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.559 qpair failed and we were unable to recover it. 00:29:11.559 [2024-07-25 15:25:03.570918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.559 [2024-07-25 15:25:03.570927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.559 qpair failed and we were unable to recover it. 
00:29:11.559 [2024-07-25 15:25:03.571484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.559 [2024-07-25 15:25:03.571511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.559 qpair failed and we were unable to recover it. 00:29:11.559 [2024-07-25 15:25:03.571748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.559 [2024-07-25 15:25:03.571757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.559 qpair failed and we were unable to recover it. 00:29:11.559 [2024-07-25 15:25:03.572227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.559 [2024-07-25 15:25:03.572235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.559 qpair failed and we were unable to recover it. 00:29:11.559 [2024-07-25 15:25:03.572741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.559 [2024-07-25 15:25:03.572748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.559 qpair failed and we were unable to recover it. 00:29:11.559 [2024-07-25 15:25:03.572996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.559 [2024-07-25 15:25:03.573002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.559 qpair failed and we were unable to recover it. 
00:29:11.559 [2024-07-25 15:25:03.573232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.559 [2024-07-25 15:25:03.573240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.559 qpair failed and we were unable to recover it. 00:29:11.559 [2024-07-25 15:25:03.573498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.559 [2024-07-25 15:25:03.573505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.559 qpair failed and we were unable to recover it. 00:29:11.559 [2024-07-25 15:25:03.573863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.559 [2024-07-25 15:25:03.573874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.559 qpair failed and we were unable to recover it. 00:29:11.559 [2024-07-25 15:25:03.574359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.559 [2024-07-25 15:25:03.574367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.559 qpair failed and we were unable to recover it. 00:29:11.559 [2024-07-25 15:25:03.574727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.559 [2024-07-25 15:25:03.574734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.559 qpair failed and we were unable to recover it. 
00:29:11.559 [2024-07-25 15:25:03.574984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.559 [2024-07-25 15:25:03.574991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.559 qpair failed and we were unable to recover it. 00:29:11.559 [2024-07-25 15:25:03.575454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.559 [2024-07-25 15:25:03.575462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.559 qpair failed and we were unable to recover it. 00:29:11.559 [2024-07-25 15:25:03.575681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.559 [2024-07-25 15:25:03.575687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.559 qpair failed and we were unable to recover it. 00:29:11.559 [2024-07-25 15:25:03.576024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.559 [2024-07-25 15:25:03.576031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.559 qpair failed and we were unable to recover it. 00:29:11.559 [2024-07-25 15:25:03.576487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.559 [2024-07-25 15:25:03.576495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.559 qpair failed and we were unable to recover it. 
00:29:11.559 [2024-07-25 15:25:03.576945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.559 [2024-07-25 15:25:03.576952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.559 qpair failed and we were unable to recover it.
00:29:11.559 [2024-07-25 15:25:03.577387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.559 [2024-07-25 15:25:03.577394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.559 qpair failed and we were unable to recover it.
00:29:11.559 [2024-07-25 15:25:03.577844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.559 [2024-07-25 15:25:03.577851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.559 qpair failed and we were unable to recover it.
00:29:11.559 [2024-07-25 15:25:03.578285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.559 [2024-07-25 15:25:03.578292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.559 qpair failed and we were unable to recover it.
00:29:11.559 [2024-07-25 15:25:03.578749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.559 [2024-07-25 15:25:03.578756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.559 qpair failed and we were unable to recover it.
00:29:11.559 [2024-07-25 15:25:03.579233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.559 [2024-07-25 15:25:03.579240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.559 qpair failed and we were unable to recover it.
00:29:11.559 [2024-07-25 15:25:03.579693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.559 [2024-07-25 15:25:03.579700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.559 qpair failed and we were unable to recover it.
00:29:11.559 [2024-07-25 15:25:03.579810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.559 [2024-07-25 15:25:03.579817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.560 qpair failed and we were unable to recover it.
00:29:11.560 [2024-07-25 15:25:03.580253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.560 [2024-07-25 15:25:03.580260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.560 qpair failed and we were unable to recover it.
00:29:11.560 [2024-07-25 15:25:03.580640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.560 [2024-07-25 15:25:03.580647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.560 qpair failed and we were unable to recover it.
00:29:11.560 [2024-07-25 15:25:03.581082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.560 [2024-07-25 15:25:03.581089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.560 qpair failed and we were unable to recover it.
00:29:11.560 [2024-07-25 15:25:03.581307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.560 [2024-07-25 15:25:03.581314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.560 qpair failed and we were unable to recover it.
00:29:11.560 [2024-07-25 15:25:03.581538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.560 [2024-07-25 15:25:03.581550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.560 qpair failed and we were unable to recover it.
00:29:11.560 [2024-07-25 15:25:03.582096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.560 [2024-07-25 15:25:03.582103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.560 qpair failed and we were unable to recover it.
00:29:11.560 [2024-07-25 15:25:03.582554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.560 [2024-07-25 15:25:03.582561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.560 qpair failed and we were unable to recover it.
00:29:11.560 [2024-07-25 15:25:03.582786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.560 [2024-07-25 15:25:03.582792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.560 qpair failed and we were unable to recover it.
00:29:11.560 [2024-07-25 15:25:03.583014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.560 [2024-07-25 15:25:03.583021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.560 qpair failed and we were unable to recover it.
00:29:11.560 [2024-07-25 15:25:03.583481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.560 [2024-07-25 15:25:03.583491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.560 qpair failed and we were unable to recover it.
00:29:11.560 [2024-07-25 15:25:03.584027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.560 [2024-07-25 15:25:03.584034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.560 qpair failed and we were unable to recover it.
00:29:11.560 [2024-07-25 15:25:03.584470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.560 [2024-07-25 15:25:03.584477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.560 qpair failed and we were unable to recover it.
00:29:11.560 [2024-07-25 15:25:03.584986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.560 [2024-07-25 15:25:03.584993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.560 qpair failed and we were unable to recover it.
00:29:11.560 [2024-07-25 15:25:03.585445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.560 [2024-07-25 15:25:03.585473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.560 qpair failed and we were unable to recover it.
00:29:11.560 [2024-07-25 15:25:03.585696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.560 [2024-07-25 15:25:03.585708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.560 qpair failed and we were unable to recover it.
00:29:11.560 [2024-07-25 15:25:03.586187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.560 [2024-07-25 15:25:03.586194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.560 qpair failed and we were unable to recover it.
00:29:11.560 [2024-07-25 15:25:03.586545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.560 [2024-07-25 15:25:03.586553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.560 qpair failed and we were unable to recover it.
00:29:11.560 [2024-07-25 15:25:03.587027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.560 [2024-07-25 15:25:03.587034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.560 qpair failed and we were unable to recover it.
00:29:11.560 [2024-07-25 15:25:03.587440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.560 [2024-07-25 15:25:03.587469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.560 qpair failed and we were unable to recover it.
00:29:11.560 [2024-07-25 15:25:03.587692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.560 [2024-07-25 15:25:03.587704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.560 qpair failed and we were unable to recover it.
00:29:11.560 [2024-07-25 15:25:03.587902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.560 [2024-07-25 15:25:03.587912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.560 qpair failed and we were unable to recover it.
00:29:11.560 [2024-07-25 15:25:03.588298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.560 [2024-07-25 15:25:03.588306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.560 qpair failed and we were unable to recover it.
00:29:11.560 [2024-07-25 15:25:03.588551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.560 [2024-07-25 15:25:03.588558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.560 qpair failed and we were unable to recover it.
00:29:11.560 [2024-07-25 15:25:03.589046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.560 [2024-07-25 15:25:03.589053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.560 qpair failed and we were unable to recover it.
00:29:11.560 [2024-07-25 15:25:03.589492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.560 [2024-07-25 15:25:03.589504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.560 qpair failed and we were unable to recover it.
00:29:11.560 [2024-07-25 15:25:03.589942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.560 [2024-07-25 15:25:03.589949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.560 qpair failed and we were unable to recover it.
00:29:11.560 [2024-07-25 15:25:03.590384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.560 [2024-07-25 15:25:03.590391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.560 qpair failed and we were unable to recover it.
00:29:11.560 [2024-07-25 15:25:03.590869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.560 [2024-07-25 15:25:03.590876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.560 qpair failed and we were unable to recover it.
00:29:11.560 [2024-07-25 15:25:03.591308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.560 [2024-07-25 15:25:03.591316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.560 qpair failed and we were unable to recover it.
00:29:11.561 [2024-07-25 15:25:03.591758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.561 [2024-07-25 15:25:03.591765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.561 qpair failed and we were unable to recover it.
00:29:11.561 [2024-07-25 15:25:03.592203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.561 [2024-07-25 15:25:03.592211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.561 qpair failed and we were unable to recover it.
00:29:11.561 [2024-07-25 15:25:03.592667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.561 [2024-07-25 15:25:03.592675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.561 qpair failed and we were unable to recover it.
00:29:11.561 [2024-07-25 15:25:03.592993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.561 [2024-07-25 15:25:03.593000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.561 qpair failed and we were unable to recover it.
00:29:11.561 [2024-07-25 15:25:03.593593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.561 [2024-07-25 15:25:03.593620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.561 qpair failed and we were unable to recover it.
00:29:11.561 [2024-07-25 15:25:03.594100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.561 [2024-07-25 15:25:03.594109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.561 qpair failed and we were unable to recover it.
00:29:11.561 [2024-07-25 15:25:03.594568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.561 [2024-07-25 15:25:03.594575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.561 qpair failed and we were unable to recover it.
00:29:11.561 [2024-07-25 15:25:03.595031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.561 [2024-07-25 15:25:03.595038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.561 qpair failed and we were unable to recover it.
00:29:11.561 [2024-07-25 15:25:03.595563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.561 [2024-07-25 15:25:03.595591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.561 qpair failed and we were unable to recover it.
00:29:11.561 [2024-07-25 15:25:03.596036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.561 [2024-07-25 15:25:03.596046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.561 qpair failed and we were unable to recover it.
00:29:11.561 [2024-07-25 15:25:03.596641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.561 [2024-07-25 15:25:03.596669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.561 qpair failed and we were unable to recover it.
00:29:11.561 [2024-07-25 15:25:03.597126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.561 [2024-07-25 15:25:03.597135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.561 qpair failed and we were unable to recover it.
00:29:11.561 [2024-07-25 15:25:03.597670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.561 [2024-07-25 15:25:03.597698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.561 qpair failed and we were unable to recover it.
00:29:11.561 [2024-07-25 15:25:03.598180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.561 [2024-07-25 15:25:03.598188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.561 qpair failed and we were unable to recover it.
00:29:11.561 [2024-07-25 15:25:03.598761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.561 [2024-07-25 15:25:03.598789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.561 qpair failed and we were unable to recover it.
00:29:11.561 [2024-07-25 15:25:03.599430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.561 [2024-07-25 15:25:03.599458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.561 qpair failed and we were unable to recover it.
00:29:11.561 [2024-07-25 15:25:03.599921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.561 [2024-07-25 15:25:03.599930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.561 qpair failed and we were unable to recover it.
00:29:11.561 [2024-07-25 15:25:03.600465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.561 [2024-07-25 15:25:03.600492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.561 qpair failed and we were unable to recover it.
00:29:11.561 [2024-07-25 15:25:03.600731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.561 [2024-07-25 15:25:03.600739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.561 qpair failed and we were unable to recover it.
00:29:11.561 [2024-07-25 15:25:03.601209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.561 [2024-07-25 15:25:03.601216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.561 qpair failed and we were unable to recover it.
00:29:11.561 [2024-07-25 15:25:03.601451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.561 [2024-07-25 15:25:03.601458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.561 qpair failed and we were unable to recover it.
00:29:11.561 [2024-07-25 15:25:03.601775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.561 [2024-07-25 15:25:03.601782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.561 qpair failed and we were unable to recover it.
00:29:11.561 [2024-07-25 15:25:03.602166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.561 [2024-07-25 15:25:03.602173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.561 qpair failed and we were unable to recover it.
00:29:11.561 [2024-07-25 15:25:03.602444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.561 [2024-07-25 15:25:03.602451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.561 qpair failed and we were unable to recover it.
00:29:11.561 [2024-07-25 15:25:03.602770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.561 [2024-07-25 15:25:03.602777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.561 qpair failed and we were unable to recover it.
00:29:11.561 [2024-07-25 15:25:03.603254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.561 [2024-07-25 15:25:03.603260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.561 qpair failed and we were unable to recover it.
00:29:11.561 [2024-07-25 15:25:03.603384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.561 [2024-07-25 15:25:03.603390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.561 qpair failed and we were unable to recover it.
00:29:11.561 [2024-07-25 15:25:03.603807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.561 [2024-07-25 15:25:03.603814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.561 qpair failed and we were unable to recover it.
00:29:11.561 [2024-07-25 15:25:03.604053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.561 [2024-07-25 15:25:03.604060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.561 qpair failed and we were unable to recover it.
00:29:11.561 [2024-07-25 15:25:03.604423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.561 [2024-07-25 15:25:03.604430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.561 qpair failed and we were unable to recover it.
00:29:11.561 [2024-07-25 15:25:03.604788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.561 [2024-07-25 15:25:03.604795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.561 qpair failed and we were unable to recover it.
00:29:11.561 [2024-07-25 15:25:03.605142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.561 [2024-07-25 15:25:03.605148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.561 qpair failed and we were unable to recover it.
00:29:11.561 [2024-07-25 15:25:03.605613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.561 [2024-07-25 15:25:03.605621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.561 qpair failed and we were unable to recover it.
00:29:11.561 [2024-07-25 15:25:03.605961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.561 [2024-07-25 15:25:03.605967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.561 qpair failed and we were unable to recover it.
00:29:11.561 [2024-07-25 15:25:03.606279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.561 [2024-07-25 15:25:03.606285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.561 qpair failed and we were unable to recover it.
00:29:11.561 [2024-07-25 15:25:03.606657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.561 [2024-07-25 15:25:03.606664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.561 qpair failed and we were unable to recover it.
00:29:11.562 [2024-07-25 15:25:03.607114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.562 [2024-07-25 15:25:03.607121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.562 qpair failed and we were unable to recover it.
00:29:11.562 [2024-07-25 15:25:03.607489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.562 [2024-07-25 15:25:03.607496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.562 qpair failed and we were unable to recover it.
00:29:11.562 [2024-07-25 15:25:03.607863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.562 [2024-07-25 15:25:03.607870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.562 qpair failed and we were unable to recover it.
00:29:11.562 [2024-07-25 15:25:03.608328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.562 [2024-07-25 15:25:03.608334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.562 qpair failed and we were unable to recover it.
00:29:11.562 [2024-07-25 15:25:03.608670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.562 [2024-07-25 15:25:03.608676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.562 qpair failed and we were unable to recover it.
00:29:11.562 [2024-07-25 15:25:03.609128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.562 [2024-07-25 15:25:03.609134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.562 qpair failed and we were unable to recover it.
00:29:11.562 [2024-07-25 15:25:03.609592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.562 [2024-07-25 15:25:03.609599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.562 qpair failed and we were unable to recover it.
00:29:11.562 [2024-07-25 15:25:03.610035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.562 [2024-07-25 15:25:03.610041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.562 qpair failed and we were unable to recover it.
00:29:11.562 [2024-07-25 15:25:03.610481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.562 [2024-07-25 15:25:03.610488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.562 qpair failed and we were unable to recover it.
00:29:11.562 [2024-07-25 15:25:03.610904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.562 [2024-07-25 15:25:03.610910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.562 qpair failed and we were unable to recover it.
00:29:11.562 [2024-07-25 15:25:03.611472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.562 [2024-07-25 15:25:03.611500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.562 qpair failed and we were unable to recover it.
00:29:11.562 [2024-07-25 15:25:03.611874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.562 [2024-07-25 15:25:03.611883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.562 qpair failed and we were unable to recover it.
00:29:11.562 [2024-07-25 15:25:03.612194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.562 [2024-07-25 15:25:03.612204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.562 qpair failed and we were unable to recover it.
00:29:11.562 [2024-07-25 15:25:03.612645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.562 [2024-07-25 15:25:03.612652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.562 qpair failed and we were unable to recover it.
00:29:11.562 [2024-07-25 15:25:03.613006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.562 [2024-07-25 15:25:03.613012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.562 qpair failed and we were unable to recover it.
00:29:11.562 [2024-07-25 15:25:03.613550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.562 [2024-07-25 15:25:03.613577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.562 qpair failed and we were unable to recover it.
00:29:11.562 [2024-07-25 15:25:03.614038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.562 [2024-07-25 15:25:03.614046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.562 qpair failed and we were unable to recover it.
00:29:11.562 [2024-07-25 15:25:03.614578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.562 [2024-07-25 15:25:03.614606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.562 qpair failed and we were unable to recover it.
00:29:11.562 [2024-07-25 15:25:03.615115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.562 [2024-07-25 15:25:03.615124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.562 qpair failed and we were unable to recover it.
00:29:11.562 [2024-07-25 15:25:03.615660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.562 [2024-07-25 15:25:03.615687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.562 qpair failed and we were unable to recover it.
00:29:11.562 [2024-07-25 15:25:03.616086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.562 [2024-07-25 15:25:03.616095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.562 qpair failed and we were unable to recover it.
00:29:11.562 [2024-07-25 15:25:03.616454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.562 [2024-07-25 15:25:03.616461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.562 qpair failed and we were unable to recover it.
00:29:11.562 [2024-07-25 15:25:03.616784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.562 [2024-07-25 15:25:03.616791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.562 qpair failed and we were unable to recover it. 00:29:11.562 [2024-07-25 15:25:03.617139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.562 [2024-07-25 15:25:03.617146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.562 qpair failed and we were unable to recover it. 00:29:11.562 [2024-07-25 15:25:03.617457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.562 [2024-07-25 15:25:03.617463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.562 qpair failed and we were unable to recover it. 00:29:11.562 [2024-07-25 15:25:03.617709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.562 [2024-07-25 15:25:03.617715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.562 qpair failed and we were unable to recover it. 00:29:11.562 [2024-07-25 15:25:03.618197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.562 [2024-07-25 15:25:03.618212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.562 qpair failed and we were unable to recover it. 
00:29:11.562 [2024-07-25 15:25:03.618659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.562 [2024-07-25 15:25:03.618666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.562 qpair failed and we were unable to recover it. 00:29:11.562 [2024-07-25 15:25:03.619145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.562 [2024-07-25 15:25:03.619151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.562 qpair failed and we were unable to recover it. 00:29:11.562 [2024-07-25 15:25:03.619609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.562 [2024-07-25 15:25:03.619616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.562 qpair failed and we were unable to recover it. 00:29:11.562 [2024-07-25 15:25:03.620057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.562 [2024-07-25 15:25:03.620064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.562 qpair failed and we were unable to recover it. 00:29:11.562 [2024-07-25 15:25:03.620633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.562 [2024-07-25 15:25:03.620660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.562 qpair failed and we were unable to recover it. 
00:29:11.562 [2024-07-25 15:25:03.620770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.562 [2024-07-25 15:25:03.620778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.562 qpair failed and we were unable to recover it. 00:29:11.562 [2024-07-25 15:25:03.621215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.563 [2024-07-25 15:25:03.621223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.563 qpair failed and we were unable to recover it. 00:29:11.563 [2024-07-25 15:25:03.621656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.563 [2024-07-25 15:25:03.621663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.563 qpair failed and we were unable to recover it. 00:29:11.563 [2024-07-25 15:25:03.621889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.563 [2024-07-25 15:25:03.621896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.563 qpair failed and we were unable to recover it. 00:29:11.563 [2024-07-25 15:25:03.622356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.563 [2024-07-25 15:25:03.622362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.563 qpair failed and we were unable to recover it. 
00:29:11.563 [2024-07-25 15:25:03.622583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.563 [2024-07-25 15:25:03.622589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.563 qpair failed and we were unable to recover it. 00:29:11.563 [2024-07-25 15:25:03.622913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.563 [2024-07-25 15:25:03.622920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.563 qpair failed and we were unable to recover it. 00:29:11.563 [2024-07-25 15:25:03.623175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.563 [2024-07-25 15:25:03.623182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.563 qpair failed and we were unable to recover it. 00:29:11.563 [2024-07-25 15:25:03.623730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.563 [2024-07-25 15:25:03.623737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.563 qpair failed and we were unable to recover it. 00:29:11.563 [2024-07-25 15:25:03.624169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.563 [2024-07-25 15:25:03.624176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.563 qpair failed and we were unable to recover it. 
00:29:11.563 [2024-07-25 15:25:03.624658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.563 [2024-07-25 15:25:03.624665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.563 qpair failed and we were unable to recover it. 00:29:11.563 [2024-07-25 15:25:03.624894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.563 [2024-07-25 15:25:03.624901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.563 qpair failed and we were unable to recover it. 00:29:11.563 [2024-07-25 15:25:03.625252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.563 [2024-07-25 15:25:03.625259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.563 qpair failed and we were unable to recover it. 00:29:11.563 [2024-07-25 15:25:03.625568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.563 [2024-07-25 15:25:03.625576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.563 qpair failed and we were unable to recover it. 00:29:11.563 [2024-07-25 15:25:03.626067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.563 [2024-07-25 15:25:03.626073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.563 qpair failed and we were unable to recover it. 
00:29:11.563 [2024-07-25 15:25:03.626429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.563 [2024-07-25 15:25:03.626436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.563 qpair failed and we were unable to recover it. 00:29:11.563 [2024-07-25 15:25:03.626875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.563 [2024-07-25 15:25:03.626882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.563 qpair failed and we were unable to recover it. 00:29:11.563 [2024-07-25 15:25:03.627126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.563 [2024-07-25 15:25:03.627132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.563 qpair failed and we were unable to recover it. 00:29:11.563 [2024-07-25 15:25:03.627468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.563 [2024-07-25 15:25:03.627475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.563 qpair failed and we were unable to recover it. 00:29:11.563 [2024-07-25 15:25:03.627957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.563 [2024-07-25 15:25:03.627963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.563 qpair failed and we were unable to recover it. 
00:29:11.563 [2024-07-25 15:25:03.628331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.563 [2024-07-25 15:25:03.628337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.563 qpair failed and we were unable to recover it. 00:29:11.563 [2024-07-25 15:25:03.628789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.563 [2024-07-25 15:25:03.628796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.563 qpair failed and we were unable to recover it. 00:29:11.563 [2024-07-25 15:25:03.629109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.563 [2024-07-25 15:25:03.629115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.563 qpair failed and we were unable to recover it. 00:29:11.563 [2024-07-25 15:25:03.629563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.563 [2024-07-25 15:25:03.629569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.563 qpair failed and we were unable to recover it. 00:29:11.563 [2024-07-25 15:25:03.629791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.563 [2024-07-25 15:25:03.629802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.563 qpair failed and we were unable to recover it. 
00:29:11.563 [2024-07-25 15:25:03.629996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.563 [2024-07-25 15:25:03.630003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.563 qpair failed and we were unable to recover it. 00:29:11.563 [2024-07-25 15:25:03.630473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.563 [2024-07-25 15:25:03.630480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.563 qpair failed and we were unable to recover it. 00:29:11.563 [2024-07-25 15:25:03.630955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.563 [2024-07-25 15:25:03.630962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.563 qpair failed and we were unable to recover it. 00:29:11.563 [2024-07-25 15:25:03.631348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.563 [2024-07-25 15:25:03.631355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.563 qpair failed and we were unable to recover it. 00:29:11.563 [2024-07-25 15:25:03.631805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.563 [2024-07-25 15:25:03.631811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.563 qpair failed and we were unable to recover it. 
00:29:11.563 [2024-07-25 15:25:03.632250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.563 [2024-07-25 15:25:03.632257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.563 qpair failed and we were unable to recover it. 00:29:11.563 [2024-07-25 15:25:03.632738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.563 [2024-07-25 15:25:03.632745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.563 qpair failed and we were unable to recover it. 00:29:11.563 [2024-07-25 15:25:03.633062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.563 [2024-07-25 15:25:03.633068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.563 qpair failed and we were unable to recover it. 00:29:11.563 [2024-07-25 15:25:03.633486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.563 [2024-07-25 15:25:03.633493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.563 qpair failed and we were unable to recover it. 00:29:11.563 [2024-07-25 15:25:03.633974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.563 [2024-07-25 15:25:03.633984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.563 qpair failed and we were unable to recover it. 
00:29:11.563 15:25:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:29:11.563 15:25:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0
00:29:11.563 [2024-07-25 15:25:03.634406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.563 [2024-07-25 15:25:03.634438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.563 qpair failed and we were unable to recover it.
00:29:11.563 15:25:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:29:11.564 15:25:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable
00:29:11.564 [2024-07-25 15:25:03.634963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.564 [2024-07-25 15:25:03.634972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.564 qpair failed and we were unable to recover it.
00:29:11.564 15:25:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:11.564 [2024-07-25 15:25:03.635500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.564 [2024-07-25 15:25:03.635528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.564 qpair failed and we were unable to recover it.
00:29:11.564 [2024-07-25 15:25:03.636064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.564 [2024-07-25 15:25:03.636073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.564 qpair failed and we were unable to recover it. 00:29:11.564 [2024-07-25 15:25:03.636489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.564 [2024-07-25 15:25:03.636516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.564 qpair failed and we were unable to recover it. 00:29:11.564 [2024-07-25 15:25:03.636979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.564 [2024-07-25 15:25:03.636988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.564 qpair failed and we were unable to recover it. 00:29:11.564 [2024-07-25 15:25:03.637216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.564 [2024-07-25 15:25:03.637230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.564 qpair failed and we were unable to recover it. 00:29:11.564 [2024-07-25 15:25:03.637717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.564 [2024-07-25 15:25:03.637725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.564 qpair failed and we were unable to recover it. 
00:29:11.564 [2024-07-25 15:25:03.638158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.564 [2024-07-25 15:25:03.638166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.564 qpair failed and we were unable to recover it. 00:29:11.564 [2024-07-25 15:25:03.638710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.564 [2024-07-25 15:25:03.638738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.564 qpair failed and we were unable to recover it. 00:29:11.564 [2024-07-25 15:25:03.639414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.564 [2024-07-25 15:25:03.639446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.564 qpair failed and we were unable to recover it. 00:29:11.564 [2024-07-25 15:25:03.639690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.564 [2024-07-25 15:25:03.639699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.564 qpair failed and we were unable to recover it. 00:29:11.564 [2024-07-25 15:25:03.640173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.564 [2024-07-25 15:25:03.640180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.564 qpair failed and we were unable to recover it. 
00:29:11.564 [2024-07-25 15:25:03.640716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.564 [2024-07-25 15:25:03.640724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.564 qpair failed and we were unable to recover it. 00:29:11.564 [2024-07-25 15:25:03.641183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.564 [2024-07-25 15:25:03.641191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.564 qpair failed and we were unable to recover it. 00:29:11.564 [2024-07-25 15:25:03.641549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.564 [2024-07-25 15:25:03.641576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.564 qpair failed and we were unable to recover it. 00:29:11.564 [2024-07-25 15:25:03.641940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.564 [2024-07-25 15:25:03.641949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.564 qpair failed and we were unable to recover it. 00:29:11.564 [2024-07-25 15:25:03.642183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.564 [2024-07-25 15:25:03.642190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.564 qpair failed and we were unable to recover it. 
00:29:11.564 [2024-07-25 15:25:03.642662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.564 [2024-07-25 15:25:03.642670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.564 qpair failed and we were unable to recover it. 00:29:11.564 [2024-07-25 15:25:03.643108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.564 [2024-07-25 15:25:03.643115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.564 qpair failed and we were unable to recover it. 00:29:11.564 [2024-07-25 15:25:03.643662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.564 [2024-07-25 15:25:03.643689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.564 qpair failed and we were unable to recover it. 00:29:11.564 [2024-07-25 15:25:03.643909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.564 [2024-07-25 15:25:03.643918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.564 qpair failed and we were unable to recover it. 00:29:11.564 [2024-07-25 15:25:03.644164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.564 [2024-07-25 15:25:03.644171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.564 qpair failed and we were unable to recover it. 
00:29:11.564 [2024-07-25 15:25:03.644699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.564 [2024-07-25 15:25:03.644707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.564 qpair failed and we were unable to recover it. 00:29:11.564 [2024-07-25 15:25:03.645142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.564 [2024-07-25 15:25:03.645149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.564 qpair failed and we were unable to recover it. 00:29:11.564 [2024-07-25 15:25:03.645741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.564 [2024-07-25 15:25:03.645768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.564 qpair failed and we were unable to recover it. 00:29:11.564 [2024-07-25 15:25:03.646439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.564 [2024-07-25 15:25:03.646467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.564 qpair failed and we were unable to recover it. 00:29:11.564 [2024-07-25 15:25:03.646787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.564 [2024-07-25 15:25:03.646797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.564 qpair failed and we were unable to recover it. 
00:29:11.564 [2024-07-25 15:25:03.647148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.564 [2024-07-25 15:25:03.647156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.564 qpair failed and we were unable to recover it. 00:29:11.564 [2024-07-25 15:25:03.647637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.564 [2024-07-25 15:25:03.647645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.564 qpair failed and we were unable to recover it. 00:29:11.564 [2024-07-25 15:25:03.647954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.564 [2024-07-25 15:25:03.647961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.564 qpair failed and we were unable to recover it. 00:29:11.564 [2024-07-25 15:25:03.648486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.564 [2024-07-25 15:25:03.648514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.564 qpair failed and we were unable to recover it. 00:29:11.564 [2024-07-25 15:25:03.648761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.564 [2024-07-25 15:25:03.648769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.564 qpair failed and we were unable to recover it. 
00:29:11.564 [2024-07-25 15:25:03.649226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.564 [2024-07-25 15:25:03.649233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.564 qpair failed and we were unable to recover it. 00:29:11.564 [2024-07-25 15:25:03.649703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.564 [2024-07-25 15:25:03.649710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.564 qpair failed and we were unable to recover it. 00:29:11.564 [2024-07-25 15:25:03.650150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.564 [2024-07-25 15:25:03.650157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.564 qpair failed and we were unable to recover it. 00:29:11.565 [2024-07-25 15:25:03.650382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.565 [2024-07-25 15:25:03.650389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.565 qpair failed and we were unable to recover it. 00:29:11.565 [2024-07-25 15:25:03.650863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.565 [2024-07-25 15:25:03.650870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.565 qpair failed and we were unable to recover it. 
00:29:11.565 [2024-07-25 15:25:03.651315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.565 [2024-07-25 15:25:03.651322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.565 qpair failed and we were unable to recover it. 00:29:11.565 [2024-07-25 15:25:03.651822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.565 [2024-07-25 15:25:03.651829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.565 qpair failed and we were unable to recover it. 00:29:11.565 [2024-07-25 15:25:03.651952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.565 [2024-07-25 15:25:03.651959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.565 qpair failed and we were unable to recover it. 00:29:11.565 [2024-07-25 15:25:03.652430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.565 [2024-07-25 15:25:03.652437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.565 qpair failed and we were unable to recover it. 00:29:11.565 [2024-07-25 15:25:03.652873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.565 [2024-07-25 15:25:03.652880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.565 qpair failed and we were unable to recover it. 
00:29:11.565 [2024-07-25 15:25:03.653122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.565 [2024-07-25 15:25:03.653128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.565 qpair failed and we were unable to recover it. 00:29:11.565 [2024-07-25 15:25:03.653221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.565 [2024-07-25 15:25:03.653227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.565 qpair failed and we were unable to recover it. 00:29:11.565 [2024-07-25 15:25:03.653546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.565 [2024-07-25 15:25:03.653553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.565 qpair failed and we were unable to recover it. 00:29:11.565 [2024-07-25 15:25:03.654036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.565 [2024-07-25 15:25:03.654043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.565 qpair failed and we were unable to recover it. 00:29:11.565 [2024-07-25 15:25:03.654494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.565 [2024-07-25 15:25:03.654501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.565 qpair failed and we were unable to recover it. 
00:29:11.565 [2024-07-25 15:25:03.654971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.565 [2024-07-25 15:25:03.654979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.565 qpair failed and we were unable to recover it.
00:29:11.565 [2024-07-25 15:25:03.655355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.565 [2024-07-25 15:25:03.655364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.565 qpair failed and we were unable to recover it.
00:29:11.565 [2024-07-25 15:25:03.655823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.565 [2024-07-25 15:25:03.655831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.565 qpair failed and we were unable to recover it.
00:29:11.565 [2024-07-25 15:25:03.656265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.565 [2024-07-25 15:25:03.656271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.565 qpair failed and we were unable to recover it.
00:29:11.565 [2024-07-25 15:25:03.656609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.565 [2024-07-25 15:25:03.656616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.565 qpair failed and we were unable to recover it.
00:29:11.565 [2024-07-25 15:25:03.656985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.565 [2024-07-25 15:25:03.656992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.565 qpair failed and we were unable to recover it.
00:29:11.565 [2024-07-25 15:25:03.657425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.565 [2024-07-25 15:25:03.657432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.565 qpair failed and we were unable to recover it.
00:29:11.565 [2024-07-25 15:25:03.657769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.565 [2024-07-25 15:25:03.657776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.565 qpair failed and we were unable to recover it.
00:29:11.565 [2024-07-25 15:25:03.658254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.565 [2024-07-25 15:25:03.658261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.565 qpair failed and we were unable to recover it.
00:29:11.565 [2024-07-25 15:25:03.658597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.565 [2024-07-25 15:25:03.658604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.565 qpair failed and we were unable to recover it.
00:29:11.565 [2024-07-25 15:25:03.658844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.565 [2024-07-25 15:25:03.658853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.565 qpair failed and we were unable to recover it.
00:29:11.565 [2024-07-25 15:25:03.659298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.565 [2024-07-25 15:25:03.659306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.565 qpair failed and we were unable to recover it.
00:29:11.565 [2024-07-25 15:25:03.659744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.565 [2024-07-25 15:25:03.659751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.565 qpair failed and we were unable to recover it.
00:29:11.565 [2024-07-25 15:25:03.660191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.565 [2024-07-25 15:25:03.660198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.565 qpair failed and we were unable to recover it.
00:29:11.565 [2024-07-25 15:25:03.660636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.565 [2024-07-25 15:25:03.660643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.565 qpair failed and we were unable to recover it.
00:29:11.565 [2024-07-25 15:25:03.661073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.565 [2024-07-25 15:25:03.661080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.565 qpair failed and we were unable to recover it.
00:29:11.565 [2024-07-25 15:25:03.661618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.565 [2024-07-25 15:25:03.661646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.565 qpair failed and we were unable to recover it.
00:29:11.565 [2024-07-25 15:25:03.662102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.565 [2024-07-25 15:25:03.662111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.565 qpair failed and we were unable to recover it.
00:29:11.565 [2024-07-25 15:25:03.662466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.566 [2024-07-25 15:25:03.662474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.566 qpair failed and we were unable to recover it.
00:29:11.566 [2024-07-25 15:25:03.662919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.566 [2024-07-25 15:25:03.662927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.566 qpair failed and we were unable to recover it.
00:29:11.566 [2024-07-25 15:25:03.663504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.566 [2024-07-25 15:25:03.663532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.566 qpair failed and we were unable to recover it.
00:29:11.566 [2024-07-25 15:25:03.664073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.566 [2024-07-25 15:25:03.664081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.566 qpair failed and we were unable to recover it.
00:29:11.566 [2024-07-25 15:25:03.664675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.566 [2024-07-25 15:25:03.664702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.566 qpair failed and we were unable to recover it.
00:29:11.566 [2024-07-25 15:25:03.665069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.566 [2024-07-25 15:25:03.665078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.566 qpair failed and we were unable to recover it.
00:29:11.566 [2024-07-25 15:25:03.665563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.566 [2024-07-25 15:25:03.665590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.566 qpair failed and we were unable to recover it.
00:29:11.566 [2024-07-25 15:25:03.665959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.566 [2024-07-25 15:25:03.665968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.566 qpair failed and we were unable to recover it.
00:29:11.566 [2024-07-25 15:25:03.666520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.566 [2024-07-25 15:25:03.666548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.566 qpair failed and we were unable to recover it.
00:29:11.566 [2024-07-25 15:25:03.666776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.566 [2024-07-25 15:25:03.666785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.566 qpair failed and we were unable to recover it.
00:29:11.566 [2024-07-25 15:25:03.667273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.566 [2024-07-25 15:25:03.667281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.566 qpair failed and we were unable to recover it.
00:29:11.566 [2024-07-25 15:25:03.667723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.566 [2024-07-25 15:25:03.667731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.566 qpair failed and we were unable to recover it.
00:29:11.566 [2024-07-25 15:25:03.667974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.566 [2024-07-25 15:25:03.667981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.566 qpair failed and we were unable to recover it.
00:29:11.566 [2024-07-25 15:25:03.668240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.566 [2024-07-25 15:25:03.668249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.566 qpair failed and we were unable to recover it.
00:29:11.566 [2024-07-25 15:25:03.668462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.566 [2024-07-25 15:25:03.668469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.566 qpair failed and we were unable to recover it.
00:29:11.566 [2024-07-25 15:25:03.668926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.566 [2024-07-25 15:25:03.668933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.566 qpair failed and we were unable to recover it.
00:29:11.566 [2024-07-25 15:25:03.669157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.566 [2024-07-25 15:25:03.669163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.566 qpair failed and we were unable to recover it.
00:29:11.566 [2024-07-25 15:25:03.669388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.566 [2024-07-25 15:25:03.669395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.566 qpair failed and we were unable to recover it.
00:29:11.566 [2024-07-25 15:25:03.669894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.566 [2024-07-25 15:25:03.669900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.566 qpair failed and we were unable to recover it.
00:29:11.566 [2024-07-25 15:25:03.670334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.566 [2024-07-25 15:25:03.670340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.566 qpair failed and we were unable to recover it.
00:29:11.566 [2024-07-25 15:25:03.670775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.566 [2024-07-25 15:25:03.670782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.566 qpair failed and we were unable to recover it.
00:29:11.566 [2024-07-25 15:25:03.671001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.566 [2024-07-25 15:25:03.671008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.566 qpair failed and we were unable to recover it.
00:29:11.566 [2024-07-25 15:25:03.671455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.566 [2024-07-25 15:25:03.671462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.566 qpair failed and we were unable to recover it.
00:29:11.566 [2024-07-25 15:25:03.671896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.566 [2024-07-25 15:25:03.671903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.566 qpair failed and we were unable to recover it.
00:29:11.566 [2024-07-25 15:25:03.672343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.566 [2024-07-25 15:25:03.672350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.566 qpair failed and we were unable to recover it.
00:29:11.566 [2024-07-25 15:25:03.672818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.566 [2024-07-25 15:25:03.672824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.566 qpair failed and we were unable to recover it.
00:29:11.566 [2024-07-25 15:25:03.673263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.566 [2024-07-25 15:25:03.673269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.566 qpair failed and we were unable to recover it.
00:29:11.566 15:25:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:29:11.566 [2024-07-25 15:25:03.673707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.566 [2024-07-25 15:25:03.673715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.566 qpair failed and we were unable to recover it.
00:29:11.566 15:25:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:29:11.566 [2024-07-25 15:25:03.673936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.566 [2024-07-25 15:25:03.673944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.566 qpair failed and we were unable to recover it.
00:29:11.566 15:25:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:11.566 [2024-07-25 15:25:03.674425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.566 15:25:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:11.566 [2024-07-25 15:25:03.674433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.566 qpair failed and we were unable to recover it.
00:29:11.566 [2024-07-25 15:25:03.674545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.566 [2024-07-25 15:25:03.674551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.566 qpair failed and we were unable to recover it.
00:29:11.566 [2024-07-25 15:25:03.674996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.566 [2024-07-25 15:25:03.675003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.566 qpair failed and we were unable to recover it.
00:29:11.566 [2024-07-25 15:25:03.675433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.566 [2024-07-25 15:25:03.675440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.566 qpair failed and we were unable to recover it.
00:29:11.566 [2024-07-25 15:25:03.675875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.566 [2024-07-25 15:25:03.675882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.566 qpair failed and we were unable to recover it.
00:29:11.567 [2024-07-25 15:25:03.676314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.567 [2024-07-25 15:25:03.676321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.567 qpair failed and we were unable to recover it.
00:29:11.567 [2024-07-25 15:25:03.676766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.567 [2024-07-25 15:25:03.676774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.567 qpair failed and we were unable to recover it.
00:29:11.567 [2024-07-25 15:25:03.677233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.567 [2024-07-25 15:25:03.677241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.567 qpair failed and we were unable to recover it.
00:29:11.567 [2024-07-25 15:25:03.677685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.567 [2024-07-25 15:25:03.677691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.567 qpair failed and we were unable to recover it.
00:29:11.567 [2024-07-25 15:25:03.678127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.567 [2024-07-25 15:25:03.678133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.567 qpair failed and we were unable to recover it.
00:29:11.567 [2024-07-25 15:25:03.678612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.567 [2024-07-25 15:25:03.678619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.567 qpair failed and we were unable to recover it.
00:29:11.567 [2024-07-25 15:25:03.679049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.567 [2024-07-25 15:25:03.679056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.567 qpair failed and we were unable to recover it.
00:29:11.567 [2024-07-25 15:25:03.679513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.567 [2024-07-25 15:25:03.679520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.567 qpair failed and we were unable to recover it.
00:29:11.567 [2024-07-25 15:25:03.679959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.567 [2024-07-25 15:25:03.679966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.567 qpair failed and we were unable to recover it.
00:29:11.567 [2024-07-25 15:25:03.680581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.567 [2024-07-25 15:25:03.680609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.567 qpair failed and we were unable to recover it.
00:29:11.567 [2024-07-25 15:25:03.681066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.567 [2024-07-25 15:25:03.681074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.567 qpair failed and we were unable to recover it.
00:29:11.567 [2024-07-25 15:25:03.681611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.567 [2024-07-25 15:25:03.681639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.567 qpair failed and we were unable to recover it.
00:29:11.567 [2024-07-25 15:25:03.681867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.567 [2024-07-25 15:25:03.681875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.567 qpair failed and we were unable to recover it.
00:29:11.567 [2024-07-25 15:25:03.682074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.567 [2024-07-25 15:25:03.682086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.567 qpair failed and we were unable to recover it.
00:29:11.567 [2024-07-25 15:25:03.682310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.567 [2024-07-25 15:25:03.682317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.567 qpair failed and we were unable to recover it.
00:29:11.567 [2024-07-25 15:25:03.682575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.567 [2024-07-25 15:25:03.682588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.567 qpair failed and we were unable to recover it.
00:29:11.567 [2024-07-25 15:25:03.683053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.567 [2024-07-25 15:25:03.683060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.567 qpair failed and we were unable to recover it.
00:29:11.567 [2024-07-25 15:25:03.683498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.567 [2024-07-25 15:25:03.683505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.567 qpair failed and we were unable to recover it.
00:29:11.567 [2024-07-25 15:25:03.683856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.567 [2024-07-25 15:25:03.683863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.567 qpair failed and we were unable to recover it.
00:29:11.567 [2024-07-25 15:25:03.684071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.567 [2024-07-25 15:25:03.684080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.567 qpair failed and we were unable to recover it.
00:29:11.567 [2024-07-25 15:25:03.684394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.567 [2024-07-25 15:25:03.684403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.567 qpair failed and we were unable to recover it.
00:29:11.567 [2024-07-25 15:25:03.684859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.567 [2024-07-25 15:25:03.684866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.567 qpair failed and we were unable to recover it.
00:29:11.567 [2024-07-25 15:25:03.685301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.567 [2024-07-25 15:25:03.685307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.567 qpair failed and we were unable to recover it.
00:29:11.567 [2024-07-25 15:25:03.685837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.567 [2024-07-25 15:25:03.685843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.567 qpair failed and we were unable to recover it.
00:29:11.567 [2024-07-25 15:25:03.686299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.567 [2024-07-25 15:25:03.686306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.567 qpair failed and we were unable to recover it.
00:29:11.567 [2024-07-25 15:25:03.686763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.567 [2024-07-25 15:25:03.686770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.567 qpair failed and we were unable to recover it.
00:29:11.567 [2024-07-25 15:25:03.687293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.567 [2024-07-25 15:25:03.687300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.567 qpair failed and we were unable to recover it.
00:29:11.567 [2024-07-25 15:25:03.687782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.567 [2024-07-25 15:25:03.687789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.567 qpair failed and we were unable to recover it.
00:29:11.567 [2024-07-25 15:25:03.688222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.567 [2024-07-25 15:25:03.688229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.567 qpair failed and we were unable to recover it.
00:29:11.567 [2024-07-25 15:25:03.688693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.567 [2024-07-25 15:25:03.688699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.567 qpair failed and we were unable to recover it.
00:29:11.567 [2024-07-25 15:25:03.689136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.567 [2024-07-25 15:25:03.689143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.567 qpair failed and we were unable to recover it.
00:29:11.567 [2024-07-25 15:25:03.689516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.567 [2024-07-25 15:25:03.689522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.567 qpair failed and we were unable to recover it.
00:29:11.567 Malloc0
00:29:11.567 [2024-07-25 15:25:03.689873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.567 [2024-07-25 15:25:03.689880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.567 qpair failed and we were unable to recover it.
00:29:11.567 [2024-07-25 15:25:03.690323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.567 [2024-07-25 15:25:03.690330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.567 qpair failed and we were unable to recover it.
00:29:11.567 15:25:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:11.567 15:25:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:29:11.568 [2024-07-25 15:25:03.690819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.568 [2024-07-25 15:25:03.690826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.568 qpair failed and we were unable to recover it.
00:29:11.568 15:25:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:11.568 [2024-07-25 15:25:03.691215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.568 [2024-07-25 15:25:03.691223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.568 qpair failed and we were unable to recover it.
00:29:11.568 15:25:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:11.568 [2024-07-25 15:25:03.691666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.568 [2024-07-25 15:25:03.691672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.568 qpair failed and we were unable to recover it.
00:29:11.568 [2024-07-25 15:25:03.692160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.568 [2024-07-25 15:25:03.692167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.568 qpair failed and we were unable to recover it.
00:29:11.568 [2024-07-25 15:25:03.692602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.568 [2024-07-25 15:25:03.692609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.568 qpair failed and we were unable to recover it.
00:29:11.568 [2024-07-25 15:25:03.692996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.568 [2024-07-25 15:25:03.693003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.568 qpair failed and we were unable to recover it. 00:29:11.568 [2024-07-25 15:25:03.693452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.568 [2024-07-25 15:25:03.693459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.568 qpair failed and we were unable to recover it. 00:29:11.568 [2024-07-25 15:25:03.693702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.568 [2024-07-25 15:25:03.693708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.568 qpair failed and we were unable to recover it. 00:29:11.568 [2024-07-25 15:25:03.694088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.568 [2024-07-25 15:25:03.694095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.568 qpair failed and we were unable to recover it. 00:29:11.568 [2024-07-25 15:25:03.694553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.568 [2024-07-25 15:25:03.694560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.568 qpair failed and we were unable to recover it. 
00:29:11.568 [2024-07-25 15:25:03.694918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.568 [2024-07-25 15:25:03.694925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.568 qpair failed and we were unable to recover it. 00:29:11.568 [2024-07-25 15:25:03.695373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.568 [2024-07-25 15:25:03.695379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.568 qpair failed and we were unable to recover it. 00:29:11.568 [2024-07-25 15:25:03.695521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.568 [2024-07-25 15:25:03.695528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.568 qpair failed and we were unable to recover it. 00:29:11.568 [2024-07-25 15:25:03.695625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.568 [2024-07-25 15:25:03.695637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.568 qpair failed and we were unable to recover it. 00:29:11.568 [2024-07-25 15:25:03.696108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.568 [2024-07-25 15:25:03.696116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.568 qpair failed and we were unable to recover it. 
00:29:11.568 [2024-07-25 15:25:03.696551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.568 [2024-07-25 15:25:03.696558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.568 qpair failed and we were unable to recover it. 00:29:11.568 [2024-07-25 15:25:03.696995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.568 [2024-07-25 15:25:03.697001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.568 qpair failed and we were unable to recover it. 00:29:11.568 [2024-07-25 15:25:03.697120] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:11.568 [2024-07-25 15:25:03.697458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.568 [2024-07-25 15:25:03.697466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.568 qpair failed and we were unable to recover it. 00:29:11.568 [2024-07-25 15:25:03.697948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.568 [2024-07-25 15:25:03.697955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.568 qpair failed and we were unable to recover it. 00:29:11.568 [2024-07-25 15:25:03.698431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.568 [2024-07-25 15:25:03.698438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.568 qpair failed and we were unable to recover it. 
00:29:11.568 [2024-07-25 15:25:03.698871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.568 [2024-07-25 15:25:03.698877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.568 qpair failed and we were unable to recover it. 00:29:11.568 [2024-07-25 15:25:03.699186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.568 [2024-07-25 15:25:03.699192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.568 qpair failed and we were unable to recover it. 00:29:11.568 [2024-07-25 15:25:03.699642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.568 [2024-07-25 15:25:03.699649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.568 qpair failed and we were unable to recover it. 00:29:11.568 [2024-07-25 15:25:03.700089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.568 [2024-07-25 15:25:03.700095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.568 qpair failed and we were unable to recover it. 00:29:11.568 [2024-07-25 15:25:03.700598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.568 [2024-07-25 15:25:03.700605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.568 qpair failed and we were unable to recover it. 
00:29:11.568 [2024-07-25 15:25:03.701076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.568 [2024-07-25 15:25:03.701082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.568 qpair failed and we were unable to recover it. 00:29:11.568 [2024-07-25 15:25:03.701620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.568 [2024-07-25 15:25:03.701647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.568 qpair failed and we were unable to recover it. 00:29:11.568 [2024-07-25 15:25:03.701883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.568 [2024-07-25 15:25:03.701892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.568 qpair failed and we were unable to recover it. 00:29:11.568 [2024-07-25 15:25:03.702474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.568 [2024-07-25 15:25:03.702502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.568 qpair failed and we were unable to recover it. 00:29:11.568 [2024-07-25 15:25:03.702739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.568 [2024-07-25 15:25:03.702748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.568 qpair failed and we were unable to recover it. 
00:29:11.568 [2024-07-25 15:25:03.703215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.568 [2024-07-25 15:25:03.703223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.568 qpair failed and we were unable to recover it. 00:29:11.568 [2024-07-25 15:25:03.703643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.568 [2024-07-25 15:25:03.703650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.568 qpair failed and we were unable to recover it. 00:29:11.568 [2024-07-25 15:25:03.704138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.568 [2024-07-25 15:25:03.704144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.568 qpair failed and we were unable to recover it. 00:29:11.568 [2024-07-25 15:25:03.704631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.568 [2024-07-25 15:25:03.704638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.568 qpair failed and we were unable to recover it. 00:29:11.568 [2024-07-25 15:25:03.705131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.568 [2024-07-25 15:25:03.705138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.568 qpair failed and we were unable to recover it. 
00:29:11.569 [2024-07-25 15:25:03.705691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.569 [2024-07-25 15:25:03.705718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.569 qpair failed and we were unable to recover it. 00:29:11.569 15:25:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:11.569 [2024-07-25 15:25:03.706176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.569 [2024-07-25 15:25:03.706192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.569 qpair failed and we were unable to recover it. 00:29:11.569 [2024-07-25 15:25:03.706620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.569 [2024-07-25 15:25:03.706648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.569 qpair failed and we were unable to recover it. 00:29:11.569 15:25:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:11.569 [2024-07-25 15:25:03.706832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.569 [2024-07-25 15:25:03.706842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.569 qpair failed and we were unable to recover it. 
00:29:11.569 15:25:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:11.569 [2024-07-25 15:25:03.707282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.569 [2024-07-25 15:25:03.707290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.569 qpair failed and we were unable to recover it. 00:29:11.569 15:25:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:11.569 [2024-07-25 15:25:03.707666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.569 [2024-07-25 15:25:03.707673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.569 qpair failed and we were unable to recover it. 00:29:11.569 [2024-07-25 15:25:03.708119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.569 [2024-07-25 15:25:03.708126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.569 qpair failed and we were unable to recover it. 00:29:11.569 [2024-07-25 15:25:03.708586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.569 [2024-07-25 15:25:03.708593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.569 qpair failed and we were unable to recover it. 
00:29:11.569 [2024-07-25 15:25:03.709014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.569 [2024-07-25 15:25:03.709020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.569 qpair failed and we were unable to recover it. 00:29:11.569 [2024-07-25 15:25:03.709510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.569 [2024-07-25 15:25:03.709517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.569 qpair failed and we were unable to recover it. 00:29:11.569 [2024-07-25 15:25:03.709983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.569 [2024-07-25 15:25:03.709990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.569 qpair failed and we were unable to recover it. 00:29:11.569 [2024-07-25 15:25:03.710447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.569 [2024-07-25 15:25:03.710475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.569 qpair failed and we were unable to recover it. 00:29:11.569 [2024-07-25 15:25:03.710980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.569 [2024-07-25 15:25:03.710989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.569 qpair failed and we were unable to recover it. 
00:29:11.569 [2024-07-25 15:25:03.711558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.569 [2024-07-25 15:25:03.711585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.569 qpair failed and we were unable to recover it. 00:29:11.569 [2024-07-25 15:25:03.712042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.569 [2024-07-25 15:25:03.712051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.569 qpair failed and we were unable to recover it. 00:29:11.569 [2024-07-25 15:25:03.712605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.569 [2024-07-25 15:25:03.712632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.569 qpair failed and we were unable to recover it. 00:29:11.569 [2024-07-25 15:25:03.712913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.569 [2024-07-25 15:25:03.712921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.569 qpair failed and we were unable to recover it. 00:29:11.569 [2024-07-25 15:25:03.713478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.569 [2024-07-25 15:25:03.713506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.569 qpair failed and we were unable to recover it. 
00:29:11.569 [2024-07-25 15:25:03.713740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.569 [2024-07-25 15:25:03.713749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.569 qpair failed and we were unable to recover it. 00:29:11.569 [2024-07-25 15:25:03.714205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.569 [2024-07-25 15:25:03.714212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.569 qpair failed and we were unable to recover it. 00:29:11.569 [2024-07-25 15:25:03.714773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.569 [2024-07-25 15:25:03.714780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.569 qpair failed and we were unable to recover it. 00:29:11.569 [2024-07-25 15:25:03.715131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.569 [2024-07-25 15:25:03.715138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.569 qpair failed and we were unable to recover it. 00:29:11.569 [2024-07-25 15:25:03.715548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.569 [2024-07-25 15:25:03.715578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.569 qpair failed and we were unable to recover it. 
00:29:11.569 [2024-07-25 15:25:03.715939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.569 [2024-07-25 15:25:03.715948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.569 qpair failed and we were unable to recover it. 00:29:11.569 [2024-07-25 15:25:03.716396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.569 [2024-07-25 15:25:03.716404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.569 qpair failed and we were unable to recover it. 00:29:11.569 [2024-07-25 15:25:03.716842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.569 [2024-07-25 15:25:03.716848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.569 qpair failed and we were unable to recover it. 00:29:11.569 [2024-07-25 15:25:03.717389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.569 [2024-07-25 15:25:03.717416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.569 qpair failed and we were unable to recover it. 00:29:11.569 [2024-07-25 15:25:03.717919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.569 [2024-07-25 15:25:03.717927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.569 qpair failed and we were unable to recover it. 
00:29:11.569 15:25:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:11.569 [2024-07-25 15:25:03.718483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.569 [2024-07-25 15:25:03.718511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.569 qpair failed and we were unable to recover it. 00:29:11.569 15:25:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:11.569 15:25:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:11.569 [2024-07-25 15:25:03.718970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.569 [2024-07-25 15:25:03.718979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.569 qpair failed and we were unable to recover it. 00:29:11.569 15:25:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:11.569 [2024-07-25 15:25:03.719602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.569 [2024-07-25 15:25:03.719630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.569 qpair failed and we were unable to recover it. 00:29:11.569 [2024-07-25 15:25:03.720093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.569 [2024-07-25 15:25:03.720102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.569 qpair failed and we were unable to recover it. 
00:29:11.569 [2024-07-25 15:25:03.720461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.570 [2024-07-25 15:25:03.720468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.570 qpair failed and we were unable to recover it. 00:29:11.570 [2024-07-25 15:25:03.720913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.570 [2024-07-25 15:25:03.720920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.570 qpair failed and we were unable to recover it. 00:29:11.570 [2024-07-25 15:25:03.721479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.570 [2024-07-25 15:25:03.721506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.570 qpair failed and we were unable to recover it. 00:29:11.570 [2024-07-25 15:25:03.722064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.570 [2024-07-25 15:25:03.722073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.570 qpair failed and we were unable to recover it. 00:29:11.570 [2024-07-25 15:25:03.722663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.570 [2024-07-25 15:25:03.722691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.570 qpair failed and we were unable to recover it. 
00:29:11.570 [2024-07-25 15:25:03.723057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.570 [2024-07-25 15:25:03.723065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.570 qpair failed and we were unable to recover it. 00:29:11.570 [2024-07-25 15:25:03.723501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.570 [2024-07-25 15:25:03.723528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.570 qpair failed and we were unable to recover it. 00:29:11.570 [2024-07-25 15:25:03.724012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.570 [2024-07-25 15:25:03.724020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.570 qpair failed and we were unable to recover it. 00:29:11.570 [2024-07-25 15:25:03.724596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.570 [2024-07-25 15:25:03.724624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.570 qpair failed and we were unable to recover it. 00:29:11.570 [2024-07-25 15:25:03.725078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.570 [2024-07-25 15:25:03.725087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.570 qpair failed and we were unable to recover it. 
00:29:11.570 [2024-07-25 15:25:03.725545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.570 [2024-07-25 15:25:03.725572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.570 qpair failed and we were unable to recover it. 00:29:11.570 [2024-07-25 15:25:03.725931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.570 [2024-07-25 15:25:03.725939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.570 qpair failed and we were unable to recover it. 00:29:11.570 [2024-07-25 15:25:03.726411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.570 [2024-07-25 15:25:03.726439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.570 qpair failed and we were unable to recover it. 00:29:11.570 [2024-07-25 15:25:03.726908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.570 [2024-07-25 15:25:03.726917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.570 qpair failed and we were unable to recover it. 00:29:11.570 [2024-07-25 15:25:03.727382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.570 [2024-07-25 15:25:03.727389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.570 qpair failed and we were unable to recover it. 
00:29:11.570 [2024-07-25 15:25:03.727895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.570 [2024-07-25 15:25:03.727902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.570 qpair failed and we were unable to recover it. 00:29:11.570 [2024-07-25 15:25:03.728433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.570 [2024-07-25 15:25:03.728461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.570 qpair failed and we were unable to recover it. 00:29:11.570 [2024-07-25 15:25:03.728915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.570 [2024-07-25 15:25:03.728924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.570 qpair failed and we were unable to recover it. 00:29:11.570 [2024-07-25 15:25:03.729406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.570 [2024-07-25 15:25:03.729434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.570 qpair failed and we were unable to recover it. 00:29:11.570 [2024-07-25 15:25:03.729929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.570 [2024-07-25 15:25:03.729938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.570 qpair failed and we were unable to recover it. 
00:29:11.570 15:25:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:11.570 [2024-07-25 15:25:03.730489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.570 [2024-07-25 15:25:03.730517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.570 qpair failed and we were unable to recover it. 00:29:11.570 15:25:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:11.570 [2024-07-25 15:25:03.730787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.570 [2024-07-25 15:25:03.730795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.570 qpair failed and we were unable to recover it. 00:29:11.570 15:25:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:11.570 [2024-07-25 15:25:03.731055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.570 [2024-07-25 15:25:03.731062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.570 qpair failed and we were unable to recover it. 
00:29:11.570 15:25:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:11.570 [2024-07-25 15:25:03.731551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.570 [2024-07-25 15:25:03.731558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.570 qpair failed and we were unable to recover it. 00:29:11.570 [2024-07-25 15:25:03.732004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.570 [2024-07-25 15:25:03.732010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.570 qpair failed and we were unable to recover it. 00:29:11.570 [2024-07-25 15:25:03.732401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.570 [2024-07-25 15:25:03.732429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.570 qpair failed and we were unable to recover it. 00:29:11.570 [2024-07-25 15:25:03.732879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.570 [2024-07-25 15:25:03.732888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.570 qpair failed and we were unable to recover it. 00:29:11.832 [2024-07-25 15:25:03.733471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.832 [2024-07-25 15:25:03.733499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420 00:29:11.832 qpair failed and we were unable to recover it. 
00:29:11.832 [2024-07-25 15:25:03.733953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.832 [2024-07-25 15:25:03.733962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.833 qpair failed and we were unable to recover it.
00:29:11.833 [2024-07-25 15:25:03.734446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.833 [2024-07-25 15:25:03.734474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.833 qpair failed and we were unable to recover it.
00:29:11.833 [2024-07-25 15:25:03.734964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.833 [2024-07-25 15:25:03.734973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.833 qpair failed and we were unable to recover it.
00:29:11.833 [2024-07-25 15:25:03.735507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.833 [2024-07-25 15:25:03.735534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.833 qpair failed and we were unable to recover it.
00:29:11.833 [2024-07-25 15:25:03.735970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.833 [2024-07-25 15:25:03.735978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.833 qpair failed and we were unable to recover it.
00:29:11.833 [2024-07-25 15:25:03.736430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.833 [2024-07-25 15:25:03.736457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.833 qpair failed and we were unable to recover it.
00:29:11.833 [2024-07-25 15:25:03.736915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.833 [2024-07-25 15:25:03.736923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.833 qpair failed and we were unable to recover it.
00:29:11.833 [2024-07-25 15:25:03.737403] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:29:11.833 [2024-07-25 15:25:03.737472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.833 [2024-07-25 15:25:03.737498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc90000b90 with addr=10.0.0.2, port=4420
00:29:11.833 qpair failed and we were unable to recover it.
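The `errno = 111` failures up to this point are `ECONNREFUSED`: the initiator's `connect()` races the target, which only begins accepting connections at the "NVMe/TCP Target Listening" notice above. A minimal, hedged sketch of that retry behavior (illustrative only, not SPDK code; the function name and parameters are made up for this example):

```python
import errno
import socket
import time

def connect_with_retry(addr, port, attempts=5, delay=0.2):
    """Try to connect to (addr, port), retrying while the listener is
    not up yet -- the log above shows connect() failing with errno 111
    (ECONNREFUSED on Linux) until the target starts listening."""
    for _ in range(attempts):
        try:
            return socket.create_connection((addr, port), timeout=1)
        except ConnectionRefusedError:
            # Nothing is listening on the port yet; back off and retry.
            time.sleep(delay)
    # Listener never came up within the retry budget.
    return None
```

If the target never starts listening, the sketch gives up and returns `None`, roughly mirroring the repeated "qpair failed and we were unable to recover it" lines in the log.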
00:29:11.833 15:25:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:11.833 15:25:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:29:11.833 15:25:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:11.833 15:25:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:11.833 [2024-07-25 15:25:03.748009] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:11.833 [2024-07-25 15:25:03.748136] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:11.833 [2024-07-25 15:25:03.748152] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:11.833 [2024-07-25 15:25:03.748162] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:11.833 [2024-07-25 15:25:03.748166] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90
00:29:11.833 [2024-07-25 15:25:03.748182] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:11.833 qpair failed and we were unable to recover it.
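The `sct 1, sc 130` pair in these Fabrics CONNECT failures can be read against the NVMe-oF spec: SCT 1 is the Command Specific status type, and for the Connect command SC 0x82 (130 decimal) is "Connect Invalid Parameters", which is consistent with the target-side "Unknown controller ID 0x1" complaint. A hedged sketch of a decoder for such log pairs (helper name and table layout are my own, not SPDK's; the mappings follow the NVMe-oF Connect command status codes):

```python
# Status Code Type values from the NVMe base spec.
SCT_NAMES = {
    0: "Generic Command Status",
    1: "Command Specific Status",
    2: "Media and Data Integrity Errors",
    7: "Vendor Specific",
}

# Command-specific status codes defined for the NVMe-oF Connect command.
CONNECT_SC_NAMES = {
    0x80: "Connect Incompatible Format",
    0x81: "Connect Controller Busy",
    0x82: "Connect Invalid Parameters",
    0x83: "Connect Restart Discovery",
    0x84: "Connect Invalid Host",
}

def decode_connect_status(sct: int, sc: int) -> str:
    """Map an (sct, sc) pair from a failed Fabrics Connect to names."""
    sct_name = SCT_NAMES.get(sct, f"Unknown SCT {sct}")
    sc_name = CONNECT_SC_NAMES.get(sc, f"SC 0x{sc:02x}")
    return f"{sct_name} / {sc_name}"
```

For the pair in this log, `decode_connect_status(1, 130)` yields "Command Specific Status / Connect Invalid Parameters".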
00:29:11.833 15:25:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:11.833 15:25:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 435789
00:29:11.833 [2024-07-25 15:25:03.757977] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:11.833 [2024-07-25 15:25:03.758064] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:11.833 [2024-07-25 15:25:03.758079] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:11.833 [2024-07-25 15:25:03.758084] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:11.833 [2024-07-25 15:25:03.758088] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90
00:29:11.833 [2024-07-25 15:25:03.758101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:11.833 qpair failed and we were unable to recover it.
00:29:11.833 [2024-07-25 15:25:03.767960] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.833 [2024-07-25 15:25:03.768050] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.833 [2024-07-25 15:25:03.768063] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.833 [2024-07-25 15:25:03.768069] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.833 [2024-07-25 15:25:03.768073] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:11.833 [2024-07-25 15:25:03.768085] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:11.833 qpair failed and we were unable to recover it. 
00:29:11.833 [2024-07-25 15:25:03.777952] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.833 [2024-07-25 15:25:03.778046] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.833 [2024-07-25 15:25:03.778060] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.833 [2024-07-25 15:25:03.778065] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.833 [2024-07-25 15:25:03.778069] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:11.833 [2024-07-25 15:25:03.778081] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:11.833 qpair failed and we were unable to recover it. 
00:29:11.833 [2024-07-25 15:25:03.787958] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.833 [2024-07-25 15:25:03.788044] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.833 [2024-07-25 15:25:03.788057] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.833 [2024-07-25 15:25:03.788062] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.833 [2024-07-25 15:25:03.788067] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:11.833 [2024-07-25 15:25:03.788081] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:11.833 qpair failed and we were unable to recover it. 
00:29:11.833 [2024-07-25 15:25:03.798014] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.833 [2024-07-25 15:25:03.798098] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.833 [2024-07-25 15:25:03.798111] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.833 [2024-07-25 15:25:03.798117] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.833 [2024-07-25 15:25:03.798121] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:11.833 [2024-07-25 15:25:03.798132] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:11.833 qpair failed and we were unable to recover it. 
00:29:11.833 [2024-07-25 15:25:03.808020] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.833 [2024-07-25 15:25:03.808103] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.833 [2024-07-25 15:25:03.808116] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.833 [2024-07-25 15:25:03.808121] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.833 [2024-07-25 15:25:03.808125] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:11.833 [2024-07-25 15:25:03.808137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:11.833 qpair failed and we were unable to recover it. 
00:29:11.833 [2024-07-25 15:25:03.818017] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.833 [2024-07-25 15:25:03.818100] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.833 [2024-07-25 15:25:03.818113] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.833 [2024-07-25 15:25:03.818119] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.833 [2024-07-25 15:25:03.818123] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:11.833 [2024-07-25 15:25:03.818134] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:11.833 qpair failed and we were unable to recover it. 
00:29:11.833 [2024-07-25 15:25:03.827948] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.833 [2024-07-25 15:25:03.828034] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.834 [2024-07-25 15:25:03.828047] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.834 [2024-07-25 15:25:03.828052] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.834 [2024-07-25 15:25:03.828056] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:11.834 [2024-07-25 15:25:03.828067] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:11.834 qpair failed and we were unable to recover it. 
00:29:11.834 [2024-07-25 15:25:03.837977] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.834 [2024-07-25 15:25:03.838062] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.834 [2024-07-25 15:25:03.838074] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.834 [2024-07-25 15:25:03.838079] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.834 [2024-07-25 15:25:03.838084] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:11.834 [2024-07-25 15:25:03.838095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:11.834 qpair failed and we were unable to recover it. 
00:29:11.834 [2024-07-25 15:25:03.848093] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.834 [2024-07-25 15:25:03.848180] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.834 [2024-07-25 15:25:03.848193] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.834 [2024-07-25 15:25:03.848198] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.834 [2024-07-25 15:25:03.848206] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:11.834 [2024-07-25 15:25:03.848217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:11.834 qpair failed and we were unable to recover it. 
00:29:11.834 [2024-07-25 15:25:03.858309] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.834 [2024-07-25 15:25:03.858395] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.834 [2024-07-25 15:25:03.858408] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.834 [2024-07-25 15:25:03.858413] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.834 [2024-07-25 15:25:03.858417] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:11.834 [2024-07-25 15:25:03.858428] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:11.834 qpair failed and we were unable to recover it. 
00:29:11.834 [2024-07-25 15:25:03.868208] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.834 [2024-07-25 15:25:03.868293] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.834 [2024-07-25 15:25:03.868306] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.834 [2024-07-25 15:25:03.868311] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.834 [2024-07-25 15:25:03.868315] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:11.834 [2024-07-25 15:25:03.868327] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:11.834 qpair failed and we were unable to recover it. 
00:29:11.834 [2024-07-25 15:25:03.878087] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.834 [2024-07-25 15:25:03.878168] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.834 [2024-07-25 15:25:03.878180] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.834 [2024-07-25 15:25:03.878186] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.834 [2024-07-25 15:25:03.878193] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:11.834 [2024-07-25 15:25:03.878208] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:11.834 qpair failed and we were unable to recover it. 
00:29:11.834 [2024-07-25 15:25:03.888159] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.834 [2024-07-25 15:25:03.888237] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.834 [2024-07-25 15:25:03.888250] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.834 [2024-07-25 15:25:03.888255] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.834 [2024-07-25 15:25:03.888259] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:11.834 [2024-07-25 15:25:03.888270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:11.834 qpair failed and we were unable to recover it. 
00:29:11.834 [2024-07-25 15:25:03.898288] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.834 [2024-07-25 15:25:03.898396] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.834 [2024-07-25 15:25:03.898408] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.834 [2024-07-25 15:25:03.898414] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.834 [2024-07-25 15:25:03.898418] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:11.834 [2024-07-25 15:25:03.898429] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:11.834 qpair failed and we were unable to recover it. 
00:29:11.834 [2024-07-25 15:25:03.908311] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.834 [2024-07-25 15:25:03.908397] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.834 [2024-07-25 15:25:03.908410] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.834 [2024-07-25 15:25:03.908415] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.834 [2024-07-25 15:25:03.908419] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:11.834 [2024-07-25 15:25:03.908430] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:11.834 qpair failed and we were unable to recover it. 
00:29:11.834 [2024-07-25 15:25:03.918342] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.834 [2024-07-25 15:25:03.918431] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.834 [2024-07-25 15:25:03.918445] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.834 [2024-07-25 15:25:03.918450] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.834 [2024-07-25 15:25:03.918454] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:11.834 [2024-07-25 15:25:03.918466] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:11.834 qpair failed and we were unable to recover it. 
00:29:11.834 [2024-07-25 15:25:03.928267] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.834 [2024-07-25 15:25:03.928349] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.834 [2024-07-25 15:25:03.928362] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.834 [2024-07-25 15:25:03.928367] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.834 [2024-07-25 15:25:03.928372] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:11.834 [2024-07-25 15:25:03.928383] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:11.834 qpair failed and we were unable to recover it. 
00:29:11.834 [2024-07-25 15:25:03.938350] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.834 [2024-07-25 15:25:03.938435] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.834 [2024-07-25 15:25:03.938448] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.834 [2024-07-25 15:25:03.938453] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.834 [2024-07-25 15:25:03.938457] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:11.834 [2024-07-25 15:25:03.938468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:11.834 qpair failed and we were unable to recover it. 
00:29:11.834 [2024-07-25 15:25:03.948422] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.834 [2024-07-25 15:25:03.948506] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.834 [2024-07-25 15:25:03.948518] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.834 [2024-07-25 15:25:03.948523] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.834 [2024-07-25 15:25:03.948528] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:11.834 [2024-07-25 15:25:03.948539] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:11.834 qpair failed and we were unable to recover it. 
00:29:11.835 [2024-07-25 15:25:03.958351] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.835 [2024-07-25 15:25:03.958439] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.835 [2024-07-25 15:25:03.958451] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.835 [2024-07-25 15:25:03.958456] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.835 [2024-07-25 15:25:03.958460] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:11.835 [2024-07-25 15:25:03.958472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:11.835 qpair failed and we were unable to recover it. 
00:29:11.835 [2024-07-25 15:25:03.968481] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.835 [2024-07-25 15:25:03.968560] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.835 [2024-07-25 15:25:03.968573] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.835 [2024-07-25 15:25:03.968581] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.835 [2024-07-25 15:25:03.968585] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:11.835 [2024-07-25 15:25:03.968597] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:11.835 qpair failed and we were unable to recover it. 
00:29:11.835 [2024-07-25 15:25:03.978506] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.835 [2024-07-25 15:25:03.978589] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.835 [2024-07-25 15:25:03.978601] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.835 [2024-07-25 15:25:03.978606] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.835 [2024-07-25 15:25:03.978610] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:11.835 [2024-07-25 15:25:03.978621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:11.835 qpair failed and we were unable to recover it. 
00:29:11.835 [2024-07-25 15:25:03.988535] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.835 [2024-07-25 15:25:03.988621] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.835 [2024-07-25 15:25:03.988633] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.835 [2024-07-25 15:25:03.988638] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.835 [2024-07-25 15:25:03.988643] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:11.835 [2024-07-25 15:25:03.988654] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:11.835 qpair failed and we were unable to recover it. 
00:29:11.835 [2024-07-25 15:25:03.998556] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.835 [2024-07-25 15:25:03.998635] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.835 [2024-07-25 15:25:03.998647] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.835 [2024-07-25 15:25:03.998652] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.835 [2024-07-25 15:25:03.998656] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:11.835 [2024-07-25 15:25:03.998667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:11.835 qpair failed and we were unable to recover it. 
00:29:11.835 [2024-07-25 15:25:04.008562] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.835 [2024-07-25 15:25:04.008642] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.835 [2024-07-25 15:25:04.008655] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.835 [2024-07-25 15:25:04.008660] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.835 [2024-07-25 15:25:04.008664] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:11.835 [2024-07-25 15:25:04.008676] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:11.835 qpair failed and we were unable to recover it. 
00:29:11.835 [2024-07-25 15:25:04.018771] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.835 [2024-07-25 15:25:04.018865] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.835 [2024-07-25 15:25:04.018877] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.835 [2024-07-25 15:25:04.018882] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.835 [2024-07-25 15:25:04.018886] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:11.835 [2024-07-25 15:25:04.018897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:11.835 qpair failed and we were unable to recover it. 
00:29:12.097 [2024-07-25 15:25:04.028687] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.097 [2024-07-25 15:25:04.028784] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.097 [2024-07-25 15:25:04.028803] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.097 [2024-07-25 15:25:04.028810] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.097 [2024-07-25 15:25:04.028815] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:12.097 [2024-07-25 15:25:04.028830] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.097 qpair failed and we were unable to recover it. 
00:29:12.097 [2024-07-25 15:25:04.038693] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.097 [2024-07-25 15:25:04.038781] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.097 [2024-07-25 15:25:04.038801] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.097 [2024-07-25 15:25:04.038807] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.097 [2024-07-25 15:25:04.038811] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:12.097 [2024-07-25 15:25:04.038827] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.097 qpair failed and we were unable to recover it. 
00:29:12.097 [2024-07-25 15:25:04.048737] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.097 [2024-07-25 15:25:04.048825] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.097 [2024-07-25 15:25:04.048838] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.097 [2024-07-25 15:25:04.048843] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.097 [2024-07-25 15:25:04.048848] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:12.097 [2024-07-25 15:25:04.048860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.097 qpair failed and we were unable to recover it. 
00:29:12.097 [2024-07-25 15:25:04.058608] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.097 [2024-07-25 15:25:04.058693] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.097 [2024-07-25 15:25:04.058712] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.097 [2024-07-25 15:25:04.058718] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.097 [2024-07-25 15:25:04.058722] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:12.097 [2024-07-25 15:25:04.058734] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.097 qpair failed and we were unable to recover it. 
00:29:12.097 [2024-07-25 15:25:04.068713] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.097 [2024-07-25 15:25:04.068799] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.097 [2024-07-25 15:25:04.068812] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.097 [2024-07-25 15:25:04.068817] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.097 [2024-07-25 15:25:04.068822] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:12.097 [2024-07-25 15:25:04.068833] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.097 qpair failed and we were unable to recover it. 
00:29:12.097 [2024-07-25 15:25:04.078720] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.097 [2024-07-25 15:25:04.078798] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.097 [2024-07-25 15:25:04.078811] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.097 [2024-07-25 15:25:04.078816] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.097 [2024-07-25 15:25:04.078820] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:12.097 [2024-07-25 15:25:04.078831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.097 qpair failed and we were unable to recover it. 
00:29:12.097 [2024-07-25 15:25:04.088760] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.097 [2024-07-25 15:25:04.088843] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.097 [2024-07-25 15:25:04.088858] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.097 [2024-07-25 15:25:04.088863] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.097 [2024-07-25 15:25:04.088867] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:12.097 [2024-07-25 15:25:04.088879] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.097 qpair failed and we were unable to recover it. 
00:29:12.097 [2024-07-25 15:25:04.098781] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.097 [2024-07-25 15:25:04.098872] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.097 [2024-07-25 15:25:04.098891] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.097 [2024-07-25 15:25:04.098897] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.097 [2024-07-25 15:25:04.098902] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:12.098 [2024-07-25 15:25:04.098917] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.098 qpair failed and we were unable to recover it. 
00:29:12.098 [2024-07-25 15:25:04.108712] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.098 [2024-07-25 15:25:04.108798] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.098 [2024-07-25 15:25:04.108817] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.098 [2024-07-25 15:25:04.108824] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.098 [2024-07-25 15:25:04.108828] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:12.098 [2024-07-25 15:25:04.108843] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.098 qpair failed and we were unable to recover it. 
00:29:12.098 [2024-07-25 15:25:04.118781] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.098 [2024-07-25 15:25:04.118860] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.098 [2024-07-25 15:25:04.118874] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.098 [2024-07-25 15:25:04.118880] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.098 [2024-07-25 15:25:04.118884] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:12.098 [2024-07-25 15:25:04.118896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.098 qpair failed and we were unable to recover it. 
00:29:12.098 [2024-07-25 15:25:04.128834] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.098 [2024-07-25 15:25:04.128915] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.098 [2024-07-25 15:25:04.128928] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.098 [2024-07-25 15:25:04.128933] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.098 [2024-07-25 15:25:04.128937] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:12.098 [2024-07-25 15:25:04.128949] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.098 qpair failed and we were unable to recover it. 
00:29:12.098 [2024-07-25 15:25:04.138919] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.098 [2024-07-25 15:25:04.139000] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.098 [2024-07-25 15:25:04.139012] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.098 [2024-07-25 15:25:04.139017] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.098 [2024-07-25 15:25:04.139022] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:12.098 [2024-07-25 15:25:04.139033] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.098 qpair failed and we were unable to recover it. 
00:29:12.098 [2024-07-25 15:25:04.148909] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.098 [2024-07-25 15:25:04.148998] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.098 [2024-07-25 15:25:04.149015] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.098 [2024-07-25 15:25:04.149020] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.098 [2024-07-25 15:25:04.149024] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:12.098 [2024-07-25 15:25:04.149035] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.098 qpair failed and we were unable to recover it. 
00:29:12.098 [2024-07-25 15:25:04.158900] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.098 [2024-07-25 15:25:04.158983] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.098 [2024-07-25 15:25:04.159002] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.098 [2024-07-25 15:25:04.159008] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.098 [2024-07-25 15:25:04.159012] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:12.098 [2024-07-25 15:25:04.159028] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.098 qpair failed and we were unable to recover it. 
00:29:12.098 [2024-07-25 15:25:04.168971] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.098 [2024-07-25 15:25:04.169054] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.098 [2024-07-25 15:25:04.169072] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.098 [2024-07-25 15:25:04.169079] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.098 [2024-07-25 15:25:04.169083] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:12.098 [2024-07-25 15:25:04.169098] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.098 qpair failed and we were unable to recover it. 
00:29:12.098 [2024-07-25 15:25:04.179028] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.098 [2024-07-25 15:25:04.179127] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.098 [2024-07-25 15:25:04.179141] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.098 [2024-07-25 15:25:04.179146] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.098 [2024-07-25 15:25:04.179150] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:12.098 [2024-07-25 15:25:04.179163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.098 qpair failed and we were unable to recover it. 
00:29:12.098 [2024-07-25 15:25:04.188989] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.098 [2024-07-25 15:25:04.189080] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.098 [2024-07-25 15:25:04.189093] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.098 [2024-07-25 15:25:04.189098] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.098 [2024-07-25 15:25:04.189102] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:12.098 [2024-07-25 15:25:04.189117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.098 qpair failed and we were unable to recover it. 
00:29:12.098 [2024-07-25 15:25:04.199142] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.098 [2024-07-25 15:25:04.199234] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.098 [2024-07-25 15:25:04.199247] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.098 [2024-07-25 15:25:04.199252] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.098 [2024-07-25 15:25:04.199256] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:12.098 [2024-07-25 15:25:04.199268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.098 qpair failed and we were unable to recover it. 
00:29:12.098 [2024-07-25 15:25:04.209092] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.098 [2024-07-25 15:25:04.209172] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.098 [2024-07-25 15:25:04.209186] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.098 [2024-07-25 15:25:04.209191] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.098 [2024-07-25 15:25:04.209195] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:12.098 [2024-07-25 15:25:04.209211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.098 qpair failed and we were unable to recover it. 
00:29:12.098 [2024-07-25 15:25:04.219124] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.098 [2024-07-25 15:25:04.219208] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.098 [2024-07-25 15:25:04.219221] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.098 [2024-07-25 15:25:04.219226] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.098 [2024-07-25 15:25:04.219231] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:12.098 [2024-07-25 15:25:04.219242] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.098 qpair failed and we were unable to recover it. 
00:29:12.098 [2024-07-25 15:25:04.229075] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.098 [2024-07-25 15:25:04.229164] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.099 [2024-07-25 15:25:04.229177] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.099 [2024-07-25 15:25:04.229182] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.099 [2024-07-25 15:25:04.229187] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:12.099 [2024-07-25 15:25:04.229198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.099 qpair failed and we were unable to recover it. 
00:29:12.099 [2024-07-25 15:25:04.239207] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.099 [2024-07-25 15:25:04.239286] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.099 [2024-07-25 15:25:04.239304] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.099 [2024-07-25 15:25:04.239309] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.099 [2024-07-25 15:25:04.239313] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:12.099 [2024-07-25 15:25:04.239325] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.099 qpair failed and we were unable to recover it. 
00:29:12.099 [2024-07-25 15:25:04.249205] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.099 [2024-07-25 15:25:04.249338] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.099 [2024-07-25 15:25:04.249351] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.099 [2024-07-25 15:25:04.249356] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.099 [2024-07-25 15:25:04.249360] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:12.099 [2024-07-25 15:25:04.249372] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.099 qpair failed and we were unable to recover it. 
00:29:12.099 [2024-07-25 15:25:04.259265] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.099 [2024-07-25 15:25:04.259350] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.099 [2024-07-25 15:25:04.259363] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.099 [2024-07-25 15:25:04.259368] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.099 [2024-07-25 15:25:04.259373] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:12.099 [2024-07-25 15:25:04.259384] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.099 qpair failed and we were unable to recover it. 
00:29:12.099 [2024-07-25 15:25:04.269153] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.099 [2024-07-25 15:25:04.269242] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.099 [2024-07-25 15:25:04.269255] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.099 [2024-07-25 15:25:04.269260] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.099 [2024-07-25 15:25:04.269264] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:12.099 [2024-07-25 15:25:04.269275] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.099 qpair failed and we were unable to recover it. 
00:29:12.099 [2024-07-25 15:25:04.279238] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.099 [2024-07-25 15:25:04.279319] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.099 [2024-07-25 15:25:04.279332] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.099 [2024-07-25 15:25:04.279337] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.099 [2024-07-25 15:25:04.279344] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:12.099 [2024-07-25 15:25:04.279355] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.099 qpair failed and we were unable to recover it. 
00:29:12.362 [2024-07-25 15:25:04.289350] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.362 [2024-07-25 15:25:04.289433] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.362 [2024-07-25 15:25:04.289446] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.362 [2024-07-25 15:25:04.289451] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.362 [2024-07-25 15:25:04.289455] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:12.362 [2024-07-25 15:25:04.289467] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.362 qpair failed and we were unable to recover it. 
00:29:12.362 [2024-07-25 15:25:04.299244] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.362 [2024-07-25 15:25:04.299331] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.362 [2024-07-25 15:25:04.299345] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.362 [2024-07-25 15:25:04.299350] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.362 [2024-07-25 15:25:04.299354] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:12.362 [2024-07-25 15:25:04.299366] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.362 qpair failed and we were unable to recover it. 
00:29:12.362 [2024-07-25 15:25:04.309398] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.362 [2024-07-25 15:25:04.309482] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.362 [2024-07-25 15:25:04.309495] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.362 [2024-07-25 15:25:04.309500] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.362 [2024-07-25 15:25:04.309504] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:12.362 [2024-07-25 15:25:04.309516] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.362 qpair failed and we were unable to recover it. 
00:29:12.362 [2024-07-25 15:25:04.319487] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.362 [2024-07-25 15:25:04.319592] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.362 [2024-07-25 15:25:04.319605] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.362 [2024-07-25 15:25:04.319610] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.362 [2024-07-25 15:25:04.319614] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:12.362 [2024-07-25 15:25:04.319626] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.362 qpair failed and we were unable to recover it. 
00:29:12.362 [2024-07-25 15:25:04.329468] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.362 [2024-07-25 15:25:04.329571] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.362 [2024-07-25 15:25:04.329584] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.362 [2024-07-25 15:25:04.329589] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.362 [2024-07-25 15:25:04.329593] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90
00:29:12.362 [2024-07-25 15:25:04.329605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:12.362 qpair failed and we were unable to recover it.
00:29:12.362 [2024-07-25 15:25:04.339515] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.362 [2024-07-25 15:25:04.339599] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.362 [2024-07-25 15:25:04.339612] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.362 [2024-07-25 15:25:04.339617] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.362 [2024-07-25 15:25:04.339621] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90
00:29:12.362 [2024-07-25 15:25:04.339632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:12.362 qpair failed and we were unable to recover it.
00:29:12.362 [2024-07-25 15:25:04.349544] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.362 [2024-07-25 15:25:04.349629] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.362 [2024-07-25 15:25:04.349642] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.362 [2024-07-25 15:25:04.349647] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.362 [2024-07-25 15:25:04.349651] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90
00:29:12.362 [2024-07-25 15:25:04.349662] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:12.362 qpair failed and we were unable to recover it.
00:29:12.362 [2024-07-25 15:25:04.359565] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.362 [2024-07-25 15:25:04.359676] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.362 [2024-07-25 15:25:04.359689] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.362 [2024-07-25 15:25:04.359694] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.362 [2024-07-25 15:25:04.359698] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90
00:29:12.362 [2024-07-25 15:25:04.359710] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:12.362 qpair failed and we were unable to recover it.
00:29:12.362 [2024-07-25 15:25:04.369537] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.362 [2024-07-25 15:25:04.369621] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.362 [2024-07-25 15:25:04.369640] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.362 [2024-07-25 15:25:04.369650] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.362 [2024-07-25 15:25:04.369655] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90
00:29:12.362 [2024-07-25 15:25:04.369671] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:12.362 qpair failed and we were unable to recover it.
00:29:12.362 [2024-07-25 15:25:04.379622] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.362 [2024-07-25 15:25:04.379704] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.362 [2024-07-25 15:25:04.379718] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.362 [2024-07-25 15:25:04.379723] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.362 [2024-07-25 15:25:04.379728] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90
00:29:12.362 [2024-07-25 15:25:04.379740] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:12.362 qpair failed and we were unable to recover it.
00:29:12.362 [2024-07-25 15:25:04.389620] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.362 [2024-07-25 15:25:04.389705] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.362 [2024-07-25 15:25:04.389719] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.362 [2024-07-25 15:25:04.389724] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.363 [2024-07-25 15:25:04.389728] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90
00:29:12.363 [2024-07-25 15:25:04.389740] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:12.363 qpair failed and we were unable to recover it.
00:29:12.363 [2024-07-25 15:25:04.399683] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.363 [2024-07-25 15:25:04.399764] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.363 [2024-07-25 15:25:04.399777] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.363 [2024-07-25 15:25:04.399782] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.363 [2024-07-25 15:25:04.399787] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90
00:29:12.363 [2024-07-25 15:25:04.399798] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:12.363 qpair failed and we were unable to recover it.
00:29:12.363 [2024-07-25 15:25:04.409568] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.363 [2024-07-25 15:25:04.409646] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.363 [2024-07-25 15:25:04.409659] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.363 [2024-07-25 15:25:04.409665] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.363 [2024-07-25 15:25:04.409669] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90
00:29:12.363 [2024-07-25 15:25:04.409681] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:12.363 qpair failed and we were unable to recover it.
00:29:12.363 [2024-07-25 15:25:04.419712] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.363 [2024-07-25 15:25:04.419797] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.363 [2024-07-25 15:25:04.419810] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.363 [2024-07-25 15:25:04.419815] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.363 [2024-07-25 15:25:04.419819] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90
00:29:12.363 [2024-07-25 15:25:04.419831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:12.363 qpair failed and we were unable to recover it.
00:29:12.363 [2024-07-25 15:25:04.429781] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.363 [2024-07-25 15:25:04.429896] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.363 [2024-07-25 15:25:04.429915] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.363 [2024-07-25 15:25:04.429921] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.363 [2024-07-25 15:25:04.429926] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90
00:29:12.363 [2024-07-25 15:25:04.429942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:12.363 qpair failed and we were unable to recover it.
00:29:12.363 [2024-07-25 15:25:04.439671] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.363 [2024-07-25 15:25:04.439754] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.363 [2024-07-25 15:25:04.439773] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.363 [2024-07-25 15:25:04.439780] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.363 [2024-07-25 15:25:04.439784] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90
00:29:12.363 [2024-07-25 15:25:04.439800] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:12.363 qpair failed and we were unable to recover it.
00:29:12.363 [2024-07-25 15:25:04.449812] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.363 [2024-07-25 15:25:04.449902] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.363 [2024-07-25 15:25:04.449921] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.363 [2024-07-25 15:25:04.449927] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.363 [2024-07-25 15:25:04.449931] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90
00:29:12.363 [2024-07-25 15:25:04.449947] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:12.363 qpair failed and we were unable to recover it.
00:29:12.363 [2024-07-25 15:25:04.459848] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.363 [2024-07-25 15:25:04.459936] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.363 [2024-07-25 15:25:04.459955] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.363 [2024-07-25 15:25:04.459965] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.363 [2024-07-25 15:25:04.459970] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90
00:29:12.363 [2024-07-25 15:25:04.459986] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:12.363 qpair failed and we were unable to recover it.
00:29:12.363 [2024-07-25 15:25:04.469865] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.363 [2024-07-25 15:25:04.469964] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.363 [2024-07-25 15:25:04.469983] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.363 [2024-07-25 15:25:04.469990] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.363 [2024-07-25 15:25:04.469994] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90
00:29:12.363 [2024-07-25 15:25:04.470009] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:12.363 qpair failed and we were unable to recover it.
00:29:12.363 [2024-07-25 15:25:04.479780] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.363 [2024-07-25 15:25:04.479868] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.363 [2024-07-25 15:25:04.479887] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.363 [2024-07-25 15:25:04.479894] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.363 [2024-07-25 15:25:04.479898] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90
00:29:12.363 [2024-07-25 15:25:04.479913] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:12.363 qpair failed and we were unable to recover it.
00:29:12.363 [2024-07-25 15:25:04.489927] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.363 [2024-07-25 15:25:04.490006] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.363 [2024-07-25 15:25:04.490020] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.363 [2024-07-25 15:25:04.490025] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.363 [2024-07-25 15:25:04.490029] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90
00:29:12.363 [2024-07-25 15:25:04.490041] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:12.363 qpair failed and we were unable to recover it.
00:29:12.363 [2024-07-25 15:25:04.499974] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.363 [2024-07-25 15:25:04.500060] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.363 [2024-07-25 15:25:04.500073] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.363 [2024-07-25 15:25:04.500079] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.363 [2024-07-25 15:25:04.500083] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90
00:29:12.363 [2024-07-25 15:25:04.500094] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:12.363 qpair failed and we were unable to recover it.
00:29:12.363 [2024-07-25 15:25:04.509966] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.363 [2024-07-25 15:25:04.510054] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.363 [2024-07-25 15:25:04.510068] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.363 [2024-07-25 15:25:04.510073] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.363 [2024-07-25 15:25:04.510077] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90
00:29:12.363 [2024-07-25 15:25:04.510088] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:12.363 qpair failed and we were unable to recover it.
00:29:12.363 [2024-07-25 15:25:04.519938] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.364 [2024-07-25 15:25:04.520021] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.364 [2024-07-25 15:25:04.520034] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.364 [2024-07-25 15:25:04.520039] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.364 [2024-07-25 15:25:04.520043] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90
00:29:12.364 [2024-07-25 15:25:04.520054] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:12.364 qpair failed and we were unable to recover it.
00:29:12.364 [2024-07-25 15:25:04.530047] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.364 [2024-07-25 15:25:04.530129] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.364 [2024-07-25 15:25:04.530143] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.364 [2024-07-25 15:25:04.530147] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.364 [2024-07-25 15:25:04.530152] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90
00:29:12.364 [2024-07-25 15:25:04.530163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:12.364 qpair failed and we were unable to recover it.
00:29:12.364 [2024-07-25 15:25:04.540080] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.364 [2024-07-25 15:25:04.540160] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.364 [2024-07-25 15:25:04.540173] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.364 [2024-07-25 15:25:04.540179] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.364 [2024-07-25 15:25:04.540183] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90
00:29:12.364 [2024-07-25 15:25:04.540194] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:12.364 qpair failed and we were unable to recover it.
00:29:12.364 [2024-07-25 15:25:04.550095] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.364 [2024-07-25 15:25:04.550206] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.364 [2024-07-25 15:25:04.550220] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.364 [2024-07-25 15:25:04.550225] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.364 [2024-07-25 15:25:04.550229] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90
00:29:12.364 [2024-07-25 15:25:04.550240] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:12.364 qpair failed and we were unable to recover it.
00:29:12.626 [2024-07-25 15:25:04.560137] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.626 [2024-07-25 15:25:04.560219] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.626 [2024-07-25 15:25:04.560233] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.626 [2024-07-25 15:25:04.560238] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.626 [2024-07-25 15:25:04.560242] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90
00:29:12.626 [2024-07-25 15:25:04.560254] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:12.626 qpair failed and we were unable to recover it.
00:29:12.626 [2024-07-25 15:25:04.570177] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.626 [2024-07-25 15:25:04.570265] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.626 [2024-07-25 15:25:04.570278] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.626 [2024-07-25 15:25:04.570283] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.626 [2024-07-25 15:25:04.570287] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90
00:29:12.626 [2024-07-25 15:25:04.570299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:12.626 qpair failed and we were unable to recover it.
00:29:12.626 [2024-07-25 15:25:04.580198] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.626 [2024-07-25 15:25:04.580314] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.626 [2024-07-25 15:25:04.580327] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.626 [2024-07-25 15:25:04.580332] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.626 [2024-07-25 15:25:04.580336] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90
00:29:12.626 [2024-07-25 15:25:04.580348] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:12.626 qpair failed and we were unable to recover it.
00:29:12.626 [2024-07-25 15:25:04.590143] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.626 [2024-07-25 15:25:04.590232] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.626 [2024-07-25 15:25:04.590245] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.626 [2024-07-25 15:25:04.590250] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.626 [2024-07-25 15:25:04.590254] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90
00:29:12.626 [2024-07-25 15:25:04.590270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:12.626 qpair failed and we were unable to recover it.
00:29:12.626 [2024-07-25 15:25:04.600244] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.627 [2024-07-25 15:25:04.600324] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.627 [2024-07-25 15:25:04.600337] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.627 [2024-07-25 15:25:04.600342] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.627 [2024-07-25 15:25:04.600346] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90
00:29:12.627 [2024-07-25 15:25:04.600358] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:12.627 qpair failed and we were unable to recover it.
00:29:12.627 [2024-07-25 15:25:04.610258] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.627 [2024-07-25 15:25:04.610337] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.627 [2024-07-25 15:25:04.610350] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.627 [2024-07-25 15:25:04.610355] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.627 [2024-07-25 15:25:04.610359] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90
00:29:12.627 [2024-07-25 15:25:04.610371] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:12.627 qpair failed and we were unable to recover it.
00:29:12.627 [2024-07-25 15:25:04.620216] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.627 [2024-07-25 15:25:04.620299] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.627 [2024-07-25 15:25:04.620312] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.627 [2024-07-25 15:25:04.620317] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.627 [2024-07-25 15:25:04.620321] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90
00:29:12.627 [2024-07-25 15:25:04.620333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:12.627 qpair failed and we were unable to recover it.
00:29:12.627 [2024-07-25 15:25:04.630382] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.627 [2024-07-25 15:25:04.630472] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.627 [2024-07-25 15:25:04.630485] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.627 [2024-07-25 15:25:04.630491] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.627 [2024-07-25 15:25:04.630495] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90
00:29:12.627 [2024-07-25 15:25:04.630507] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:12.627 qpair failed and we were unable to recover it.
00:29:12.627 [2024-07-25 15:25:04.640360] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.627 [2024-07-25 15:25:04.640436] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.627 [2024-07-25 15:25:04.640451] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.627 [2024-07-25 15:25:04.640457] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.627 [2024-07-25 15:25:04.640461] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90
00:29:12.627 [2024-07-25 15:25:04.640472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:12.627 qpair failed and we were unable to recover it.
00:29:12.627 [2024-07-25 15:25:04.650345] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.627 [2024-07-25 15:25:04.650430] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.627 [2024-07-25 15:25:04.650443] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.627 [2024-07-25 15:25:04.650448] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.627 [2024-07-25 15:25:04.650452] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90
00:29:12.627 [2024-07-25 15:25:04.650464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:12.627 qpair failed and we were unable to recover it.
00:29:12.627 [2024-07-25 15:25:04.660401] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.627 [2024-07-25 15:25:04.660483] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.627 [2024-07-25 15:25:04.660496] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.627 [2024-07-25 15:25:04.660501] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.627 [2024-07-25 15:25:04.660505] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90
00:29:12.627 [2024-07-25 15:25:04.660516] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:12.627 qpair failed and we were unable to recover it.
00:29:12.627 [2024-07-25 15:25:04.670432] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.627 [2024-07-25 15:25:04.670566] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.627 [2024-07-25 15:25:04.670578] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.627 [2024-07-25 15:25:04.670584] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.627 [2024-07-25 15:25:04.670589] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90
00:29:12.627 [2024-07-25 15:25:04.670600] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:12.627 qpair failed and we were unable to recover it.
00:29:12.627 [2024-07-25 15:25:04.680464] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.627 [2024-07-25 15:25:04.680553] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.627 [2024-07-25 15:25:04.680565] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.627 [2024-07-25 15:25:04.680570] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.627 [2024-07-25 15:25:04.680577] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90
00:29:12.627 [2024-07-25 15:25:04.680589] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:12.627 qpair failed and we were unable to recover it.
00:29:12.627 [2024-07-25 15:25:04.690469] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.627 [2024-07-25 15:25:04.690574] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.627 [2024-07-25 15:25:04.690587] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.627 [2024-07-25 15:25:04.690592] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.627 [2024-07-25 15:25:04.690596] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:12.627 [2024-07-25 15:25:04.690608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.627 qpair failed and we were unable to recover it. 
00:29:12.627 [2024-07-25 15:25:04.700510] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.627 [2024-07-25 15:25:04.700594] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.627 [2024-07-25 15:25:04.700607] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.627 [2024-07-25 15:25:04.700612] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.627 [2024-07-25 15:25:04.700617] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:12.627 [2024-07-25 15:25:04.700628] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.627 qpair failed and we were unable to recover it. 
00:29:12.627 [2024-07-25 15:25:04.710409] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.628 [2024-07-25 15:25:04.710635] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.628 [2024-07-25 15:25:04.710649] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.628 [2024-07-25 15:25:04.710654] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.628 [2024-07-25 15:25:04.710658] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:12.628 [2024-07-25 15:25:04.710669] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.628 qpair failed and we were unable to recover it. 
00:29:12.628 [2024-07-25 15:25:04.720585] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.628 [2024-07-25 15:25:04.720703] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.628 [2024-07-25 15:25:04.720715] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.628 [2024-07-25 15:25:04.720721] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.628 [2024-07-25 15:25:04.720725] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:12.628 [2024-07-25 15:25:04.720736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.628 qpair failed and we were unable to recover it. 
00:29:12.628 [2024-07-25 15:25:04.730579] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.628 [2024-07-25 15:25:04.730684] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.628 [2024-07-25 15:25:04.730697] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.628 [2024-07-25 15:25:04.730702] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.628 [2024-07-25 15:25:04.730707] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:12.628 [2024-07-25 15:25:04.730718] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.628 qpair failed and we were unable to recover it. 
00:29:12.628 [2024-07-25 15:25:04.740619] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.628 [2024-07-25 15:25:04.740702] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.628 [2024-07-25 15:25:04.740715] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.628 [2024-07-25 15:25:04.740720] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.628 [2024-07-25 15:25:04.740724] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:12.628 [2024-07-25 15:25:04.740735] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.628 qpair failed and we were unable to recover it. 
00:29:12.628 [2024-07-25 15:25:04.750626] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.628 [2024-07-25 15:25:04.750711] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.628 [2024-07-25 15:25:04.750724] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.628 [2024-07-25 15:25:04.750729] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.628 [2024-07-25 15:25:04.750733] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:12.628 [2024-07-25 15:25:04.750745] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.628 qpair failed and we were unable to recover it. 
00:29:12.628 [2024-07-25 15:25:04.760687] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.628 [2024-07-25 15:25:04.760769] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.628 [2024-07-25 15:25:04.760789] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.628 [2024-07-25 15:25:04.760796] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.628 [2024-07-25 15:25:04.760800] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:12.628 [2024-07-25 15:25:04.760815] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.628 qpair failed and we were unable to recover it. 
00:29:12.628 [2024-07-25 15:25:04.770716] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.628 [2024-07-25 15:25:04.770801] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.628 [2024-07-25 15:25:04.770820] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.628 [2024-07-25 15:25:04.770826] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.628 [2024-07-25 15:25:04.770834] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:12.628 [2024-07-25 15:25:04.770850] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.628 qpair failed and we were unable to recover it. 
00:29:12.628 [2024-07-25 15:25:04.780729] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.628 [2024-07-25 15:25:04.780817] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.628 [2024-07-25 15:25:04.780836] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.628 [2024-07-25 15:25:04.780842] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.628 [2024-07-25 15:25:04.780847] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:12.628 [2024-07-25 15:25:04.780862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.628 qpair failed and we were unable to recover it. 
00:29:12.628 [2024-07-25 15:25:04.790792] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.628 [2024-07-25 15:25:04.790880] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.628 [2024-07-25 15:25:04.790896] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.628 [2024-07-25 15:25:04.790901] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.628 [2024-07-25 15:25:04.790905] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:12.628 [2024-07-25 15:25:04.790918] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.628 qpair failed and we were unable to recover it. 
00:29:12.628 [2024-07-25 15:25:04.800805] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.628 [2024-07-25 15:25:04.800926] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.628 [2024-07-25 15:25:04.800940] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.628 [2024-07-25 15:25:04.800945] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.628 [2024-07-25 15:25:04.800950] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:12.628 [2024-07-25 15:25:04.800962] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.628 qpair failed and we were unable to recover it. 
00:29:12.628 [2024-07-25 15:25:04.810803] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.628 [2024-07-25 15:25:04.810882] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.628 [2024-07-25 15:25:04.810895] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.628 [2024-07-25 15:25:04.810900] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.628 [2024-07-25 15:25:04.810904] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:12.628 [2024-07-25 15:25:04.810916] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.628 qpair failed and we were unable to recover it. 
00:29:12.890 [2024-07-25 15:25:04.820838] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.890 [2024-07-25 15:25:04.820938] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.890 [2024-07-25 15:25:04.820952] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.890 [2024-07-25 15:25:04.820957] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.890 [2024-07-25 15:25:04.820961] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:12.890 [2024-07-25 15:25:04.820973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.890 qpair failed and we were unable to recover it. 
00:29:12.890 [2024-07-25 15:25:04.830889] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.890 [2024-07-25 15:25:04.830977] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.890 [2024-07-25 15:25:04.830991] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.890 [2024-07-25 15:25:04.830996] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.890 [2024-07-25 15:25:04.831000] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:12.890 [2024-07-25 15:25:04.831012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.890 qpair failed and we were unable to recover it. 
00:29:12.891 [2024-07-25 15:25:04.840910] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.891 [2024-07-25 15:25:04.840997] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.891 [2024-07-25 15:25:04.841011] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.891 [2024-07-25 15:25:04.841016] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.891 [2024-07-25 15:25:04.841020] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:12.891 [2024-07-25 15:25:04.841031] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.891 qpair failed and we were unable to recover it. 
00:29:12.891 [2024-07-25 15:25:04.850968] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.891 [2024-07-25 15:25:04.851045] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.891 [2024-07-25 15:25:04.851058] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.891 [2024-07-25 15:25:04.851063] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.891 [2024-07-25 15:25:04.851068] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:12.891 [2024-07-25 15:25:04.851079] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.891 qpair failed and we were unable to recover it. 
00:29:12.891 [2024-07-25 15:25:04.860985] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.891 [2024-07-25 15:25:04.861066] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.891 [2024-07-25 15:25:04.861079] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.891 [2024-07-25 15:25:04.861087] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.891 [2024-07-25 15:25:04.861091] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:12.891 [2024-07-25 15:25:04.861103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.891 qpair failed and we were unable to recover it. 
00:29:12.891 [2024-07-25 15:25:04.870994] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.891 [2024-07-25 15:25:04.871096] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.891 [2024-07-25 15:25:04.871115] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.891 [2024-07-25 15:25:04.871120] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.891 [2024-07-25 15:25:04.871124] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:12.891 [2024-07-25 15:25:04.871136] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.891 qpair failed and we were unable to recover it. 
00:29:12.891 [2024-07-25 15:25:04.881031] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.891 [2024-07-25 15:25:04.881123] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.891 [2024-07-25 15:25:04.881136] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.891 [2024-07-25 15:25:04.881141] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.891 [2024-07-25 15:25:04.881145] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:12.891 [2024-07-25 15:25:04.881156] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.891 qpair failed and we were unable to recover it. 
00:29:12.891 [2024-07-25 15:25:04.891047] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.891 [2024-07-25 15:25:04.891143] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.891 [2024-07-25 15:25:04.891157] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.891 [2024-07-25 15:25:04.891163] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.891 [2024-07-25 15:25:04.891167] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:12.891 [2024-07-25 15:25:04.891179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.891 qpair failed and we were unable to recover it. 
00:29:12.891 [2024-07-25 15:25:04.901095] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.891 [2024-07-25 15:25:04.901174] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.891 [2024-07-25 15:25:04.901186] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.891 [2024-07-25 15:25:04.901192] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.891 [2024-07-25 15:25:04.901196] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:12.891 [2024-07-25 15:25:04.901213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.891 qpair failed and we were unable to recover it. 
00:29:12.891 [2024-07-25 15:25:04.911098] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.891 [2024-07-25 15:25:04.911185] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.891 [2024-07-25 15:25:04.911198] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.891 [2024-07-25 15:25:04.911208] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.891 [2024-07-25 15:25:04.911212] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:12.891 [2024-07-25 15:25:04.911225] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.891 qpair failed and we were unable to recover it. 
00:29:12.891 [2024-07-25 15:25:04.921147] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.891 [2024-07-25 15:25:04.921234] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.891 [2024-07-25 15:25:04.921247] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.891 [2024-07-25 15:25:04.921252] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.891 [2024-07-25 15:25:04.921258] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:12.891 [2024-07-25 15:25:04.921270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.891 qpair failed and we were unable to recover it. 
00:29:12.891 [2024-07-25 15:25:04.931094] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.891 [2024-07-25 15:25:04.931177] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.891 [2024-07-25 15:25:04.931190] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.891 [2024-07-25 15:25:04.931195] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.891 [2024-07-25 15:25:04.931199] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:12.891 [2024-07-25 15:25:04.931216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.891 qpair failed and we were unable to recover it. 
00:29:12.891 [2024-07-25 15:25:04.941213] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.891 [2024-07-25 15:25:04.941298] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.891 [2024-07-25 15:25:04.941311] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.891 [2024-07-25 15:25:04.941316] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.891 [2024-07-25 15:25:04.941320] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:12.891 [2024-07-25 15:25:04.941332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.891 qpair failed and we were unable to recover it. 
00:29:12.891 [2024-07-25 15:25:04.951236] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.891 [2024-07-25 15:25:04.951320] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.891 [2024-07-25 15:25:04.951336] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.891 [2024-07-25 15:25:04.951341] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.891 [2024-07-25 15:25:04.951346] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:12.891 [2024-07-25 15:25:04.951357] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.891 qpair failed and we were unable to recover it. 
00:29:12.891 [2024-07-25 15:25:04.961296] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.891 [2024-07-25 15:25:04.961377] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.891 [2024-07-25 15:25:04.961390] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.891 [2024-07-25 15:25:04.961395] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.892 [2024-07-25 15:25:04.961399] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:12.892 [2024-07-25 15:25:04.961411] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.892 qpair failed and we were unable to recover it. 
00:29:12.892 [2024-07-25 15:25:04.971261] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.892 [2024-07-25 15:25:04.971340] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.892 [2024-07-25 15:25:04.971352] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.892 [2024-07-25 15:25:04.971358] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.892 [2024-07-25 15:25:04.971362] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:12.892 [2024-07-25 15:25:04.971373] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.892 qpair failed and we were unable to recover it. 
00:29:12.892 [2024-07-25 15:25:04.981329] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.892 [2024-07-25 15:25:04.981408] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.892 [2024-07-25 15:25:04.981421] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.892 [2024-07-25 15:25:04.981426] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.892 [2024-07-25 15:25:04.981430] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:12.892 [2024-07-25 15:25:04.981442] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.892 qpair failed and we were unable to recover it. 
00:29:12.892 [2024-07-25 15:25:04.991343] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.892 [2024-07-25 15:25:04.991434] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.892 [2024-07-25 15:25:04.991448] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.892 [2024-07-25 15:25:04.991453] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.892 [2024-07-25 15:25:04.991458] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:12.892 [2024-07-25 15:25:04.991473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.892 qpair failed and we were unable to recover it. 
00:29:12.892 [2024-07-25 15:25:05.001415] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.892 [2024-07-25 15:25:05.001521] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.892 [2024-07-25 15:25:05.001533] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.892 [2024-07-25 15:25:05.001539] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.892 [2024-07-25 15:25:05.001543] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:12.892 [2024-07-25 15:25:05.001554] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.892 qpair failed and we were unable to recover it. 
00:29:12.892 [2024-07-25 15:25:05.011284] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.892 [2024-07-25 15:25:05.011367] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.892 [2024-07-25 15:25:05.011380] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.892 [2024-07-25 15:25:05.011385] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.892 [2024-07-25 15:25:05.011389] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:12.892 [2024-07-25 15:25:05.011400] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.892 qpair failed and we were unable to recover it. 
00:29:12.892 [2024-07-25 15:25:05.021439] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.892 [2024-07-25 15:25:05.021519] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.892 [2024-07-25 15:25:05.021532] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.892 [2024-07-25 15:25:05.021537] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.892 [2024-07-25 15:25:05.021541] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:12.892 [2024-07-25 15:25:05.021552] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.892 qpair failed and we were unable to recover it. 
00:29:12.892 [2024-07-25 15:25:05.031472] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.892 [2024-07-25 15:25:05.031554] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.892 [2024-07-25 15:25:05.031567] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.892 [2024-07-25 15:25:05.031572] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.892 [2024-07-25 15:25:05.031576] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:12.892 [2024-07-25 15:25:05.031587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.892 qpair failed and we were unable to recover it. 
00:29:12.892 [2024-07-25 15:25:05.041507] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.892 [2024-07-25 15:25:05.041588] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.892 [2024-07-25 15:25:05.041604] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.892 [2024-07-25 15:25:05.041609] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.892 [2024-07-25 15:25:05.041613] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:12.892 [2024-07-25 15:25:05.041625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.892 qpair failed and we were unable to recover it. 
00:29:12.892 [2024-07-25 15:25:05.051513] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.892 [2024-07-25 15:25:05.051593] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.892 [2024-07-25 15:25:05.051605] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.892 [2024-07-25 15:25:05.051610] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.892 [2024-07-25 15:25:05.051614] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:12.892 [2024-07-25 15:25:05.051626] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.892 qpair failed and we were unable to recover it. 
00:29:12.892 [2024-07-25 15:25:05.061418] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.892 [2024-07-25 15:25:05.061498] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.892 [2024-07-25 15:25:05.061511] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.892 [2024-07-25 15:25:05.061516] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.892 [2024-07-25 15:25:05.061520] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:12.892 [2024-07-25 15:25:05.061531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.892 qpair failed and we were unable to recover it. 
00:29:12.892 [2024-07-25 15:25:05.071589] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.892 [2024-07-25 15:25:05.071675] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.892 [2024-07-25 15:25:05.071688] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.892 [2024-07-25 15:25:05.071693] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.892 [2024-07-25 15:25:05.071697] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:12.892 [2024-07-25 15:25:05.071708] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.892 qpair failed and we were unable to recover it. 
00:29:13.154 [2024-07-25 15:25:05.081599] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.154 [2024-07-25 15:25:05.081683] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.154 [2024-07-25 15:25:05.081696] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.155 [2024-07-25 15:25:05.081701] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.155 [2024-07-25 15:25:05.081708] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:13.155 [2024-07-25 15:25:05.081719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.155 qpair failed and we were unable to recover it. 
00:29:13.155 [2024-07-25 15:25:05.091592] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.155 [2024-07-25 15:25:05.091677] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.155 [2024-07-25 15:25:05.091696] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.155 [2024-07-25 15:25:05.091702] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.155 [2024-07-25 15:25:05.091707] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:13.155 [2024-07-25 15:25:05.091722] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.155 qpair failed and we were unable to recover it. 
00:29:13.155 [2024-07-25 15:25:05.101532] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.155 [2024-07-25 15:25:05.101615] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.155 [2024-07-25 15:25:05.101630] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.155 [2024-07-25 15:25:05.101635] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.155 [2024-07-25 15:25:05.101639] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:13.155 [2024-07-25 15:25:05.101652] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.155 qpair failed and we were unable to recover it. 
00:29:13.155 [2024-07-25 15:25:05.111694] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.155 [2024-07-25 15:25:05.111778] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.155 [2024-07-25 15:25:05.111792] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.155 [2024-07-25 15:25:05.111797] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.155 [2024-07-25 15:25:05.111802] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:13.155 [2024-07-25 15:25:05.111814] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.155 qpair failed and we were unable to recover it. 
00:29:13.155 [2024-07-25 15:25:05.121744] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.155 [2024-07-25 15:25:05.121825] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.155 [2024-07-25 15:25:05.121838] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.155 [2024-07-25 15:25:05.121843] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.155 [2024-07-25 15:25:05.121848] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:13.155 [2024-07-25 15:25:05.121859] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.155 qpair failed and we were unable to recover it. 
00:29:13.155 [2024-07-25 15:25:05.131704] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.155 [2024-07-25 15:25:05.131784] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.155 [2024-07-25 15:25:05.131797] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.155 [2024-07-25 15:25:05.131802] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.155 [2024-07-25 15:25:05.131806] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:13.155 [2024-07-25 15:25:05.131818] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.155 qpair failed and we were unable to recover it. 
00:29:13.155 [2024-07-25 15:25:05.141818] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.155 [2024-07-25 15:25:05.141937] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.155 [2024-07-25 15:25:05.141949] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.155 [2024-07-25 15:25:05.141954] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.155 [2024-07-25 15:25:05.141958] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:13.155 [2024-07-25 15:25:05.141970] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.155 qpair failed and we were unable to recover it. 
00:29:13.155 [2024-07-25 15:25:05.151802] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.155 [2024-07-25 15:25:05.151893] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.155 [2024-07-25 15:25:05.151906] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.155 [2024-07-25 15:25:05.151911] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.155 [2024-07-25 15:25:05.151915] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:13.155 [2024-07-25 15:25:05.151926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.155 qpair failed and we were unable to recover it. 
00:29:13.155 [2024-07-25 15:25:05.161802] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.155 [2024-07-25 15:25:05.161884] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.155 [2024-07-25 15:25:05.161896] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.155 [2024-07-25 15:25:05.161901] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.155 [2024-07-25 15:25:05.161905] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:13.155 [2024-07-25 15:25:05.161917] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.155 qpair failed and we were unable to recover it. 
00:29:13.155 [2024-07-25 15:25:05.171860] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.155 [2024-07-25 15:25:05.171969] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.155 [2024-07-25 15:25:05.171982] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.155 [2024-07-25 15:25:05.171987] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.155 [2024-07-25 15:25:05.171994] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:13.155 [2024-07-25 15:25:05.172006] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.155 qpair failed and we were unable to recover it. 
00:29:13.155 [2024-07-25 15:25:05.181831] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.155 [2024-07-25 15:25:05.181920] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.155 [2024-07-25 15:25:05.181933] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.155 [2024-07-25 15:25:05.181938] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.155 [2024-07-25 15:25:05.181942] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:13.155 [2024-07-25 15:25:05.181953] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.155 qpair failed and we were unable to recover it. 
00:29:13.155 [2024-07-25 15:25:05.191878] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.155 [2024-07-25 15:25:05.191969] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.155 [2024-07-25 15:25:05.191988] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.155 [2024-07-25 15:25:05.191994] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.155 [2024-07-25 15:25:05.191999] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:13.155 [2024-07-25 15:25:05.192014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.155 qpair failed and we were unable to recover it. 
00:29:13.155 [2024-07-25 15:25:05.201943] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.155 [2024-07-25 15:25:05.202027] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.155 [2024-07-25 15:25:05.202046] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.155 [2024-07-25 15:25:05.202053] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.155 [2024-07-25 15:25:05.202057] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:13.155 [2024-07-25 15:25:05.202072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.155 qpair failed and we were unable to recover it. 
00:29:13.155 [2024-07-25 15:25:05.211928] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.156 [2024-07-25 15:25:05.212009] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.156 [2024-07-25 15:25:05.212024] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.156 [2024-07-25 15:25:05.212029] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.156 [2024-07-25 15:25:05.212033] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:13.156 [2024-07-25 15:25:05.212045] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.156 qpair failed and we were unable to recover it. 
00:29:13.156 [2024-07-25 15:25:05.221996] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.156 [2024-07-25 15:25:05.222078] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.156 [2024-07-25 15:25:05.222092] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.156 [2024-07-25 15:25:05.222097] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.156 [2024-07-25 15:25:05.222101] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:13.156 [2024-07-25 15:25:05.222113] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.156 qpair failed and we were unable to recover it. 
00:29:13.156 [2024-07-25 15:25:05.232015] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.156 [2024-07-25 15:25:05.232101] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.156 [2024-07-25 15:25:05.232114] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.156 [2024-07-25 15:25:05.232119] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.156 [2024-07-25 15:25:05.232123] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:13.156 [2024-07-25 15:25:05.232135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.156 qpair failed and we were unable to recover it. 
00:29:13.156 [2024-07-25 15:25:05.242041] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.156 [2024-07-25 15:25:05.242131] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.156 [2024-07-25 15:25:05.242144] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.156 [2024-07-25 15:25:05.242149] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.156 [2024-07-25 15:25:05.242153] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:13.156 [2024-07-25 15:25:05.242164] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.156 qpair failed and we were unable to recover it. 
00:29:13.156 [2024-07-25 15:25:05.252038] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.156 [2024-07-25 15:25:05.252116] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.156 [2024-07-25 15:25:05.252130] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.156 [2024-07-25 15:25:05.252135] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.156 [2024-07-25 15:25:05.252139] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:13.156 [2024-07-25 15:25:05.252151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.156 qpair failed and we were unable to recover it. 
00:29:13.156 [2024-07-25 15:25:05.262112] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.156 [2024-07-25 15:25:05.262194] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.156 [2024-07-25 15:25:05.262210] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.156 [2024-07-25 15:25:05.262219] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.156 [2024-07-25 15:25:05.262223] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:13.156 [2024-07-25 15:25:05.262235] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.156 qpair failed and we were unable to recover it. 
00:29:13.156 [2024-07-25 15:25:05.272107] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.156 [2024-07-25 15:25:05.272196] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.156 [2024-07-25 15:25:05.272211] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.156 [2024-07-25 15:25:05.272217] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.156 [2024-07-25 15:25:05.272221] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:13.156 [2024-07-25 15:25:05.272232] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.156 qpair failed and we were unable to recover it. 
00:29:13.156 [2024-07-25 15:25:05.282141] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.156 [2024-07-25 15:25:05.282240] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.156 [2024-07-25 15:25:05.282253] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.156 [2024-07-25 15:25:05.282259] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.156 [2024-07-25 15:25:05.282263] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:13.156 [2024-07-25 15:25:05.282274] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.156 qpair failed and we were unable to recover it. 
00:29:13.156 [2024-07-25 15:25:05.292308] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.156 [2024-07-25 15:25:05.292388] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.156 [2024-07-25 15:25:05.292400] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.156 [2024-07-25 15:25:05.292405] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.156 [2024-07-25 15:25:05.292410] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:13.156 [2024-07-25 15:25:05.292422] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.156 qpair failed and we were unable to recover it. 
00:29:13.156 [2024-07-25 15:25:05.302230] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.156 [2024-07-25 15:25:05.302343] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.156 [2024-07-25 15:25:05.302356] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.156 [2024-07-25 15:25:05.302361] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.156 [2024-07-25 15:25:05.302365] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:13.156 [2024-07-25 15:25:05.302377] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.156 qpair failed and we were unable to recover it. 
00:29:13.156 [2024-07-25 15:25:05.312221] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.156 [2024-07-25 15:25:05.312305] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.156 [2024-07-25 15:25:05.312319] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.156 [2024-07-25 15:25:05.312324] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.156 [2024-07-25 15:25:05.312328] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:13.156 [2024-07-25 15:25:05.312340] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.156 qpair failed and we were unable to recover it. 
00:29:13.156 [2024-07-25 15:25:05.322264] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.156 [2024-07-25 15:25:05.322345] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.156 [2024-07-25 15:25:05.322358] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.156 [2024-07-25 15:25:05.322363] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.156 [2024-07-25 15:25:05.322368] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:13.156 [2024-07-25 15:25:05.322379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.156 qpair failed and we were unable to recover it. 
00:29:13.156 [2024-07-25 15:25:05.332303] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.156 [2024-07-25 15:25:05.332382] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.156 [2024-07-25 15:25:05.332396] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.156 [2024-07-25 15:25:05.332401] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.156 [2024-07-25 15:25:05.332405] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:13.157 [2024-07-25 15:25:05.332418] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.157 qpair failed and we were unable to recover it. 
00:29:13.157 [2024-07-25 15:25:05.342194] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.157 [2024-07-25 15:25:05.342283] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.157 [2024-07-25 15:25:05.342296] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.157 [2024-07-25 15:25:05.342301] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.157 [2024-07-25 15:25:05.342305] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:13.157 [2024-07-25 15:25:05.342317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.157 qpair failed and we were unable to recover it. 
00:29:13.419 [2024-07-25 15:25:05.352299] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.419 [2024-07-25 15:25:05.352384] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.419 [2024-07-25 15:25:05.352400] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.419 [2024-07-25 15:25:05.352405] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.419 [2024-07-25 15:25:05.352410] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:13.419 [2024-07-25 15:25:05.352422] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.419 qpair failed and we were unable to recover it. 
00:29:13.419 [2024-07-25 15:25:05.362366] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.419 [2024-07-25 15:25:05.362451] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.419 [2024-07-25 15:25:05.362464] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.419 [2024-07-25 15:25:05.362469] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.419 [2024-07-25 15:25:05.362473] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:13.419 [2024-07-25 15:25:05.362484] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.419 qpair failed and we were unable to recover it. 
00:29:13.419 [2024-07-25 15:25:05.372430] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.419 [2024-07-25 15:25:05.372510] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.419 [2024-07-25 15:25:05.372523] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.419 [2024-07-25 15:25:05.372528] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.419 [2024-07-25 15:25:05.372532] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:13.419 [2024-07-25 15:25:05.372543] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.419 qpair failed and we were unable to recover it. 
00:29:13.419 [2024-07-25 15:25:05.382442] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.419 [2024-07-25 15:25:05.382528] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.419 [2024-07-25 15:25:05.382540] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.419 [2024-07-25 15:25:05.382545] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.419 [2024-07-25 15:25:05.382549] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:13.419 [2024-07-25 15:25:05.382560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.419 qpair failed and we were unable to recover it. 
00:29:13.419 [2024-07-25 15:25:05.392438] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.419 [2024-07-25 15:25:05.392536] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.419 [2024-07-25 15:25:05.392549] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.419 [2024-07-25 15:25:05.392554] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.419 [2024-07-25 15:25:05.392558] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:13.419 [2024-07-25 15:25:05.392573] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.419 qpair failed and we were unable to recover it. 
00:29:13.419 [2024-07-25 15:25:05.402483] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.419 [2024-07-25 15:25:05.402562] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.419 [2024-07-25 15:25:05.402574] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.419 [2024-07-25 15:25:05.402579] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.419 [2024-07-25 15:25:05.402583] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:13.419 [2024-07-25 15:25:05.402594] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.419 qpair failed and we were unable to recover it. 
00:29:13.419 [2024-07-25 15:25:05.412520] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.419 [2024-07-25 15:25:05.412651] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.419 [2024-07-25 15:25:05.412664] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.419 [2024-07-25 15:25:05.412669] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.419 [2024-07-25 15:25:05.412673] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:13.419 [2024-07-25 15:25:05.412684] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.419 qpair failed and we were unable to recover it. 
00:29:13.419 [2024-07-25 15:25:05.422527] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.419 [2024-07-25 15:25:05.422615] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.419 [2024-07-25 15:25:05.422628] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.419 [2024-07-25 15:25:05.422633] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.419 [2024-07-25 15:25:05.422637] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:13.419 [2024-07-25 15:25:05.422649] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.419 qpair failed and we were unable to recover it. 
00:29:13.419 [2024-07-25 15:25:05.432567] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.419 [2024-07-25 15:25:05.432653] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.419 [2024-07-25 15:25:05.432666] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.419 [2024-07-25 15:25:05.432671] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.420 [2024-07-25 15:25:05.432675] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:13.420 [2024-07-25 15:25:05.432686] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.420 qpair failed and we were unable to recover it. 
00:29:13.420 [2024-07-25 15:25:05.442619] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.420 [2024-07-25 15:25:05.442701] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.420 [2024-07-25 15:25:05.442716] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.420 [2024-07-25 15:25:05.442722] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.420 [2024-07-25 15:25:05.442726] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:13.420 [2024-07-25 15:25:05.442738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.420 qpair failed and we were unable to recover it. 
00:29:13.420 [2024-07-25 15:25:05.452817] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.420 [2024-07-25 15:25:05.452898] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.420 [2024-07-25 15:25:05.452911] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.420 [2024-07-25 15:25:05.452916] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.420 [2024-07-25 15:25:05.452920] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:13.420 [2024-07-25 15:25:05.452931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.420 qpair failed and we were unable to recover it. 
00:29:13.420 [2024-07-25 15:25:05.462558] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.420 [2024-07-25 15:25:05.462651] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.420 [2024-07-25 15:25:05.462670] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.420 [2024-07-25 15:25:05.462677] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.420 [2024-07-25 15:25:05.462681] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:13.420 [2024-07-25 15:25:05.462697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.420 qpair failed and we were unable to recover it. 
00:29:13.420 [2024-07-25 15:25:05.472675] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.420 [2024-07-25 15:25:05.472757] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.420 [2024-07-25 15:25:05.472771] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.420 [2024-07-25 15:25:05.472777] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.420 [2024-07-25 15:25:05.472782] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:13.420 [2024-07-25 15:25:05.472794] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.420 qpair failed and we were unable to recover it. 
00:29:13.420 [2024-07-25 15:25:05.482704] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.420 [2024-07-25 15:25:05.482785] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.420 [2024-07-25 15:25:05.482798] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.420 [2024-07-25 15:25:05.482803] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.420 [2024-07-25 15:25:05.482807] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:13.420 [2024-07-25 15:25:05.482823] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.420 qpair failed and we were unable to recover it. 
00:29:13.420 [2024-07-25 15:25:05.492722] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.420 [2024-07-25 15:25:05.492812] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.420 [2024-07-25 15:25:05.492831] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.420 [2024-07-25 15:25:05.492837] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.420 [2024-07-25 15:25:05.492842] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:13.420 [2024-07-25 15:25:05.492857] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.420 qpair failed and we were unable to recover it. 
00:29:13.420 [2024-07-25 15:25:05.502819] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.420 [2024-07-25 15:25:05.502939] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.420 [2024-07-25 15:25:05.502953] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.420 [2024-07-25 15:25:05.502957] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.420 [2024-07-25 15:25:05.502962] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:13.420 [2024-07-25 15:25:05.502974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.420 qpair failed and we were unable to recover it. 
00:29:13.420 [2024-07-25 15:25:05.512821] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.420 [2024-07-25 15:25:05.512913] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.420 [2024-07-25 15:25:05.512933] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.420 [2024-07-25 15:25:05.512939] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.420 [2024-07-25 15:25:05.512943] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:13.420 [2024-07-25 15:25:05.512959] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.420 qpair failed and we were unable to recover it. 
00:29:13.420 [2024-07-25 15:25:05.522833] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.420 [2024-07-25 15:25:05.522964] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.420 [2024-07-25 15:25:05.522983] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.420 [2024-07-25 15:25:05.522990] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.420 [2024-07-25 15:25:05.522994] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:13.420 [2024-07-25 15:25:05.523009] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.420 qpair failed and we were unable to recover it. 
00:29:13.420 [2024-07-25 15:25:05.532871] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.420 [2024-07-25 15:25:05.532969] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.420 [2024-07-25 15:25:05.532988] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.420 [2024-07-25 15:25:05.532994] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.420 [2024-07-25 15:25:05.532999] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:13.420 [2024-07-25 15:25:05.533014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.420 qpair failed and we were unable to recover it. 
00:29:13.420 [2024-07-25 15:25:05.542884] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.420 [2024-07-25 15:25:05.542971] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.420 [2024-07-25 15:25:05.542990] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.420 [2024-07-25 15:25:05.542996] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.420 [2024-07-25 15:25:05.543001] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:13.420 [2024-07-25 15:25:05.543017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.420 qpair failed and we were unable to recover it. 
00:29:13.420 [2024-07-25 15:25:05.552915] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.420 [2024-07-25 15:25:05.553008] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.420 [2024-07-25 15:25:05.553022] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.420 [2024-07-25 15:25:05.553027] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.420 [2024-07-25 15:25:05.553031] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:13.420 [2024-07-25 15:25:05.553043] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.420 qpair failed and we were unable to recover it. 
00:29:13.420 [2024-07-25 15:25:05.562946] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.421 [2024-07-25 15:25:05.563025] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.421 [2024-07-25 15:25:05.563037] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.421 [2024-07-25 15:25:05.563043] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.421 [2024-07-25 15:25:05.563047] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:13.421 [2024-07-25 15:25:05.563058] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.421 qpair failed and we were unable to recover it. 
00:29:13.421 [2024-07-25 15:25:05.572911] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.421 [2024-07-25 15:25:05.573004] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.421 [2024-07-25 15:25:05.573023] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.421 [2024-07-25 15:25:05.573029] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.421 [2024-07-25 15:25:05.573037] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:13.421 [2024-07-25 15:25:05.573052] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.421 qpair failed and we were unable to recover it. 
00:29:13.421 [2024-07-25 15:25:05.582892] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.421 [2024-07-25 15:25:05.582994] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.421 [2024-07-25 15:25:05.583013] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.421 [2024-07-25 15:25:05.583020] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.421 [2024-07-25 15:25:05.583024] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:13.421 [2024-07-25 15:25:05.583039] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.421 qpair failed and we were unable to recover it. 
00:29:13.421 [2024-07-25 15:25:05.593079] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.421 [2024-07-25 15:25:05.593192] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.421 [2024-07-25 15:25:05.593215] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.421 [2024-07-25 15:25:05.593221] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.421 [2024-07-25 15:25:05.593226] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:13.421 [2024-07-25 15:25:05.593241] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.421 qpair failed and we were unable to recover it. 
00:29:13.421 [2024-07-25 15:25:05.603061] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.421 [2024-07-25 15:25:05.603145] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.421 [2024-07-25 15:25:05.603160] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.421 [2024-07-25 15:25:05.603165] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.421 [2024-07-25 15:25:05.603169] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:13.421 [2024-07-25 15:25:05.603181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.421 qpair failed and we were unable to recover it. 
00:29:13.683 [2024-07-25 15:25:05.613066] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:13.683 [2024-07-25 15:25:05.613194] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:13.683 [2024-07-25 15:25:05.613211] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:13.683 [2024-07-25 15:25:05.613216] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:13.683 [2024-07-25 15:25:05.613221] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90
00:29:13.683 [2024-07-25 15:25:05.613233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:13.683 qpair failed and we were unable to recover it.
00:29:13.683 [2024-07-25 15:25:05.623085] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:13.683 [2024-07-25 15:25:05.623169] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:13.683 [2024-07-25 15:25:05.623183] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:13.683 [2024-07-25 15:25:05.623188] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:13.683 [2024-07-25 15:25:05.623192] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90
00:29:13.683 [2024-07-25 15:25:05.623209] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:13.683 qpair failed and we were unable to recover it.
00:29:13.683 [2024-07-25 15:25:05.633195] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:13.683 [2024-07-25 15:25:05.633301] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:13.683 [2024-07-25 15:25:05.633313] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:13.683 [2024-07-25 15:25:05.633318] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:13.683 [2024-07-25 15:25:05.633322] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90
00:29:13.683 [2024-07-25 15:25:05.633334] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:13.683 qpair failed and we were unable to recover it.
00:29:13.683 [2024-07-25 15:25:05.643189] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:13.683 [2024-07-25 15:25:05.643271] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:13.683 [2024-07-25 15:25:05.643284] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:13.683 [2024-07-25 15:25:05.643289] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:13.683 [2024-07-25 15:25:05.643293] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90
00:29:13.683 [2024-07-25 15:25:05.643305] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:13.683 qpair failed and we were unable to recover it.
00:29:13.683 [2024-07-25 15:25:05.653150] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:13.684 [2024-07-25 15:25:05.653259] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:13.684 [2024-07-25 15:25:05.653272] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:13.684 [2024-07-25 15:25:05.653277] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:13.684 [2024-07-25 15:25:05.653282] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90
00:29:13.684 [2024-07-25 15:25:05.653293] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:13.684 qpair failed and we were unable to recover it.
00:29:13.684 [2024-07-25 15:25:05.663221] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:13.684 [2024-07-25 15:25:05.663302] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:13.684 [2024-07-25 15:25:05.663316] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:13.684 [2024-07-25 15:25:05.663327] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:13.684 [2024-07-25 15:25:05.663331] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90
00:29:13.684 [2024-07-25 15:25:05.663344] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:13.684 qpair failed and we were unable to recover it.
00:29:13.684 [2024-07-25 15:25:05.673237] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:13.684 [2024-07-25 15:25:05.673325] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:13.684 [2024-07-25 15:25:05.673338] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:13.684 [2024-07-25 15:25:05.673343] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:13.684 [2024-07-25 15:25:05.673347] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90
00:29:13.684 [2024-07-25 15:25:05.673359] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:13.684 qpair failed and we were unable to recover it.
00:29:13.684 [2024-07-25 15:25:05.683279] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:13.684 [2024-07-25 15:25:05.683386] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:13.684 [2024-07-25 15:25:05.683399] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:13.684 [2024-07-25 15:25:05.683404] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:13.684 [2024-07-25 15:25:05.683408] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90
00:29:13.684 [2024-07-25 15:25:05.683420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:13.684 qpair failed and we were unable to recover it.
00:29:13.684 [2024-07-25 15:25:05.693332] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:13.684 [2024-07-25 15:25:05.693413] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:13.684 [2024-07-25 15:25:05.693426] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:13.684 [2024-07-25 15:25:05.693431] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:13.684 [2024-07-25 15:25:05.693435] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90
00:29:13.684 [2024-07-25 15:25:05.693446] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:13.684 qpair failed and we were unable to recover it.
00:29:13.684 [2024-07-25 15:25:05.703301] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:13.684 [2024-07-25 15:25:05.703383] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:13.684 [2024-07-25 15:25:05.703395] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:13.684 [2024-07-25 15:25:05.703401] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:13.684 [2024-07-25 15:25:05.703405] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90
00:29:13.684 [2024-07-25 15:25:05.703417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:13.684 qpair failed and we were unable to recover it.
00:29:13.684 [2024-07-25 15:25:05.713392] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:13.684 [2024-07-25 15:25:05.713519] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:13.684 [2024-07-25 15:25:05.713532] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:13.684 [2024-07-25 15:25:05.713538] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:13.684 [2024-07-25 15:25:05.713542] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90
00:29:13.684 [2024-07-25 15:25:05.713553] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:13.684 qpair failed and we were unable to recover it.
00:29:13.684 [2024-07-25 15:25:05.723372] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:13.684 [2024-07-25 15:25:05.723452] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:13.684 [2024-07-25 15:25:05.723465] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:13.684 [2024-07-25 15:25:05.723470] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:13.684 [2024-07-25 15:25:05.723474] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90
00:29:13.684 [2024-07-25 15:25:05.723486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:13.684 qpair failed and we were unable to recover it.
00:29:13.684 [2024-07-25 15:25:05.733423] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:13.684 [2024-07-25 15:25:05.733505] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:13.684 [2024-07-25 15:25:05.733517] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:13.684 [2024-07-25 15:25:05.733522] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:13.684 [2024-07-25 15:25:05.733526] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90
00:29:13.684 [2024-07-25 15:25:05.733538] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:13.684 qpair failed and we were unable to recover it.
00:29:13.684 [2024-07-25 15:25:05.743462] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:13.684 [2024-07-25 15:25:05.743544] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:13.684 [2024-07-25 15:25:05.743557] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:13.684 [2024-07-25 15:25:05.743562] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:13.684 [2024-07-25 15:25:05.743566] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90
00:29:13.684 [2024-07-25 15:25:05.743577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:13.684 qpair failed and we were unable to recover it.
00:29:13.684 [2024-07-25 15:25:05.753469] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:13.684 [2024-07-25 15:25:05.753551] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:13.684 [2024-07-25 15:25:05.753567] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:13.684 [2024-07-25 15:25:05.753572] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:13.684 [2024-07-25 15:25:05.753576] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90
00:29:13.684 [2024-07-25 15:25:05.753587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:13.684 qpair failed and we were unable to recover it.
00:29:13.684 [2024-07-25 15:25:05.763517] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:13.684 [2024-07-25 15:25:05.763596] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:13.684 [2024-07-25 15:25:05.763608] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:13.684 [2024-07-25 15:25:05.763613] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:13.684 [2024-07-25 15:25:05.763617] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90
00:29:13.684 [2024-07-25 15:25:05.763628] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:13.684 qpair failed and we were unable to recover it.
00:29:13.684 [2024-07-25 15:25:05.773515] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:13.684 [2024-07-25 15:25:05.773600] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:13.684 [2024-07-25 15:25:05.773612] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:13.684 [2024-07-25 15:25:05.773617] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:13.684 [2024-07-25 15:25:05.773621] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90
00:29:13.684 [2024-07-25 15:25:05.773633] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:13.684 qpair failed and we were unable to recover it.
00:29:13.685 [2024-07-25 15:25:05.783582] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:13.685 [2024-07-25 15:25:05.783661] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:13.685 [2024-07-25 15:25:05.783674] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:13.685 [2024-07-25 15:25:05.783679] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:13.685 [2024-07-25 15:25:05.783683] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90
00:29:13.685 [2024-07-25 15:25:05.783694] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:13.685 qpair failed and we were unable to recover it.
00:29:13.685 [2024-07-25 15:25:05.793597] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:13.685 [2024-07-25 15:25:05.793680] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:13.685 [2024-07-25 15:25:05.793692] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:13.685 [2024-07-25 15:25:05.793697] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:13.685 [2024-07-25 15:25:05.793701] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90
00:29:13.685 [2024-07-25 15:25:05.793712] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:13.685 qpair failed and we were unable to recover it.
00:29:13.685 [2024-07-25 15:25:05.803647] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:13.685 [2024-07-25 15:25:05.803730] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:13.685 [2024-07-25 15:25:05.803743] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:13.685 [2024-07-25 15:25:05.803748] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:13.685 [2024-07-25 15:25:05.803752] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90
00:29:13.685 [2024-07-25 15:25:05.803763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:13.685 qpair failed and we were unable to recover it.
00:29:13.685 [2024-07-25 15:25:05.813677] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:13.685 [2024-07-25 15:25:05.813802] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:13.685 [2024-07-25 15:25:05.813815] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:13.685 [2024-07-25 15:25:05.813820] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:13.685 [2024-07-25 15:25:05.813825] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90
00:29:13.685 [2024-07-25 15:25:05.813836] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:13.685 qpair failed and we were unable to recover it.
00:29:13.685 [2024-07-25 15:25:05.823704] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:13.685 [2024-07-25 15:25:05.823789] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:13.685 [2024-07-25 15:25:05.823808] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:13.685 [2024-07-25 15:25:05.823815] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:13.685 [2024-07-25 15:25:05.823819] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90
00:29:13.685 [2024-07-25 15:25:05.823835] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:13.685 qpair failed and we were unable to recover it.
00:29:13.685 [2024-07-25 15:25:05.833711] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:13.685 [2024-07-25 15:25:05.833808] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:13.685 [2024-07-25 15:25:05.833827] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:13.685 [2024-07-25 15:25:05.833833] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:13.685 [2024-07-25 15:25:05.833838] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90
00:29:13.685 [2024-07-25 15:25:05.833853] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:13.685 qpair failed and we were unable to recover it.
00:29:13.685 [2024-07-25 15:25:05.843751] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:13.685 [2024-07-25 15:25:05.843836] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:13.685 [2024-07-25 15:25:05.843854] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:13.685 [2024-07-25 15:25:05.843859] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:13.685 [2024-07-25 15:25:05.843863] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90
00:29:13.685 [2024-07-25 15:25:05.843876] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:13.685 qpair failed and we were unable to recover it.
00:29:13.685 [2024-07-25 15:25:05.853764] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:13.685 [2024-07-25 15:25:05.853851] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:13.685 [2024-07-25 15:25:05.853870] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:13.685 [2024-07-25 15:25:05.853876] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:13.685 [2024-07-25 15:25:05.853880] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90
00:29:13.685 [2024-07-25 15:25:05.853896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:13.685 qpair failed and we were unable to recover it.
00:29:13.685 [2024-07-25 15:25:05.863821] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:13.685 [2024-07-25 15:25:05.863909] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:13.685 [2024-07-25 15:25:05.863928] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:13.685 [2024-07-25 15:25:05.863934] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:13.685 [2024-07-25 15:25:05.863939] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90
00:29:13.685 [2024-07-25 15:25:05.863954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:13.685 qpair failed and we were unable to recover it.
00:29:13.948 [2024-07-25 15:25:05.873844] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:13.948 [2024-07-25 15:25:05.873935] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:13.948 [2024-07-25 15:25:05.873954] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:13.948 [2024-07-25 15:25:05.873960] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:13.948 [2024-07-25 15:25:05.873965] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90
00:29:13.948 [2024-07-25 15:25:05.873980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:13.948 qpair failed and we were unable to recover it.
00:29:13.948 [2024-07-25 15:25:05.883861] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:13.948 [2024-07-25 15:25:05.883943] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:13.948 [2024-07-25 15:25:05.883962] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:13.948 [2024-07-25 15:25:05.883969] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:13.948 [2024-07-25 15:25:05.883973] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90
00:29:13.948 [2024-07-25 15:25:05.883993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:13.948 qpair failed and we were unable to recover it.
00:29:13.948 [2024-07-25 15:25:05.893863] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:13.948 [2024-07-25 15:25:05.893953] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:13.948 [2024-07-25 15:25:05.893972] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:13.948 [2024-07-25 15:25:05.893978] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:13.948 [2024-07-25 15:25:05.893982] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90
00:29:13.948 [2024-07-25 15:25:05.893998] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:13.948 qpair failed and we were unable to recover it.
00:29:13.948 [2024-07-25 15:25:05.903886] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:13.948 [2024-07-25 15:25:05.903994] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:13.948 [2024-07-25 15:25:05.904009] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:13.948 [2024-07-25 15:25:05.904014] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:13.948 [2024-07-25 15:25:05.904019] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90
00:29:13.948 [2024-07-25 15:25:05.904033] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:13.948 qpair failed and we were unable to recover it.
00:29:13.948 [2024-07-25 15:25:05.913938] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:13.948 [2024-07-25 15:25:05.914025] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:13.948 [2024-07-25 15:25:05.914039] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:13.949 [2024-07-25 15:25:05.914044] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:13.949 [2024-07-25 15:25:05.914049] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90
00:29:13.949 [2024-07-25 15:25:05.914061] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:13.949 qpair failed and we were unable to recover it.
00:29:13.949 [2024-07-25 15:25:05.923975] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:13.949 [2024-07-25 15:25:05.924057] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:13.949 [2024-07-25 15:25:05.924070] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:13.949 [2024-07-25 15:25:05.924075] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:13.949 [2024-07-25 15:25:05.924079] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90
00:29:13.949 [2024-07-25 15:25:05.924090] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:13.949 qpair failed and we were unable to recover it.
00:29:13.949 [2024-07-25 15:25:05.934007] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:13.949 [2024-07-25 15:25:05.934082] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:13.949 [2024-07-25 15:25:05.934099] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:13.949 [2024-07-25 15:25:05.934105] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:13.949 [2024-07-25 15:25:05.934109] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90
00:29:13.949 [2024-07-25 15:25:05.934121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:13.949 qpair failed and we were unable to recover it.
00:29:13.949 [2024-07-25 15:25:05.943963] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:13.949 [2024-07-25 15:25:05.944051] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:13.949 [2024-07-25 15:25:05.944064] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:13.949 [2024-07-25 15:25:05.944069] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:13.949 [2024-07-25 15:25:05.944074] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90
00:29:13.949 [2024-07-25 15:25:05.944085] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:13.949 qpair failed and we were unable to recover it.
00:29:13.949 [2024-07-25 15:25:05.953979] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:13.949 [2024-07-25 15:25:05.954067] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:13.949 [2024-07-25 15:25:05.954081] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:13.949 [2024-07-25 15:25:05.954086] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:13.949 [2024-07-25 15:25:05.954090] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90
00:29:13.949 [2024-07-25 15:25:05.954102] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:13.949 qpair failed and we were unable to recover it.
00:29:13.949 [2024-07-25 15:25:05.964113] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:13.949 [2024-07-25 15:25:05.964244] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:13.949 [2024-07-25 15:25:05.964257] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:13.949 [2024-07-25 15:25:05.964263] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:13.949 [2024-07-25 15:25:05.964267] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90
00:29:13.949 [2024-07-25 15:25:05.964279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:13.949 qpair failed and we were unable to recover it.
00:29:13.949 [2024-07-25 15:25:05.974119] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.949 [2024-07-25 15:25:05.974206] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.949 [2024-07-25 15:25:05.974220] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.949 [2024-07-25 15:25:05.974225] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.949 [2024-07-25 15:25:05.974232] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:13.949 [2024-07-25 15:25:05.974244] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.949 qpair failed and we were unable to recover it. 
00:29:13.949 [2024-07-25 15:25:05.984168] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.949 [2024-07-25 15:25:05.984250] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.949 [2024-07-25 15:25:05.984263] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.949 [2024-07-25 15:25:05.984268] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.949 [2024-07-25 15:25:05.984272] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:13.949 [2024-07-25 15:25:05.984284] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.949 qpair failed and we were unable to recover it. 
00:29:13.949 [2024-07-25 15:25:05.994075] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.949 [2024-07-25 15:25:05.994171] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.949 [2024-07-25 15:25:05.994183] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.949 [2024-07-25 15:25:05.994189] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.949 [2024-07-25 15:25:05.994193] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:13.949 [2024-07-25 15:25:05.994209] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.949 qpair failed and we were unable to recover it. 
00:29:13.949 [2024-07-25 15:25:06.004208] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.949 [2024-07-25 15:25:06.004291] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.949 [2024-07-25 15:25:06.004304] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.949 [2024-07-25 15:25:06.004309] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.949 [2024-07-25 15:25:06.004314] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:13.949 [2024-07-25 15:25:06.004325] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.949 qpair failed and we were unable to recover it. 
00:29:13.949 [2024-07-25 15:25:06.014088] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.949 [2024-07-25 15:25:06.014169] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.949 [2024-07-25 15:25:06.014182] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.949 [2024-07-25 15:25:06.014187] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.949 [2024-07-25 15:25:06.014191] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:13.949 [2024-07-25 15:25:06.014208] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.949 qpair failed and we were unable to recover it. 
00:29:13.949 [2024-07-25 15:25:06.024267] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.949 [2024-07-25 15:25:06.024395] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.949 [2024-07-25 15:25:06.024408] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.949 [2024-07-25 15:25:06.024413] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.949 [2024-07-25 15:25:06.024418] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:13.949 [2024-07-25 15:25:06.024430] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.949 qpair failed and we were unable to recover it. 
00:29:13.949 [2024-07-25 15:25:06.034435] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.949 [2024-07-25 15:25:06.034527] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.949 [2024-07-25 15:25:06.034540] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.949 [2024-07-25 15:25:06.034545] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.949 [2024-07-25 15:25:06.034550] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:13.949 [2024-07-25 15:25:06.034561] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.949 qpair failed and we were unable to recover it. 
00:29:13.950 [2024-07-25 15:25:06.044345] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.950 [2024-07-25 15:25:06.044444] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.950 [2024-07-25 15:25:06.044457] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.950 [2024-07-25 15:25:06.044461] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.950 [2024-07-25 15:25:06.044466] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:13.950 [2024-07-25 15:25:06.044476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.950 qpair failed and we were unable to recover it. 
00:29:13.950 [2024-07-25 15:25:06.054419] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.950 [2024-07-25 15:25:06.054531] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.950 [2024-07-25 15:25:06.054544] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.950 [2024-07-25 15:25:06.054549] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.950 [2024-07-25 15:25:06.054554] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:13.950 [2024-07-25 15:25:06.054565] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.950 qpair failed and we were unable to recover it. 
00:29:13.950 [2024-07-25 15:25:06.064386] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.950 [2024-07-25 15:25:06.064468] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.950 [2024-07-25 15:25:06.064481] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.950 [2024-07-25 15:25:06.064490] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.950 [2024-07-25 15:25:06.064494] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:13.950 [2024-07-25 15:25:06.064505] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.950 qpair failed and we were unable to recover it. 
00:29:13.950 [2024-07-25 15:25:06.074379] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.950 [2024-07-25 15:25:06.074470] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.950 [2024-07-25 15:25:06.074482] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.950 [2024-07-25 15:25:06.074487] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.950 [2024-07-25 15:25:06.074491] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:13.950 [2024-07-25 15:25:06.074503] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.950 qpair failed and we were unable to recover it. 
00:29:13.950 [2024-07-25 15:25:06.084388] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.950 [2024-07-25 15:25:06.084464] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.950 [2024-07-25 15:25:06.084477] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.950 [2024-07-25 15:25:06.084482] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.950 [2024-07-25 15:25:06.084486] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:13.950 [2024-07-25 15:25:06.084497] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.950 qpair failed and we were unable to recover it. 
00:29:13.950 [2024-07-25 15:25:06.094441] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.950 [2024-07-25 15:25:06.094522] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.950 [2024-07-25 15:25:06.094535] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.950 [2024-07-25 15:25:06.094540] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.950 [2024-07-25 15:25:06.094544] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:13.950 [2024-07-25 15:25:06.094555] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.950 qpair failed and we were unable to recover it. 
00:29:13.950 [2024-07-25 15:25:06.104464] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.950 [2024-07-25 15:25:06.104556] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.950 [2024-07-25 15:25:06.104570] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.950 [2024-07-25 15:25:06.104575] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.950 [2024-07-25 15:25:06.104579] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:13.950 [2024-07-25 15:25:06.104591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.950 qpair failed and we were unable to recover it. 
00:29:13.950 [2024-07-25 15:25:06.114505] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.950 [2024-07-25 15:25:06.114617] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.950 [2024-07-25 15:25:06.114630] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.950 [2024-07-25 15:25:06.114635] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.950 [2024-07-25 15:25:06.114640] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:13.950 [2024-07-25 15:25:06.114651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.950 qpair failed and we were unable to recover it. 
00:29:13.950 [2024-07-25 15:25:06.124526] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.950 [2024-07-25 15:25:06.124606] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.950 [2024-07-25 15:25:06.124619] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.950 [2024-07-25 15:25:06.124624] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.950 [2024-07-25 15:25:06.124628] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:13.950 [2024-07-25 15:25:06.124639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.950 qpair failed and we were unable to recover it. 
00:29:13.950 [2024-07-25 15:25:06.134522] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.950 [2024-07-25 15:25:06.134602] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.950 [2024-07-25 15:25:06.134615] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.950 [2024-07-25 15:25:06.134620] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.950 [2024-07-25 15:25:06.134624] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:13.950 [2024-07-25 15:25:06.134635] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.950 qpair failed and we were unable to recover it. 
00:29:14.214 [2024-07-25 15:25:06.144436] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.214 [2024-07-25 15:25:06.144517] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.214 [2024-07-25 15:25:06.144530] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.214 [2024-07-25 15:25:06.144535] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.214 [2024-07-25 15:25:06.144539] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:14.214 [2024-07-25 15:25:06.144551] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.214 qpair failed and we were unable to recover it. 
00:29:14.214 [2024-07-25 15:25:06.154589] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.214 [2024-07-25 15:25:06.154711] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.214 [2024-07-25 15:25:06.154724] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.214 [2024-07-25 15:25:06.154732] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.214 [2024-07-25 15:25:06.154736] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:14.214 [2024-07-25 15:25:06.154747] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.214 qpair failed and we were unable to recover it. 
00:29:14.214 [2024-07-25 15:25:06.164630] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.214 [2024-07-25 15:25:06.164705] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.214 [2024-07-25 15:25:06.164717] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.214 [2024-07-25 15:25:06.164723] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.214 [2024-07-25 15:25:06.164727] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:14.214 [2024-07-25 15:25:06.164738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.214 qpair failed and we were unable to recover it. 
00:29:14.214 [2024-07-25 15:25:06.174685] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.214 [2024-07-25 15:25:06.174766] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.214 [2024-07-25 15:25:06.174778] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.214 [2024-07-25 15:25:06.174783] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.214 [2024-07-25 15:25:06.174787] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:14.214 [2024-07-25 15:25:06.174799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.214 qpair failed and we were unable to recover it. 
00:29:14.214 [2024-07-25 15:25:06.184684] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.214 [2024-07-25 15:25:06.184775] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.214 [2024-07-25 15:25:06.184794] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.214 [2024-07-25 15:25:06.184801] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.214 [2024-07-25 15:25:06.184805] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:14.214 [2024-07-25 15:25:06.184820] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.214 qpair failed and we were unable to recover it. 
00:29:14.214 [2024-07-25 15:25:06.194653] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.214 [2024-07-25 15:25:06.194746] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.214 [2024-07-25 15:25:06.194760] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.214 [2024-07-25 15:25:06.194765] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.214 [2024-07-25 15:25:06.194769] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:14.214 [2024-07-25 15:25:06.194780] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.214 qpair failed and we were unable to recover it. 
00:29:14.214 [2024-07-25 15:25:06.204629] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.214 [2024-07-25 15:25:06.204726] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.214 [2024-07-25 15:25:06.204739] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.214 [2024-07-25 15:25:06.204745] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.214 [2024-07-25 15:25:06.204749] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:14.214 [2024-07-25 15:25:06.204761] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.214 qpair failed and we were unable to recover it. 
00:29:14.214 [2024-07-25 15:25:06.214780] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.214 [2024-07-25 15:25:06.214868] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.214 [2024-07-25 15:25:06.214880] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.214 [2024-07-25 15:25:06.214886] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.214 [2024-07-25 15:25:06.214890] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:14.214 [2024-07-25 15:25:06.214902] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.214 qpair failed and we were unable to recover it. 
00:29:14.214 [2024-07-25 15:25:06.224803] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.214 [2024-07-25 15:25:06.224889] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.214 [2024-07-25 15:25:06.224909] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.214 [2024-07-25 15:25:06.224915] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.214 [2024-07-25 15:25:06.224919] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:14.214 [2024-07-25 15:25:06.224935] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.214 qpair failed and we were unable to recover it. 
00:29:14.214 [2024-07-25 15:25:06.234764] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.214 [2024-07-25 15:25:06.234846] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.214 [2024-07-25 15:25:06.234865] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.214 [2024-07-25 15:25:06.234871] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.214 [2024-07-25 15:25:06.234875] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:14.214 [2024-07-25 15:25:06.234891] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.214 qpair failed and we were unable to recover it. 
00:29:14.214 [2024-07-25 15:25:06.244847] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.214 [2024-07-25 15:25:06.244966] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.214 [2024-07-25 15:25:06.244989] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.214 [2024-07-25 15:25:06.244996] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.214 [2024-07-25 15:25:06.245000] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:14.214 [2024-07-25 15:25:06.245016] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.214 qpair failed and we were unable to recover it. 
00:29:14.214 [2024-07-25 15:25:06.254885] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.215 [2024-07-25 15:25:06.254977] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.215 [2024-07-25 15:25:06.254997] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.215 [2024-07-25 15:25:06.255003] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.215 [2024-07-25 15:25:06.255007] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:14.215 [2024-07-25 15:25:06.255022] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.215 qpair failed and we were unable to recover it. 
00:29:14.215 [2024-07-25 15:25:06.264891] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.215 [2024-07-25 15:25:06.264977] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.215 [2024-07-25 15:25:06.264997] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.215 [2024-07-25 15:25:06.265003] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.215 [2024-07-25 15:25:06.265008] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:14.215 [2024-07-25 15:25:06.265023] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.215 qpair failed and we were unable to recover it. 
00:29:14.215 [2024-07-25 15:25:06.274751] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.215 [2024-07-25 15:25:06.274837] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.215 [2024-07-25 15:25:06.274857] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.215 [2024-07-25 15:25:06.274863] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.215 [2024-07-25 15:25:06.274868] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:14.215 [2024-07-25 15:25:06.274883] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.215 qpair failed and we were unable to recover it. 
00:29:14.215 [2024-07-25 15:25:06.284927] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.215 [2024-07-25 15:25:06.285008] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.215 [2024-07-25 15:25:06.285022] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.215 [2024-07-25 15:25:06.285028] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.215 [2024-07-25 15:25:06.285032] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:14.215 [2024-07-25 15:25:06.285048] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.215 qpair failed and we were unable to recover it. 
00:29:14.215 [2024-07-25 15:25:06.294944] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.215 [2024-07-25 15:25:06.295029] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.215 [2024-07-25 15:25:06.295044] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.215 [2024-07-25 15:25:06.295049] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.215 [2024-07-25 15:25:06.295053] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:14.215 [2024-07-25 15:25:06.295065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.215 qpair failed and we were unable to recover it. 
00:29:14.215 [2024-07-25 15:25:06.305014] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.215 [2024-07-25 15:25:06.305099] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.215 [2024-07-25 15:25:06.305112] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.215 [2024-07-25 15:25:06.305117] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.215 [2024-07-25 15:25:06.305122] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:14.215 [2024-07-25 15:25:06.305133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.215 qpair failed and we were unable to recover it. 
00:29:14.215 [2024-07-25 15:25:06.314985] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.215 [2024-07-25 15:25:06.315064] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.215 [2024-07-25 15:25:06.315077] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.215 [2024-07-25 15:25:06.315082] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.215 [2024-07-25 15:25:06.315086] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:14.215 [2024-07-25 15:25:06.315098] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.215 qpair failed and we were unable to recover it. 
00:29:14.215 [2024-07-25 15:25:06.325081] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.215 [2024-07-25 15:25:06.325189] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.215 [2024-07-25 15:25:06.325209] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.215 [2024-07-25 15:25:06.325215] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.215 [2024-07-25 15:25:06.325219] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:14.215 [2024-07-25 15:25:06.325231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.215 qpair failed and we were unable to recover it. 
00:29:14.215 [2024-07-25 15:25:06.334951] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.215 [2024-07-25 15:25:06.335033] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.215 [2024-07-25 15:25:06.335049] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.215 [2024-07-25 15:25:06.335054] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.215 [2024-07-25 15:25:06.335058] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:14.215 [2024-07-25 15:25:06.335070] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.215 qpair failed and we were unable to recover it. 
00:29:14.215 [2024-07-25 15:25:06.345054] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.215 [2024-07-25 15:25:06.345139] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.215 [2024-07-25 15:25:06.345152] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.215 [2024-07-25 15:25:06.345157] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.215 [2024-07-25 15:25:06.345161] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:14.215 [2024-07-25 15:25:06.345173] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.215 qpair failed and we were unable to recover it. 
00:29:14.215 [2024-07-25 15:25:06.355108] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.215 [2024-07-25 15:25:06.355192] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.215 [2024-07-25 15:25:06.355209] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.215 [2024-07-25 15:25:06.355215] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.215 [2024-07-25 15:25:06.355219] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:14.215 [2024-07-25 15:25:06.355231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.215 qpair failed and we were unable to recover it. 
00:29:14.215 [2024-07-25 15:25:06.365227] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.215 [2024-07-25 15:25:06.365308] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.215 [2024-07-25 15:25:06.365321] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.215 [2024-07-25 15:25:06.365326] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.215 [2024-07-25 15:25:06.365330] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:14.215 [2024-07-25 15:25:06.365341] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.215 qpair failed and we were unable to recover it. 
00:29:14.215 [2024-07-25 15:25:06.375162] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.215 [2024-07-25 15:25:06.375242] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.215 [2024-07-25 15:25:06.375254] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.215 [2024-07-25 15:25:06.375259] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.216 [2024-07-25 15:25:06.375268] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:14.216 [2024-07-25 15:25:06.375281] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.216 qpair failed and we were unable to recover it. 
00:29:14.216 [2024-07-25 15:25:06.385176] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.216 [2024-07-25 15:25:06.385289] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.216 [2024-07-25 15:25:06.385302] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.216 [2024-07-25 15:25:06.385307] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.216 [2024-07-25 15:25:06.385312] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:14.216 [2024-07-25 15:25:06.385323] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.216 qpair failed and we were unable to recover it. 
00:29:14.216 [2024-07-25 15:25:06.395212] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.216 [2024-07-25 15:25:06.395292] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.216 [2024-07-25 15:25:06.395305] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.216 [2024-07-25 15:25:06.395310] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.216 [2024-07-25 15:25:06.395315] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:14.216 [2024-07-25 15:25:06.395326] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.216 qpair failed and we were unable to recover it. 
00:29:14.478 [2024-07-25 15:25:06.405264] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.478 [2024-07-25 15:25:06.405343] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.478 [2024-07-25 15:25:06.405356] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.478 [2024-07-25 15:25:06.405361] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.478 [2024-07-25 15:25:06.405365] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:14.478 [2024-07-25 15:25:06.405377] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.478 qpair failed and we were unable to recover it. 
00:29:14.478 [2024-07-25 15:25:06.415236] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.478 [2024-07-25 15:25:06.415314] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.479 [2024-07-25 15:25:06.415327] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.479 [2024-07-25 15:25:06.415332] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.479 [2024-07-25 15:25:06.415337] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:14.479 [2024-07-25 15:25:06.415348] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.479 qpair failed and we were unable to recover it. 
00:29:14.479 [2024-07-25 15:25:06.425284] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.479 [2024-07-25 15:25:06.425393] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.479 [2024-07-25 15:25:06.425406] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.479 [2024-07-25 15:25:06.425411] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.479 [2024-07-25 15:25:06.425416] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:14.479 [2024-07-25 15:25:06.425427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.479 qpair failed and we were unable to recover it. 
00:29:14.479 [2024-07-25 15:25:06.435318] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.479 [2024-07-25 15:25:06.435397] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.479 [2024-07-25 15:25:06.435410] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.479 [2024-07-25 15:25:06.435415] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.479 [2024-07-25 15:25:06.435419] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:14.479 [2024-07-25 15:25:06.435431] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.479 qpair failed and we were unable to recover it. 
00:29:14.479 [2024-07-25 15:25:06.445391] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.479 [2024-07-25 15:25:06.445518] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.479 [2024-07-25 15:25:06.445531] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.479 [2024-07-25 15:25:06.445536] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.479 [2024-07-25 15:25:06.445540] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:14.479 [2024-07-25 15:25:06.445551] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.479 qpair failed and we were unable to recover it. 
00:29:14.479 [2024-07-25 15:25:06.455371] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.479 [2024-07-25 15:25:06.455448] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.479 [2024-07-25 15:25:06.455460] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.479 [2024-07-25 15:25:06.455465] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.479 [2024-07-25 15:25:06.455469] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:14.479 [2024-07-25 15:25:06.455481] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.479 qpair failed and we were unable to recover it. 
00:29:14.479 [2024-07-25 15:25:06.465400] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.479 [2024-07-25 15:25:06.465477] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.479 [2024-07-25 15:25:06.465490] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.479 [2024-07-25 15:25:06.465498] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.479 [2024-07-25 15:25:06.465502] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:14.479 [2024-07-25 15:25:06.465514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.479 qpair failed and we were unable to recover it. 
00:29:14.479 [2024-07-25 15:25:06.475415] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.479 [2024-07-25 15:25:06.475491] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.479 [2024-07-25 15:25:06.475504] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.479 [2024-07-25 15:25:06.475509] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.479 [2024-07-25 15:25:06.475513] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:14.479 [2024-07-25 15:25:06.475524] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.479 qpair failed and we were unable to recover it. 
00:29:14.479 [2024-07-25 15:25:06.485346] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.479 [2024-07-25 15:25:06.485423] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.479 [2024-07-25 15:25:06.485436] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.479 [2024-07-25 15:25:06.485441] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.479 [2024-07-25 15:25:06.485445] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:14.479 [2024-07-25 15:25:06.485456] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.479 qpair failed and we were unable to recover it. 
00:29:14.479 [2024-07-25 15:25:06.495473] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.479 [2024-07-25 15:25:06.495550] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.479 [2024-07-25 15:25:06.495563] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.479 [2024-07-25 15:25:06.495568] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.479 [2024-07-25 15:25:06.495572] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:14.479 [2024-07-25 15:25:06.495584] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.479 qpair failed and we were unable to recover it. 
00:29:14.479 [2024-07-25 15:25:06.505487] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.479 [2024-07-25 15:25:06.505562] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.479 [2024-07-25 15:25:06.505575] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.479 [2024-07-25 15:25:06.505580] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.479 [2024-07-25 15:25:06.505584] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:14.479 [2024-07-25 15:25:06.505595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.479 qpair failed and we were unable to recover it. 
00:29:14.479 [2024-07-25 15:25:06.515501] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.479 [2024-07-25 15:25:06.515578] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.479 [2024-07-25 15:25:06.515591] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.479 [2024-07-25 15:25:06.515596] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.479 [2024-07-25 15:25:06.515600] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:14.479 [2024-07-25 15:25:06.515611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.479 qpair failed and we were unable to recover it. 
00:29:14.479 [2024-07-25 15:25:06.525554] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.479 [2024-07-25 15:25:06.525631] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.479 [2024-07-25 15:25:06.525644] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.479 [2024-07-25 15:25:06.525649] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.479 [2024-07-25 15:25:06.525653] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:14.479 [2024-07-25 15:25:06.525664] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.479 qpair failed and we were unable to recover it. 
00:29:14.479 [2024-07-25 15:25:06.535550] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.479 [2024-07-25 15:25:06.535621] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.479 [2024-07-25 15:25:06.535634] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.479 [2024-07-25 15:25:06.535639] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.480 [2024-07-25 15:25:06.535643] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:14.480 [2024-07-25 15:25:06.535655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.480 qpair failed and we were unable to recover it. 
00:29:14.480 [2024-07-25 15:25:06.545637] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.480 [2024-07-25 15:25:06.545727] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.480 [2024-07-25 15:25:06.545739] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.480 [2024-07-25 15:25:06.545744] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.480 [2024-07-25 15:25:06.545749] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:14.480 [2024-07-25 15:25:06.545760] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.480 qpair failed and we were unable to recover it. 
00:29:14.480 [2024-07-25 15:25:06.555638] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.480 [2024-07-25 15:25:06.555713] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.480 [2024-07-25 15:25:06.555725] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.480 [2024-07-25 15:25:06.555734] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.480 [2024-07-25 15:25:06.555738] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:14.480 [2024-07-25 15:25:06.555749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.480 qpair failed and we were unable to recover it. 
00:29:14.480 [2024-07-25 15:25:06.565713] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.480 [2024-07-25 15:25:06.565797] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.480 [2024-07-25 15:25:06.565809] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.480 [2024-07-25 15:25:06.565814] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.480 [2024-07-25 15:25:06.565818] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:14.480 [2024-07-25 15:25:06.565830] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.480 qpair failed and we were unable to recover it. 
00:29:14.480 [2024-07-25 15:25:06.575665] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.480 [2024-07-25 15:25:06.575776] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.480 [2024-07-25 15:25:06.575795] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.480 [2024-07-25 15:25:06.575802] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.480 [2024-07-25 15:25:06.575806] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:14.480 [2024-07-25 15:25:06.575822] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.480 qpair failed and we were unable to recover it. 
00:29:14.480 [2024-07-25 15:25:06.585736] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.480 [2024-07-25 15:25:06.585815] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.480 [2024-07-25 15:25:06.585829] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.480 [2024-07-25 15:25:06.585834] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.480 [2024-07-25 15:25:06.585839] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:14.480 [2024-07-25 15:25:06.585851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.480 qpair failed and we were unable to recover it. 
00:29:14.480 [2024-07-25 15:25:06.595761] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.480 [2024-07-25 15:25:06.595885] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.480 [2024-07-25 15:25:06.595898] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.480 [2024-07-25 15:25:06.595904] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.480 [2024-07-25 15:25:06.595908] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:14.480 [2024-07-25 15:25:06.595920] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.480 qpair failed and we were unable to recover it. 
00:29:14.480 [2024-07-25 15:25:06.605810] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.480 [2024-07-25 15:25:06.605888] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.480 [2024-07-25 15:25:06.605907] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.480 [2024-07-25 15:25:06.605913] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.480 [2024-07-25 15:25:06.605917] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:14.480 [2024-07-25 15:25:06.605933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.480 qpair failed and we were unable to recover it. 
00:29:14.480 [2024-07-25 15:25:06.615803] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.480 [2024-07-25 15:25:06.615881] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.480 [2024-07-25 15:25:06.615901] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.480 [2024-07-25 15:25:06.615907] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.480 [2024-07-25 15:25:06.615912] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:14.480 [2024-07-25 15:25:06.615926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.480 qpair failed and we were unable to recover it. 
00:29:14.480 [2024-07-25 15:25:06.625850] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.480 [2024-07-25 15:25:06.625929] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.480 [2024-07-25 15:25:06.625948] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.480 [2024-07-25 15:25:06.625954] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.480 [2024-07-25 15:25:06.625959] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:14.480 [2024-07-25 15:25:06.625974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.480 qpair failed and we were unable to recover it. 
00:29:14.480 [2024-07-25 15:25:06.635854] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.480 [2024-07-25 15:25:06.635939] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.480 [2024-07-25 15:25:06.635958] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.480 [2024-07-25 15:25:06.635964] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.480 [2024-07-25 15:25:06.635969] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:14.480 [2024-07-25 15:25:06.635984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.480 qpair failed and we were unable to recover it. 
00:29:14.480 [2024-07-25 15:25:06.645946] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.480 [2024-07-25 15:25:06.646032] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.480 [2024-07-25 15:25:06.646054] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.480 [2024-07-25 15:25:06.646060] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.480 [2024-07-25 15:25:06.646064] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:14.480 [2024-07-25 15:25:06.646078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.480 qpair failed and we were unable to recover it. 
00:29:14.480 [2024-07-25 15:25:06.655882] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.480 [2024-07-25 15:25:06.655961] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.480 [2024-07-25 15:25:06.655976] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.480 [2024-07-25 15:25:06.655981] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.480 [2024-07-25 15:25:06.655986] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:14.480 [2024-07-25 15:25:06.655998] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.480 qpair failed and we were unable to recover it. 
00:29:14.480 [2024-07-25 15:25:06.665991] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.481 [2024-07-25 15:25:06.666070] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.481 [2024-07-25 15:25:06.666084] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.481 [2024-07-25 15:25:06.666089] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.481 [2024-07-25 15:25:06.666093] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:14.481 [2024-07-25 15:25:06.666105] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.481 qpair failed and we were unable to recover it. 
00:29:14.743 [2024-07-25 15:25:06.675972] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.743 [2024-07-25 15:25:06.676066] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.744 [2024-07-25 15:25:06.676078] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.744 [2024-07-25 15:25:06.676084] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.744 [2024-07-25 15:25:06.676088] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:14.744 [2024-07-25 15:25:06.676099] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.744 qpair failed and we were unable to recover it. 
00:29:14.744 [2024-07-25 15:25:06.686026] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.744 [2024-07-25 15:25:06.686108] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.744 [2024-07-25 15:25:06.686120] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.744 [2024-07-25 15:25:06.686125] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.744 [2024-07-25 15:25:06.686130] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:14.744 [2024-07-25 15:25:06.686145] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.744 qpair failed and we were unable to recover it. 
00:29:14.744 [2024-07-25 15:25:06.696037] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.744 [2024-07-25 15:25:06.696168] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.744 [2024-07-25 15:25:06.696181] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.744 [2024-07-25 15:25:06.696186] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.744 [2024-07-25 15:25:06.696191] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:14.744 [2024-07-25 15:25:06.696209] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.744 qpair failed and we were unable to recover it. 
00:29:14.744 [2024-07-25 15:25:06.706073] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.744 [2024-07-25 15:25:06.706152] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.744 [2024-07-25 15:25:06.706165] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.744 [2024-07-25 15:25:06.706171] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.744 [2024-07-25 15:25:06.706175] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:14.744 [2024-07-25 15:25:06.706187] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.744 qpair failed and we were unable to recover it. 
00:29:14.744 [2024-07-25 15:25:06.716049] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.744 [2024-07-25 15:25:06.716130] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.744 [2024-07-25 15:25:06.716144] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.744 [2024-07-25 15:25:06.716149] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.744 [2024-07-25 15:25:06.716153] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:14.744 [2024-07-25 15:25:06.716164] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.744 qpair failed and we were unable to recover it. 
00:29:14.744 [2024-07-25 15:25:06.726143] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.744 [2024-07-25 15:25:06.726221] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.744 [2024-07-25 15:25:06.726234] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.744 [2024-07-25 15:25:06.726239] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.744 [2024-07-25 15:25:06.726244] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:14.744 [2024-07-25 15:25:06.726255] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.744 qpair failed and we were unable to recover it. 
00:29:14.744 [2024-07-25 15:25:06.736127] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.744 [2024-07-25 15:25:06.736205] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.744 [2024-07-25 15:25:06.736221] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.744 [2024-07-25 15:25:06.736226] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.744 [2024-07-25 15:25:06.736230] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:14.744 [2024-07-25 15:25:06.736242] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.744 qpair failed and we were unable to recover it. 
00:29:14.744 [2024-07-25 15:25:06.746155] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.744 [2024-07-25 15:25:06.746237] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.744 [2024-07-25 15:25:06.746251] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.744 [2024-07-25 15:25:06.746256] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.744 [2024-07-25 15:25:06.746260] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:14.744 [2024-07-25 15:25:06.746272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.744 qpair failed and we were unable to recover it. 
00:29:14.744 [2024-07-25 15:25:06.756184] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.744 [2024-07-25 15:25:06.756266] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.744 [2024-07-25 15:25:06.756278] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.744 [2024-07-25 15:25:06.756284] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.744 [2024-07-25 15:25:06.756288] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:14.744 [2024-07-25 15:25:06.756299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.744 qpair failed and we were unable to recover it. 
00:29:14.744 [2024-07-25 15:25:06.766249] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.744 [2024-07-25 15:25:06.766327] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.744 [2024-07-25 15:25:06.766340] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.744 [2024-07-25 15:25:06.766345] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.744 [2024-07-25 15:25:06.766349] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:14.744 [2024-07-25 15:25:06.766361] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.744 qpair failed and we were unable to recover it. 
00:29:14.744 [2024-07-25 15:25:06.776226] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.744 [2024-07-25 15:25:06.776302] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.744 [2024-07-25 15:25:06.776315] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.744 [2024-07-25 15:25:06.776320] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.744 [2024-07-25 15:25:06.776327] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:14.744 [2024-07-25 15:25:06.776338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.744 qpair failed and we were unable to recover it. 
00:29:14.744 [2024-07-25 15:25:06.786236] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.744 [2024-07-25 15:25:06.786309] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.744 [2024-07-25 15:25:06.786322] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.744 [2024-07-25 15:25:06.786327] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.744 [2024-07-25 15:25:06.786331] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:14.744 [2024-07-25 15:25:06.786342] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.744 qpair failed and we were unable to recover it. 
00:29:14.744 [2024-07-25 15:25:06.796272] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.744 [2024-07-25 15:25:06.796348] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.744 [2024-07-25 15:25:06.796361] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.744 [2024-07-25 15:25:06.796365] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.744 [2024-07-25 15:25:06.796370] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:14.744 [2024-07-25 15:25:06.796381] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.745 qpair failed and we were unable to recover it. 
00:29:14.745 [2024-07-25 15:25:06.806323] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.745 [2024-07-25 15:25:06.806397] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.745 [2024-07-25 15:25:06.806409] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.745 [2024-07-25 15:25:06.806414] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.745 [2024-07-25 15:25:06.806418] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:14.745 [2024-07-25 15:25:06.806429] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.745 qpair failed and we were unable to recover it. 
00:29:14.745 [2024-07-25 15:25:06.816346] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.745 [2024-07-25 15:25:06.816420] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.745 [2024-07-25 15:25:06.816433] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.745 [2024-07-25 15:25:06.816438] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.745 [2024-07-25 15:25:06.816442] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:14.745 [2024-07-25 15:25:06.816453] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.745 qpair failed and we were unable to recover it. 
00:29:14.745 [2024-07-25 15:25:06.826317] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.745 [2024-07-25 15:25:06.826396] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.745 [2024-07-25 15:25:06.826409] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.745 [2024-07-25 15:25:06.826414] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.745 [2024-07-25 15:25:06.826418] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:14.745 [2024-07-25 15:25:06.826429] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.745 qpair failed and we were unable to recover it. 
00:29:14.745 [2024-07-25 15:25:06.836407] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.745 [2024-07-25 15:25:06.836487] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.745 [2024-07-25 15:25:06.836500] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.745 [2024-07-25 15:25:06.836505] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.745 [2024-07-25 15:25:06.836509] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:14.745 [2024-07-25 15:25:06.836520] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.745 qpair failed and we were unable to recover it. 
00:29:14.745 [2024-07-25 15:25:06.846434] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.745 [2024-07-25 15:25:06.846521] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.745 [2024-07-25 15:25:06.846533] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.745 [2024-07-25 15:25:06.846538] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.745 [2024-07-25 15:25:06.846543] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:14.745 [2024-07-25 15:25:06.846554] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.745 qpair failed and we were unable to recover it. 
00:29:14.745 [2024-07-25 15:25:06.856315] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.745 [2024-07-25 15:25:06.856438] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.745 [2024-07-25 15:25:06.856451] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.745 [2024-07-25 15:25:06.856456] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.745 [2024-07-25 15:25:06.856460] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:14.745 [2024-07-25 15:25:06.856472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.745 qpair failed and we were unable to recover it. 
00:29:14.745 [2024-07-25 15:25:06.866520] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.745 [2024-07-25 15:25:06.866603] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.745 [2024-07-25 15:25:06.866615] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.745 [2024-07-25 15:25:06.866620] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.745 [2024-07-25 15:25:06.866627] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:14.745 [2024-07-25 15:25:06.866639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.745 qpair failed and we were unable to recover it. 
00:29:14.745 [2024-07-25 15:25:06.876485] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.745 [2024-07-25 15:25:06.876566] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.745 [2024-07-25 15:25:06.876578] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.745 [2024-07-25 15:25:06.876583] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.745 [2024-07-25 15:25:06.876588] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:14.745 [2024-07-25 15:25:06.876599] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.745 qpair failed and we were unable to recover it. 
00:29:14.745 [2024-07-25 15:25:06.886584] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.745 [2024-07-25 15:25:06.886659] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.745 [2024-07-25 15:25:06.886671] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.745 [2024-07-25 15:25:06.886676] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.745 [2024-07-25 15:25:06.886680] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:14.745 [2024-07-25 15:25:06.886691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.745 qpair failed and we were unable to recover it. 
00:29:14.745 [2024-07-25 15:25:06.896541] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.745 [2024-07-25 15:25:06.896612] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.745 [2024-07-25 15:25:06.896625] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.745 [2024-07-25 15:25:06.896630] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.745 [2024-07-25 15:25:06.896634] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90
00:29:14.745 [2024-07-25 15:25:06.896645] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:14.745 qpair failed and we were unable to recover it.
00:29:14.745 [2024-07-25 15:25:06.906570] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.745 [2024-07-25 15:25:06.906649] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.745 [2024-07-25 15:25:06.906662] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.745 [2024-07-25 15:25:06.906667] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.745 [2024-07-25 15:25:06.906671] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90
00:29:14.745 [2024-07-25 15:25:06.906683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:14.745 qpair failed and we were unable to recover it.
00:29:14.745 [2024-07-25 15:25:06.916514] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.745 [2024-07-25 15:25:06.916614] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.745 [2024-07-25 15:25:06.916627] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.745 [2024-07-25 15:25:06.916632] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.746 [2024-07-25 15:25:06.916637] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90
00:29:14.746 [2024-07-25 15:25:06.916649] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:14.746 qpair failed and we were unable to recover it.
00:29:14.746 [2024-07-25 15:25:06.926675] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.746 [2024-07-25 15:25:06.926752] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.746 [2024-07-25 15:25:06.926765] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.746 [2024-07-25 15:25:06.926770] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.746 [2024-07-25 15:25:06.926774] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90
00:29:14.746 [2024-07-25 15:25:06.926786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:14.746 qpair failed and we were unable to recover it.
00:29:15.007 [2024-07-25 15:25:06.936653] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:15.007 [2024-07-25 15:25:06.936731] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:15.007 [2024-07-25 15:25:06.936745] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:15.007 [2024-07-25 15:25:06.936750] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:15.007 [2024-07-25 15:25:06.936754] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90
00:29:15.007 [2024-07-25 15:25:06.936766] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:15.007 qpair failed and we were unable to recover it.
00:29:15.007 [2024-07-25 15:25:06.946695] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:15.007 [2024-07-25 15:25:06.946821] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:15.007 [2024-07-25 15:25:06.946840] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:15.007 [2024-07-25 15:25:06.946846] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:15.007 [2024-07-25 15:25:06.946850] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90
00:29:15.007 [2024-07-25 15:25:06.946866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:15.007 qpair failed and we were unable to recover it.
00:29:15.007 [2024-07-25 15:25:06.956742] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:15.007 [2024-07-25 15:25:06.956828] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:15.007 [2024-07-25 15:25:06.956842] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:15.007 [2024-07-25 15:25:06.956852] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:15.007 [2024-07-25 15:25:06.956856] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90
00:29:15.007 [2024-07-25 15:25:06.956868] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:15.007 qpair failed and we were unable to recover it.
00:29:15.007 [2024-07-25 15:25:06.966820] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:15.007 [2024-07-25 15:25:06.966911] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:15.007 [2024-07-25 15:25:06.966930] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:15.007 [2024-07-25 15:25:06.966937] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:15.007 [2024-07-25 15:25:06.966941] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90
00:29:15.007 [2024-07-25 15:25:06.966956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:15.007 qpair failed and we were unable to recover it.
00:29:15.007 [2024-07-25 15:25:06.976785] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:15.007 [2024-07-25 15:25:06.976868] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:15.007 [2024-07-25 15:25:06.976887] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:15.007 [2024-07-25 15:25:06.976893] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:15.007 [2024-07-25 15:25:06.976898] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90
00:29:15.008 [2024-07-25 15:25:06.976913] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:15.008 qpair failed and we were unable to recover it.
00:29:15.008 [2024-07-25 15:25:06.986751] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:15.008 [2024-07-25 15:25:06.986851] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:15.008 [2024-07-25 15:25:06.986871] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:15.008 [2024-07-25 15:25:06.986877] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:15.008 [2024-07-25 15:25:06.986881] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90
00:29:15.008 [2024-07-25 15:25:06.986897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:15.008 qpair failed and we were unable to recover it.
00:29:15.008 [2024-07-25 15:25:06.996868] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:15.008 [2024-07-25 15:25:06.996995] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:15.008 [2024-07-25 15:25:06.997015] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:15.008 [2024-07-25 15:25:06.997021] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:15.008 [2024-07-25 15:25:06.997025] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90
00:29:15.008 [2024-07-25 15:25:06.997040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:15.008 qpair failed and we were unable to recover it.
00:29:15.008 [2024-07-25 15:25:07.006911] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:15.008 [2024-07-25 15:25:07.007003] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:15.008 [2024-07-25 15:25:07.007021] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:15.008 [2024-07-25 15:25:07.007028] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:15.008 [2024-07-25 15:25:07.007032] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90
00:29:15.008 [2024-07-25 15:25:07.007048] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:15.008 qpair failed and we were unable to recover it.
00:29:15.008 [2024-07-25 15:25:07.016861] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:15.008 [2024-07-25 15:25:07.016938] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:15.008 [2024-07-25 15:25:07.016957] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:15.008 [2024-07-25 15:25:07.016963] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:15.008 [2024-07-25 15:25:07.016967] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90
00:29:15.008 [2024-07-25 15:25:07.016983] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:15.008 qpair failed and we were unable to recover it.
00:29:15.008 [2024-07-25 15:25:07.026936] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:15.008 [2024-07-25 15:25:07.027049] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:15.008 [2024-07-25 15:25:07.027068] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:15.008 [2024-07-25 15:25:07.027075] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:15.008 [2024-07-25 15:25:07.027080] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90
00:29:15.008 [2024-07-25 15:25:07.027095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:15.008 qpair failed and we were unable to recover it.
00:29:15.008 [2024-07-25 15:25:07.036948] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:15.008 [2024-07-25 15:25:07.037036] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:15.008 [2024-07-25 15:25:07.037049] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:15.008 [2024-07-25 15:25:07.037055] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:15.008 [2024-07-25 15:25:07.037059] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90
00:29:15.008 [2024-07-25 15:25:07.037071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:15.008 qpair failed and we were unable to recover it.
00:29:15.008 [2024-07-25 15:25:07.046926] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:15.008 [2024-07-25 15:25:07.047023] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:15.008 [2024-07-25 15:25:07.047042] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:15.008 [2024-07-25 15:25:07.047048] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:15.008 [2024-07-25 15:25:07.047052] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90
00:29:15.008 [2024-07-25 15:25:07.047064] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:15.008 qpair failed and we were unable to recover it.
00:29:15.008 [2024-07-25 15:25:07.056992] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:15.008 [2024-07-25 15:25:07.057065] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:15.008 [2024-07-25 15:25:07.057077] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:15.008 [2024-07-25 15:25:07.057083] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:15.008 [2024-07-25 15:25:07.057087] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90
00:29:15.008 [2024-07-25 15:25:07.057098] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:15.008 qpair failed and we were unable to recover it.
00:29:15.008 [2024-07-25 15:25:07.067026] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:15.008 [2024-07-25 15:25:07.067099] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:15.008 [2024-07-25 15:25:07.067112] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:15.008 [2024-07-25 15:25:07.067117] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:15.008 [2024-07-25 15:25:07.067121] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90
00:29:15.008 [2024-07-25 15:25:07.067133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:15.008 qpair failed and we were unable to recover it.
00:29:15.008 [2024-07-25 15:25:07.077049] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:15.008 [2024-07-25 15:25:07.077126] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:15.008 [2024-07-25 15:25:07.077139] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:15.008 [2024-07-25 15:25:07.077144] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:15.008 [2024-07-25 15:25:07.077148] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90
00:29:15.008 [2024-07-25 15:25:07.077159] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:15.008 qpair failed and we were unable to recover it.
00:29:15.008 [2024-07-25 15:25:07.087178] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:15.008 [2024-07-25 15:25:07.087259] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:15.008 [2024-07-25 15:25:07.087272] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:15.008 [2024-07-25 15:25:07.087277] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:15.008 [2024-07-25 15:25:07.087281] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90
00:29:15.008 [2024-07-25 15:25:07.087296] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:15.008 qpair failed and we were unable to recover it.
00:29:15.008 [2024-07-25 15:25:07.097103] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:15.008 [2024-07-25 15:25:07.097177] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:15.008 [2024-07-25 15:25:07.097190] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:15.008 [2024-07-25 15:25:07.097196] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:15.008 [2024-07-25 15:25:07.097203] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90
00:29:15.008 [2024-07-25 15:25:07.097215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:15.008 qpair failed and we were unable to recover it.
00:29:15.009 [2024-07-25 15:25:07.107128] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:15.009 [2024-07-25 15:25:07.107207] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:15.009 [2024-07-25 15:25:07.107220] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:15.009 [2024-07-25 15:25:07.107225] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:15.009 [2024-07-25 15:25:07.107229] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90
00:29:15.009 [2024-07-25 15:25:07.107241] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:15.009 qpair failed and we were unable to recover it.
00:29:15.009 [2024-07-25 15:25:07.117219] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:15.009 [2024-07-25 15:25:07.117345] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:15.009 [2024-07-25 15:25:07.117357] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:15.009 [2024-07-25 15:25:07.117362] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:15.009 [2024-07-25 15:25:07.117366] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90
00:29:15.009 [2024-07-25 15:25:07.117377] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:15.009 qpair failed and we were unable to recover it.
00:29:15.009 [2024-07-25 15:25:07.127248] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:15.009 [2024-07-25 15:25:07.127342] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:15.009 [2024-07-25 15:25:07.127356] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:15.009 [2024-07-25 15:25:07.127361] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:15.009 [2024-07-25 15:25:07.127365] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90
00:29:15.009 [2024-07-25 15:25:07.127377] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:15.009 qpair failed and we were unable to recover it.
00:29:15.009 [2024-07-25 15:25:07.137220] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:15.009 [2024-07-25 15:25:07.137292] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:15.009 [2024-07-25 15:25:07.137308] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:15.009 [2024-07-25 15:25:07.137313] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:15.009 [2024-07-25 15:25:07.137317] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90
00:29:15.009 [2024-07-25 15:25:07.137329] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:15.009 qpair failed and we were unable to recover it.
00:29:15.009 [2024-07-25 15:25:07.147314] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:15.009 [2024-07-25 15:25:07.147521] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:15.009 [2024-07-25 15:25:07.147534] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:15.009 [2024-07-25 15:25:07.147539] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:15.009 [2024-07-25 15:25:07.147543] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90
00:29:15.009 [2024-07-25 15:25:07.147554] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:15.009 qpair failed and we were unable to recover it.
00:29:15.009 [2024-07-25 15:25:07.157295] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:15.009 [2024-07-25 15:25:07.157374] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:15.009 [2024-07-25 15:25:07.157386] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:15.009 [2024-07-25 15:25:07.157391] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:15.009 [2024-07-25 15:25:07.157395] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90
00:29:15.009 [2024-07-25 15:25:07.157407] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:15.009 qpair failed and we were unable to recover it.
00:29:15.009 [2024-07-25 15:25:07.167327] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:15.009 [2024-07-25 15:25:07.167438] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:15.009 [2024-07-25 15:25:07.167451] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:15.009 [2024-07-25 15:25:07.167456] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:15.009 [2024-07-25 15:25:07.167461] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90
00:29:15.009 [2024-07-25 15:25:07.167472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:15.009 qpair failed and we were unable to recover it.
00:29:15.009 [2024-07-25 15:25:07.177343] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:15.009 [2024-07-25 15:25:07.177433] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:15.009 [2024-07-25 15:25:07.177445] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:15.009 [2024-07-25 15:25:07.177450] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:15.009 [2024-07-25 15:25:07.177454] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90
00:29:15.009 [2024-07-25 15:25:07.177470] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:15.009 qpair failed and we were unable to recover it.
00:29:15.009 [2024-07-25 15:25:07.187357] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:15.009 [2024-07-25 15:25:07.187434] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:15.009 [2024-07-25 15:25:07.187447] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:15.009 [2024-07-25 15:25:07.187452] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:15.009 [2024-07-25 15:25:07.187456] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90
00:29:15.009 [2024-07-25 15:25:07.187468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:15.009 qpair failed and we were unable to recover it.
00:29:15.272 [2024-07-25 15:25:07.197281] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:15.272 [2024-07-25 15:25:07.197383] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:15.272 [2024-07-25 15:25:07.197396] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:15.272 [2024-07-25 15:25:07.197402] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:15.272 [2024-07-25 15:25:07.197406] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90
00:29:15.272 [2024-07-25 15:25:07.197418] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:15.272 qpair failed and we were unable to recover it.
00:29:15.272 [2024-07-25 15:25:07.207384] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:15.272 [2024-07-25 15:25:07.207463] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:15.272 [2024-07-25 15:25:07.207476] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:15.272 [2024-07-25 15:25:07.207481] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:15.272 [2024-07-25 15:25:07.207486] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90
00:29:15.272 [2024-07-25 15:25:07.207497] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:15.272 qpair failed and we were unable to recover it.
00:29:15.272 [2024-07-25 15:25:07.217424] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:15.272 [2024-07-25 15:25:07.217497] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:15.272 [2024-07-25 15:25:07.217510] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:15.272 [2024-07-25 15:25:07.217515] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:15.272 [2024-07-25 15:25:07.217519] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90
00:29:15.272 [2024-07-25 15:25:07.217531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:15.272 qpair failed and we were unable to recover it.
00:29:15.272 [2024-07-25 15:25:07.227460] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:15.272 [2024-07-25 15:25:07.227538] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:15.272 [2024-07-25 15:25:07.227550] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:15.272 [2024-07-25 15:25:07.227556] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:15.272 [2024-07-25 15:25:07.227560] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90
00:29:15.272 [2024-07-25 15:25:07.227571] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:15.272 qpair failed and we were unable to recover it.
00:29:15.272 [2024-07-25 15:25:07.237477] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:15.272 [2024-07-25 15:25:07.237552] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:15.272 [2024-07-25 15:25:07.237564] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:15.272 [2024-07-25 15:25:07.237569] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:15.272 [2024-07-25 15:25:07.237574] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90
00:29:15.272 [2024-07-25 15:25:07.237585] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:15.272 qpair failed and we were unable to recover it.
00:29:15.272 [2024-07-25 15:25:07.247532] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:15.272 [2024-07-25 15:25:07.247607] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:15.272 [2024-07-25 15:25:07.247620] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:15.272 [2024-07-25 15:25:07.247625] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:15.272 [2024-07-25 15:25:07.247629] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90
00:29:15.272 [2024-07-25 15:25:07.247641] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:15.272 qpair failed and we were unable to recover it.
00:29:15.272 [2024-07-25 15:25:07.257540] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.272 [2024-07-25 15:25:07.257617] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.272 [2024-07-25 15:25:07.257630] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.272 [2024-07-25 15:25:07.257635] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.272 [2024-07-25 15:25:07.257639] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:15.272 [2024-07-25 15:25:07.257650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.272 qpair failed and we were unable to recover it. 
00:29:15.272 [2024-07-25 15:25:07.267562] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.272 [2024-07-25 15:25:07.267639] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.272 [2024-07-25 15:25:07.267651] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.272 [2024-07-25 15:25:07.267656] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.272 [2024-07-25 15:25:07.267663] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:15.272 [2024-07-25 15:25:07.267675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.272 qpair failed and we were unable to recover it. 
00:29:15.272 [2024-07-25 15:25:07.277650] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.272 [2024-07-25 15:25:07.277771] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.272 [2024-07-25 15:25:07.277784] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.272 [2024-07-25 15:25:07.277789] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.272 [2024-07-25 15:25:07.277793] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:15.273 [2024-07-25 15:25:07.277804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.273 qpair failed and we were unable to recover it. 
00:29:15.273 [2024-07-25 15:25:07.287650] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.273 [2024-07-25 15:25:07.287733] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.273 [2024-07-25 15:25:07.287752] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.273 [2024-07-25 15:25:07.287758] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.273 [2024-07-25 15:25:07.287763] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:15.273 [2024-07-25 15:25:07.287778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.273 qpair failed and we were unable to recover it. 
00:29:15.273 [2024-07-25 15:25:07.297696] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.273 [2024-07-25 15:25:07.297775] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.273 [2024-07-25 15:25:07.297789] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.273 [2024-07-25 15:25:07.297794] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.273 [2024-07-25 15:25:07.297798] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:15.273 [2024-07-25 15:25:07.297810] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.273 qpair failed and we were unable to recover it. 
00:29:15.273 [2024-07-25 15:25:07.307659] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.273 [2024-07-25 15:25:07.307731] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.273 [2024-07-25 15:25:07.307746] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.273 [2024-07-25 15:25:07.307751] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.273 [2024-07-25 15:25:07.307755] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:15.273 [2024-07-25 15:25:07.307767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.273 qpair failed and we were unable to recover it. 
00:29:15.273 [2024-07-25 15:25:07.317700] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.273 [2024-07-25 15:25:07.317776] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.273 [2024-07-25 15:25:07.317789] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.273 [2024-07-25 15:25:07.317794] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.273 [2024-07-25 15:25:07.317799] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:15.273 [2024-07-25 15:25:07.317810] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.273 qpair failed and we were unable to recover it. 
00:29:15.273 [2024-07-25 15:25:07.327796] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.273 [2024-07-25 15:25:07.327872] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.273 [2024-07-25 15:25:07.327886] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.273 [2024-07-25 15:25:07.327891] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.273 [2024-07-25 15:25:07.327895] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:15.273 [2024-07-25 15:25:07.327907] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.273 qpair failed and we were unable to recover it. 
00:29:15.273 [2024-07-25 15:25:07.337766] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.273 [2024-07-25 15:25:07.337836] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.273 [2024-07-25 15:25:07.337850] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.273 [2024-07-25 15:25:07.337855] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.273 [2024-07-25 15:25:07.337859] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:15.273 [2024-07-25 15:25:07.337871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.273 qpair failed and we were unable to recover it. 
00:29:15.273 [2024-07-25 15:25:07.347811] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.273 [2024-07-25 15:25:07.347885] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.273 [2024-07-25 15:25:07.347899] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.273 [2024-07-25 15:25:07.347904] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.273 [2024-07-25 15:25:07.347908] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:15.273 [2024-07-25 15:25:07.347920] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.273 qpair failed and we were unable to recover it. 
00:29:15.273 [2024-07-25 15:25:07.357752] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.273 [2024-07-25 15:25:07.357828] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.273 [2024-07-25 15:25:07.357842] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.273 [2024-07-25 15:25:07.357850] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.273 [2024-07-25 15:25:07.357854] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:15.273 [2024-07-25 15:25:07.357866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.273 qpair failed and we were unable to recover it. 
00:29:15.273 [2024-07-25 15:25:07.367869] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.273 [2024-07-25 15:25:07.367953] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.273 [2024-07-25 15:25:07.367971] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.273 [2024-07-25 15:25:07.367978] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.273 [2024-07-25 15:25:07.367982] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:15.273 [2024-07-25 15:25:07.367998] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.273 qpair failed and we were unable to recover it. 
00:29:15.273 [2024-07-25 15:25:07.377873] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.273 [2024-07-25 15:25:07.377946] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.273 [2024-07-25 15:25:07.377960] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.273 [2024-07-25 15:25:07.377965] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.273 [2024-07-25 15:25:07.377969] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:15.273 [2024-07-25 15:25:07.377982] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.273 qpair failed and we were unable to recover it. 
00:29:15.273 [2024-07-25 15:25:07.387893] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.273 [2024-07-25 15:25:07.387972] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.273 [2024-07-25 15:25:07.387991] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.273 [2024-07-25 15:25:07.387997] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.273 [2024-07-25 15:25:07.388002] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:15.273 [2024-07-25 15:25:07.388017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.273 qpair failed and we were unable to recover it. 
00:29:15.273 [2024-07-25 15:25:07.397815] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.273 [2024-07-25 15:25:07.397899] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.273 [2024-07-25 15:25:07.397913] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.273 [2024-07-25 15:25:07.397918] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.273 [2024-07-25 15:25:07.397923] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:15.273 [2024-07-25 15:25:07.397936] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.274 qpair failed and we were unable to recover it. 
00:29:15.274 [2024-07-25 15:25:07.408009] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.274 [2024-07-25 15:25:07.408087] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.274 [2024-07-25 15:25:07.408100] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.274 [2024-07-25 15:25:07.408106] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.274 [2024-07-25 15:25:07.408110] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:15.274 [2024-07-25 15:25:07.408122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.274 qpair failed and we were unable to recover it. 
00:29:15.274 [2024-07-25 15:25:07.418054] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.274 [2024-07-25 15:25:07.418138] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.274 [2024-07-25 15:25:07.418151] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.274 [2024-07-25 15:25:07.418160] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.274 [2024-07-25 15:25:07.418164] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:15.274 [2024-07-25 15:25:07.418176] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.274 qpair failed and we were unable to recover it. 
00:29:15.274 [2024-07-25 15:25:07.428028] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.274 [2024-07-25 15:25:07.428107] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.274 [2024-07-25 15:25:07.428120] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.274 [2024-07-25 15:25:07.428125] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.274 [2024-07-25 15:25:07.428130] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:15.274 [2024-07-25 15:25:07.428141] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.274 qpair failed and we were unable to recover it. 
00:29:15.274 [2024-07-25 15:25:07.437965] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.274 [2024-07-25 15:25:07.438088] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.274 [2024-07-25 15:25:07.438101] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.274 [2024-07-25 15:25:07.438106] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.274 [2024-07-25 15:25:07.438111] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:15.274 [2024-07-25 15:25:07.438122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.274 qpair failed and we were unable to recover it. 
00:29:15.274 [2024-07-25 15:25:07.448097] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.274 [2024-07-25 15:25:07.448173] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.274 [2024-07-25 15:25:07.448189] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.274 [2024-07-25 15:25:07.448194] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.274 [2024-07-25 15:25:07.448198] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:15.274 [2024-07-25 15:25:07.448214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.274 qpair failed and we were unable to recover it. 
00:29:15.274 [2024-07-25 15:25:07.458074] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.274 [2024-07-25 15:25:07.458148] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.274 [2024-07-25 15:25:07.458160] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.274 [2024-07-25 15:25:07.458165] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.274 [2024-07-25 15:25:07.458169] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:15.274 [2024-07-25 15:25:07.458181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.274 qpair failed and we were unable to recover it. 
00:29:15.537 [2024-07-25 15:25:07.468103] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.537 [2024-07-25 15:25:07.468212] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.537 [2024-07-25 15:25:07.468225] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.537 [2024-07-25 15:25:07.468231] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.537 [2024-07-25 15:25:07.468235] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:15.537 [2024-07-25 15:25:07.468247] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.537 qpair failed and we were unable to recover it. 
00:29:15.537 [2024-07-25 15:25:07.478313] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.537 [2024-07-25 15:25:07.478392] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.537 [2024-07-25 15:25:07.478405] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.537 [2024-07-25 15:25:07.478410] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.537 [2024-07-25 15:25:07.478414] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:15.537 [2024-07-25 15:25:07.478425] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.537 qpair failed and we were unable to recover it. 
00:29:15.537 [2024-07-25 15:25:07.488139] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.537 [2024-07-25 15:25:07.488240] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.537 [2024-07-25 15:25:07.488253] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.537 [2024-07-25 15:25:07.488258] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.537 [2024-07-25 15:25:07.488262] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:15.537 [2024-07-25 15:25:07.488273] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.537 qpair failed and we were unable to recover it. 
00:29:15.537 [2024-07-25 15:25:07.498207] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.537 [2024-07-25 15:25:07.498282] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.537 [2024-07-25 15:25:07.498295] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.537 [2024-07-25 15:25:07.498300] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.537 [2024-07-25 15:25:07.498304] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:15.537 [2024-07-25 15:25:07.498315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.537 qpair failed and we were unable to recover it. 
00:29:15.537 [2024-07-25 15:25:07.508194] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.537 [2024-07-25 15:25:07.508272] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.537 [2024-07-25 15:25:07.508285] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.537 [2024-07-25 15:25:07.508290] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.537 [2024-07-25 15:25:07.508294] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:15.537 [2024-07-25 15:25:07.508305] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.537 qpair failed and we were unable to recover it. 
00:29:15.537 [2024-07-25 15:25:07.518267] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.537 [2024-07-25 15:25:07.518349] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.537 [2024-07-25 15:25:07.518362] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.537 [2024-07-25 15:25:07.518367] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.537 [2024-07-25 15:25:07.518371] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:15.537 [2024-07-25 15:25:07.518383] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.537 qpair failed and we were unable to recover it. 
00:29:15.537 [2024-07-25 15:25:07.528346] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.537 [2024-07-25 15:25:07.528450] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.537 [2024-07-25 15:25:07.528462] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.537 [2024-07-25 15:25:07.528468] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.537 [2024-07-25 15:25:07.528472] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:15.537 [2024-07-25 15:25:07.528483] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.537 qpair failed and we were unable to recover it. 
00:29:15.537 [2024-07-25 15:25:07.538335] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:15.537 [2024-07-25 15:25:07.538406] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:15.537 [2024-07-25 15:25:07.538422] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:15.537 [2024-07-25 15:25:07.538427] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:15.537 [2024-07-25 15:25:07.538432] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90
00:29:15.537 [2024-07-25 15:25:07.538443] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:15.537 qpair failed and we were unable to recover it.
00:29:15.537 [2024-07-25 15:25:07.548342] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:15.537 [2024-07-25 15:25:07.548416] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:15.537 [2024-07-25 15:25:07.548429] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:15.537 [2024-07-25 15:25:07.548434] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:15.537 [2024-07-25 15:25:07.548438] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90
00:29:15.537 [2024-07-25 15:25:07.548449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:15.537 qpair failed and we were unable to recover it.
00:29:15.537 [2024-07-25 15:25:07.558255] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:15.537 [2024-07-25 15:25:07.558330] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:15.537 [2024-07-25 15:25:07.558342] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:15.537 [2024-07-25 15:25:07.558347] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:15.537 [2024-07-25 15:25:07.558351] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90
00:29:15.537 [2024-07-25 15:25:07.558363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:15.537 qpair failed and we were unable to recover it.
00:29:15.537 [2024-07-25 15:25:07.568309] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:15.537 [2024-07-25 15:25:07.568383] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:15.537 [2024-07-25 15:25:07.568396] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:15.537 [2024-07-25 15:25:07.568401] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:15.537 [2024-07-25 15:25:07.568405] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90
00:29:15.538 [2024-07-25 15:25:07.568416] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:15.538 qpair failed and we were unable to recover it.
00:29:15.538 [2024-07-25 15:25:07.578443] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:15.538 [2024-07-25 15:25:07.578554] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:15.538 [2024-07-25 15:25:07.578567] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:15.538 [2024-07-25 15:25:07.578572] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:15.538 [2024-07-25 15:25:07.578576] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90
00:29:15.538 [2024-07-25 15:25:07.578590] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:15.538 qpair failed and we were unable to recover it.
00:29:15.538 [2024-07-25 15:25:07.588518] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:15.538 [2024-07-25 15:25:07.588630] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:15.538 [2024-07-25 15:25:07.588643] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:15.538 [2024-07-25 15:25:07.588648] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:15.538 [2024-07-25 15:25:07.588652] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90
00:29:15.538 [2024-07-25 15:25:07.588663] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:15.538 qpair failed and we were unable to recover it.
00:29:15.538 [2024-07-25 15:25:07.598491] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:15.538 [2024-07-25 15:25:07.598578] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:15.538 [2024-07-25 15:25:07.598591] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:15.538 [2024-07-25 15:25:07.598596] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:15.538 [2024-07-25 15:25:07.598601] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90
00:29:15.538 [2024-07-25 15:25:07.598612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:15.538 qpair failed and we were unable to recover it.
00:29:15.538 [2024-07-25 15:25:07.608671] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:15.538 [2024-07-25 15:25:07.608755] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:15.538 [2024-07-25 15:25:07.608768] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:15.538 [2024-07-25 15:25:07.608773] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:15.538 [2024-07-25 15:25:07.608777] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90
00:29:15.538 [2024-07-25 15:25:07.608789] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:15.538 qpair failed and we were unable to recover it.
00:29:15.538 [2024-07-25 15:25:07.618543] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:15.538 [2024-07-25 15:25:07.618620] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:15.538 [2024-07-25 15:25:07.618633] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:15.538 [2024-07-25 15:25:07.618638] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:15.538 [2024-07-25 15:25:07.618642] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90
00:29:15.538 [2024-07-25 15:25:07.618654] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:15.538 qpair failed and we were unable to recover it.
00:29:15.538 [2024-07-25 15:25:07.628565] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:15.538 [2024-07-25 15:25:07.628639] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:15.538 [2024-07-25 15:25:07.628654] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:15.538 [2024-07-25 15:25:07.628660] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:15.538 [2024-07-25 15:25:07.628664] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90
00:29:15.538 [2024-07-25 15:25:07.628676] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:15.538 qpair failed and we were unable to recover it.
00:29:15.538 [2024-07-25 15:25:07.638619] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:15.538 [2024-07-25 15:25:07.638697] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:15.538 [2024-07-25 15:25:07.638709] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:15.538 [2024-07-25 15:25:07.638714] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:15.538 [2024-07-25 15:25:07.638718] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90
00:29:15.538 [2024-07-25 15:25:07.638729] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:15.538 qpair failed and we were unable to recover it.
00:29:15.538 [2024-07-25 15:25:07.648662] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:15.538 [2024-07-25 15:25:07.648751] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:15.538 [2024-07-25 15:25:07.648763] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:15.538 [2024-07-25 15:25:07.648768] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:15.538 [2024-07-25 15:25:07.648772] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90
00:29:15.538 [2024-07-25 15:25:07.648783] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:15.538 qpair failed and we were unable to recover it.
00:29:15.538 [2024-07-25 15:25:07.658520] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:15.538 [2024-07-25 15:25:07.658591] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:15.538 [2024-07-25 15:25:07.658603] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:15.538 [2024-07-25 15:25:07.658609] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:15.538 [2024-07-25 15:25:07.658613] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90
00:29:15.538 [2024-07-25 15:25:07.658624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:15.538 qpair failed and we were unable to recover it.
00:29:15.538 [2024-07-25 15:25:07.668669] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:15.538 [2024-07-25 15:25:07.668746] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:15.538 [2024-07-25 15:25:07.668759] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:15.538 [2024-07-25 15:25:07.668764] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:15.538 [2024-07-25 15:25:07.668771] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90
00:29:15.538 [2024-07-25 15:25:07.668783] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:15.538 qpair failed and we were unable to recover it.
00:29:15.538 [2024-07-25 15:25:07.678724] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:15.538 [2024-07-25 15:25:07.678801] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:15.538 [2024-07-25 15:25:07.678814] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:15.538 [2024-07-25 15:25:07.678819] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:15.538 [2024-07-25 15:25:07.678823] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90
00:29:15.538 [2024-07-25 15:25:07.678834] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:15.538 qpair failed and we were unable to recover it.
00:29:15.538 [2024-07-25 15:25:07.688775] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:15.538 [2024-07-25 15:25:07.688856] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:15.538 [2024-07-25 15:25:07.688875] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:15.538 [2024-07-25 15:25:07.688881] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:15.538 [2024-07-25 15:25:07.688886] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90
00:29:15.538 [2024-07-25 15:25:07.688901] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:15.538 qpair failed and we were unable to recover it.
00:29:15.538 [2024-07-25 15:25:07.698640] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:15.538 [2024-07-25 15:25:07.698716] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:15.539 [2024-07-25 15:25:07.698735] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:15.539 [2024-07-25 15:25:07.698741] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:15.539 [2024-07-25 15:25:07.698746] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90
00:29:15.539 [2024-07-25 15:25:07.698761] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:15.539 qpair failed and we were unable to recover it.
00:29:15.539 [2024-07-25 15:25:07.708784] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:15.539 [2024-07-25 15:25:07.708857] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:15.539 [2024-07-25 15:25:07.708871] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:15.539 [2024-07-25 15:25:07.708876] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:15.539 [2024-07-25 15:25:07.708881] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90
00:29:15.539 [2024-07-25 15:25:07.708893] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:15.539 qpair failed and we were unable to recover it.
00:29:15.539 [2024-07-25 15:25:07.718785] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:15.539 [2024-07-25 15:25:07.718873] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:15.539 [2024-07-25 15:25:07.718892] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:15.539 [2024-07-25 15:25:07.718899] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:15.539 [2024-07-25 15:25:07.718903] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90
00:29:15.539 [2024-07-25 15:25:07.718918] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:15.539 qpair failed and we were unable to recover it.
00:29:15.802 [2024-07-25 15:25:07.728882] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:15.802 [2024-07-25 15:25:07.729016] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:15.802 [2024-07-25 15:25:07.729035] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:15.802 [2024-07-25 15:25:07.729042] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:15.802 [2024-07-25 15:25:07.729046] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90
00:29:15.802 [2024-07-25 15:25:07.729062] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:15.802 qpair failed and we were unable to recover it.
00:29:15.802 [2024-07-25 15:25:07.738818] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:15.802 [2024-07-25 15:25:07.738894] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:15.802 [2024-07-25 15:25:07.738909] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:15.802 [2024-07-25 15:25:07.738915] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:15.802 [2024-07-25 15:25:07.738919] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90
00:29:15.802 [2024-07-25 15:25:07.738932] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:15.802 qpair failed and we were unable to recover it.
00:29:15.802 [2024-07-25 15:25:07.748887] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:15.802 [2024-07-25 15:25:07.748966] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:15.802 [2024-07-25 15:25:07.748985] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:15.802 [2024-07-25 15:25:07.748991] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:15.802 [2024-07-25 15:25:07.748996] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90
00:29:15.802 [2024-07-25 15:25:07.749011] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:15.802 qpair failed and we were unable to recover it.
00:29:15.802 [2024-07-25 15:25:07.758890] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:15.802 [2024-07-25 15:25:07.758976] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:15.802 [2024-07-25 15:25:07.758995] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:15.802 [2024-07-25 15:25:07.759005] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:15.802 [2024-07-25 15:25:07.759010] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90
00:29:15.802 [2024-07-25 15:25:07.759026] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:15.802 qpair failed and we were unable to recover it.
00:29:15.802 [2024-07-25 15:25:07.769018] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:15.802 [2024-07-25 15:25:07.769145] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:15.802 [2024-07-25 15:25:07.769164] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:15.802 [2024-07-25 15:25:07.769171] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:15.803 [2024-07-25 15:25:07.769175] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90
00:29:15.803 [2024-07-25 15:25:07.769191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:15.803 qpair failed and we were unable to recover it.
00:29:15.803 [2024-07-25 15:25:07.778938] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:15.803 [2024-07-25 15:25:07.779015] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:15.803 [2024-07-25 15:25:07.779029] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:15.803 [2024-07-25 15:25:07.779035] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:15.803 [2024-07-25 15:25:07.779040] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90
00:29:15.803 [2024-07-25 15:25:07.779052] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:15.803 qpair failed and we were unable to recover it.
00:29:15.803 [2024-07-25 15:25:07.788977] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:15.803 [2024-07-25 15:25:07.789053] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:15.803 [2024-07-25 15:25:07.789067] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:15.803 [2024-07-25 15:25:07.789072] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:15.803 [2024-07-25 15:25:07.789076] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90
00:29:15.803 [2024-07-25 15:25:07.789088] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:15.803 qpair failed and we were unable to recover it.
00:29:15.803 [2024-07-25 15:25:07.799055] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:15.803 [2024-07-25 15:25:07.799151] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:15.803 [2024-07-25 15:25:07.799164] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:15.803 [2024-07-25 15:25:07.799169] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:15.803 [2024-07-25 15:25:07.799173] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90
00:29:15.803 [2024-07-25 15:25:07.799184] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:15.803 qpair failed and we were unable to recover it.
00:29:15.803 [2024-07-25 15:25:07.809107] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:15.803 [2024-07-25 15:25:07.809186] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:15.803 [2024-07-25 15:25:07.809199] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:15.803 [2024-07-25 15:25:07.809208] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:15.803 [2024-07-25 15:25:07.809213] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90
00:29:15.803 [2024-07-25 15:25:07.809225] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:15.803 qpair failed and we were unable to recover it.
00:29:15.803 [2024-07-25 15:25:07.819076] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:15.803 [2024-07-25 15:25:07.819149] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:15.803 [2024-07-25 15:25:07.819162] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:15.803 [2024-07-25 15:25:07.819168] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:15.803 [2024-07-25 15:25:07.819172] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90
00:29:15.803 [2024-07-25 15:25:07.819183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:15.803 qpair failed and we were unable to recover it.
00:29:15.803 [2024-07-25 15:25:07.829164] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:15.803 [2024-07-25 15:25:07.829243] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:15.803 [2024-07-25 15:25:07.829256] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:15.803 [2024-07-25 15:25:07.829262] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:15.803 [2024-07-25 15:25:07.829266] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90
00:29:15.803 [2024-07-25 15:25:07.829277] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:15.803 qpair failed and we were unable to recover it.
00:29:15.803 [2024-07-25 15:25:07.839149] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:15.803 [2024-07-25 15:25:07.839238] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:15.803 [2024-07-25 15:25:07.839251] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:15.803 [2024-07-25 15:25:07.839256] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:15.803 [2024-07-25 15:25:07.839260] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90
00:29:15.803 [2024-07-25 15:25:07.839272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:15.803 qpair failed and we were unable to recover it.
00:29:15.803 [2024-07-25 15:25:07.849212] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:15.803 [2024-07-25 15:25:07.849288] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:15.803 [2024-07-25 15:25:07.849300] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:15.803 [2024-07-25 15:25:07.849309] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:15.803 [2024-07-25 15:25:07.849313] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90
00:29:15.803 [2024-07-25 15:25:07.849324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:15.803 qpair failed and we were unable to recover it.
00:29:15.803 [2024-07-25 15:25:07.859187] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:15.803 [2024-07-25 15:25:07.859261] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:15.803 [2024-07-25 15:25:07.859275] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:15.803 [2024-07-25 15:25:07.859280] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:15.803 [2024-07-25 15:25:07.859284] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90
00:29:15.803 [2024-07-25 15:25:07.859295] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:15.803 qpair failed and we were unable to recover it.
00:29:15.803 [2024-07-25 15:25:07.869214] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:15.803 [2024-07-25 15:25:07.869290] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:15.803 [2024-07-25 15:25:07.869303] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:15.803 [2024-07-25 15:25:07.869308] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:15.803 [2024-07-25 15:25:07.869312] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90
00:29:15.803 [2024-07-25 15:25:07.869324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:15.803 qpair failed and we were unable to recover it.
00:29:15.803 [2024-07-25 15:25:07.879264] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:15.803 [2024-07-25 15:25:07.879342] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:15.803 [2024-07-25 15:25:07.879355] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:15.803 [2024-07-25 15:25:07.879360] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:15.803 [2024-07-25 15:25:07.879364] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90
00:29:15.803 [2024-07-25 15:25:07.879375] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:15.803 qpair failed and we were unable to recover it.
00:29:15.803 [2024-07-25 15:25:07.889310] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:15.803 [2024-07-25 15:25:07.889404] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:15.803 [2024-07-25 15:25:07.889417] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:15.803 [2024-07-25 15:25:07.889422] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:15.803 [2024-07-25 15:25:07.889426] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90
00:29:15.803 [2024-07-25 15:25:07.889438] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:15.803 qpair failed and we were unable to recover it.
00:29:15.803 [2024-07-25 15:25:07.899176] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.803 [2024-07-25 15:25:07.899251] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.804 [2024-07-25 15:25:07.899264] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.804 [2024-07-25 15:25:07.899269] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.804 [2024-07-25 15:25:07.899273] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:15.804 [2024-07-25 15:25:07.899285] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.804 qpair failed and we were unable to recover it. 
00:29:15.804 [2024-07-25 15:25:07.909354] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.804 [2024-07-25 15:25:07.909430] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.804 [2024-07-25 15:25:07.909443] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.804 [2024-07-25 15:25:07.909448] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.804 [2024-07-25 15:25:07.909452] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:15.804 [2024-07-25 15:25:07.909465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.804 qpair failed and we were unable to recover it. 
00:29:15.804 [2024-07-25 15:25:07.919412] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.804 [2024-07-25 15:25:07.919490] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.804 [2024-07-25 15:25:07.919503] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.804 [2024-07-25 15:25:07.919508] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.804 [2024-07-25 15:25:07.919513] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:15.804 [2024-07-25 15:25:07.919524] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.804 qpair failed and we were unable to recover it. 
00:29:15.804 [2024-07-25 15:25:07.929419] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.804 [2024-07-25 15:25:07.929495] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.804 [2024-07-25 15:25:07.929508] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.804 [2024-07-25 15:25:07.929513] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.804 [2024-07-25 15:25:07.929517] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:15.804 [2024-07-25 15:25:07.929528] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.804 qpair failed and we were unable to recover it. 
00:29:15.804 [2024-07-25 15:25:07.939437] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.804 [2024-07-25 15:25:07.939520] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.804 [2024-07-25 15:25:07.939535] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.804 [2024-07-25 15:25:07.939540] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.804 [2024-07-25 15:25:07.939545] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:15.804 [2024-07-25 15:25:07.939556] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.804 qpair failed and we were unable to recover it. 
00:29:15.804 [2024-07-25 15:25:07.949469] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.804 [2024-07-25 15:25:07.949543] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.804 [2024-07-25 15:25:07.949556] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.804 [2024-07-25 15:25:07.949561] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.804 [2024-07-25 15:25:07.949565] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:15.804 [2024-07-25 15:25:07.949577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.804 qpair failed and we were unable to recover it. 
00:29:15.804 [2024-07-25 15:25:07.959493] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.804 [2024-07-25 15:25:07.959570] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.804 [2024-07-25 15:25:07.959583] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.804 [2024-07-25 15:25:07.959588] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.804 [2024-07-25 15:25:07.959592] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:15.804 [2024-07-25 15:25:07.959604] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.804 qpair failed and we were unable to recover it. 
00:29:15.804 [2024-07-25 15:25:07.969548] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.804 [2024-07-25 15:25:07.969619] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.804 [2024-07-25 15:25:07.969632] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.804 [2024-07-25 15:25:07.969637] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.804 [2024-07-25 15:25:07.969641] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:15.804 [2024-07-25 15:25:07.969653] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.804 qpair failed and we were unable to recover it. 
00:29:15.804 [2024-07-25 15:25:07.979524] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.804 [2024-07-25 15:25:07.979634] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.804 [2024-07-25 15:25:07.979646] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.804 [2024-07-25 15:25:07.979652] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.804 [2024-07-25 15:25:07.979656] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:15.804 [2024-07-25 15:25:07.979671] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.804 qpair failed and we were unable to recover it. 
00:29:15.804 [2024-07-25 15:25:07.989543] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.804 [2024-07-25 15:25:07.989618] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.804 [2024-07-25 15:25:07.989631] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.804 [2024-07-25 15:25:07.989636] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.804 [2024-07-25 15:25:07.989640] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:15.804 [2024-07-25 15:25:07.989651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.804 qpair failed and we were unable to recover it. 
00:29:16.067 [2024-07-25 15:25:07.999565] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.067 [2024-07-25 15:25:07.999638] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.067 [2024-07-25 15:25:07.999651] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.067 [2024-07-25 15:25:07.999656] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.067 [2024-07-25 15:25:07.999660] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:16.067 [2024-07-25 15:25:07.999671] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.067 qpair failed and we were unable to recover it. 
00:29:16.067 [2024-07-25 15:25:08.009517] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.067 [2024-07-25 15:25:08.009595] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.067 [2024-07-25 15:25:08.009608] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.067 [2024-07-25 15:25:08.009613] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.067 [2024-07-25 15:25:08.009617] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:16.067 [2024-07-25 15:25:08.009629] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.067 qpair failed and we were unable to recover it. 
00:29:16.067 [2024-07-25 15:25:08.019609] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.067 [2024-07-25 15:25:08.019690] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.067 [2024-07-25 15:25:08.019704] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.067 [2024-07-25 15:25:08.019709] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.067 [2024-07-25 15:25:08.019713] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:16.067 [2024-07-25 15:25:08.019724] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.067 qpair failed and we were unable to recover it. 
00:29:16.067 [2024-07-25 15:25:08.029714] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.067 [2024-07-25 15:25:08.029821] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.067 [2024-07-25 15:25:08.029836] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.067 [2024-07-25 15:25:08.029842] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.067 [2024-07-25 15:25:08.029846] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:16.067 [2024-07-25 15:25:08.029857] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.067 qpair failed and we were unable to recover it. 
00:29:16.067 [2024-07-25 15:25:08.039694] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.068 [2024-07-25 15:25:08.039775] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.068 [2024-07-25 15:25:08.039787] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.068 [2024-07-25 15:25:08.039792] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.068 [2024-07-25 15:25:08.039796] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:16.068 [2024-07-25 15:25:08.039808] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.068 qpair failed and we were unable to recover it. 
00:29:16.068 [2024-07-25 15:25:08.049754] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.068 [2024-07-25 15:25:08.049839] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.068 [2024-07-25 15:25:08.049851] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.068 [2024-07-25 15:25:08.049856] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.068 [2024-07-25 15:25:08.049860] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:16.068 [2024-07-25 15:25:08.049870] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.068 qpair failed and we were unable to recover it. 
00:29:16.068 [2024-07-25 15:25:08.059745] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.068 [2024-07-25 15:25:08.059828] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.068 [2024-07-25 15:25:08.059847] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.068 [2024-07-25 15:25:08.059854] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.068 [2024-07-25 15:25:08.059858] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:16.068 [2024-07-25 15:25:08.059873] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.068 qpair failed and we were unable to recover it. 
00:29:16.068 [2024-07-25 15:25:08.069642] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.068 [2024-07-25 15:25:08.069722] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.068 [2024-07-25 15:25:08.069735] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.068 [2024-07-25 15:25:08.069740] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.068 [2024-07-25 15:25:08.069748] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:16.068 [2024-07-25 15:25:08.069761] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.068 qpair failed and we were unable to recover it. 
00:29:16.068 [2024-07-25 15:25:08.079669] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.068 [2024-07-25 15:25:08.079747] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.068 [2024-07-25 15:25:08.079760] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.068 [2024-07-25 15:25:08.079765] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.068 [2024-07-25 15:25:08.079770] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:16.068 [2024-07-25 15:25:08.079781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.068 qpair failed and we were unable to recover it. 
00:29:16.068 [2024-07-25 15:25:08.089892] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.068 [2024-07-25 15:25:08.089978] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.068 [2024-07-25 15:25:08.089997] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.068 [2024-07-25 15:25:08.090004] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.068 [2024-07-25 15:25:08.090008] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:16.068 [2024-07-25 15:25:08.090023] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.068 qpair failed and we were unable to recover it. 
00:29:16.068 [2024-07-25 15:25:08.099820] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.068 [2024-07-25 15:25:08.099898] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.068 [2024-07-25 15:25:08.099912] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.068 [2024-07-25 15:25:08.099918] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.068 [2024-07-25 15:25:08.099922] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:16.068 [2024-07-25 15:25:08.099935] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.068 qpair failed and we were unable to recover it. 
00:29:16.068 [2024-07-25 15:25:08.109848] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.068 [2024-07-25 15:25:08.109926] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.068 [2024-07-25 15:25:08.109939] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.068 [2024-07-25 15:25:08.109944] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.068 [2024-07-25 15:25:08.109949] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:16.068 [2024-07-25 15:25:08.109960] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.068 qpair failed and we were unable to recover it. 
00:29:16.068 [2024-07-25 15:25:08.119932] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.068 [2024-07-25 15:25:08.120022] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.068 [2024-07-25 15:25:08.120041] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.068 [2024-07-25 15:25:08.120048] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.068 [2024-07-25 15:25:08.120052] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:16.068 [2024-07-25 15:25:08.120067] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.068 qpair failed and we were unable to recover it. 
00:29:16.068 [2024-07-25 15:25:08.129957] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.068 [2024-07-25 15:25:08.130037] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.068 [2024-07-25 15:25:08.130052] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.068 [2024-07-25 15:25:08.130057] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.068 [2024-07-25 15:25:08.130061] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:16.068 [2024-07-25 15:25:08.130074] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.068 qpair failed and we were unable to recover it. 
00:29:16.068 [2024-07-25 15:25:08.139955] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.068 [2024-07-25 15:25:08.140048] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.068 [2024-07-25 15:25:08.140062] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.068 [2024-07-25 15:25:08.140067] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.068 [2024-07-25 15:25:08.140071] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:16.068 [2024-07-25 15:25:08.140082] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.068 qpair failed and we were unable to recover it. 
00:29:16.068 [2024-07-25 15:25:08.149863] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.068 [2024-07-25 15:25:08.149937] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.068 [2024-07-25 15:25:08.149950] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.068 [2024-07-25 15:25:08.149955] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.068 [2024-07-25 15:25:08.149959] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:16.068 [2024-07-25 15:25:08.149970] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.068 qpair failed and we were unable to recover it. 
00:29:16.068 [2024-07-25 15:25:08.159960] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.068 [2024-07-25 15:25:08.160041] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.068 [2024-07-25 15:25:08.160054] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.068 [2024-07-25 15:25:08.160063] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.068 [2024-07-25 15:25:08.160068] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:16.068 [2024-07-25 15:25:08.160080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.069 qpair failed and we were unable to recover it. 
00:29:16.069 [2024-07-25 15:25:08.170058] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.069 [2024-07-25 15:25:08.170145] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.069 [2024-07-25 15:25:08.170158] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.069 [2024-07-25 15:25:08.170163] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.069 [2024-07-25 15:25:08.170167] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:16.069 [2024-07-25 15:25:08.170178] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.069 qpair failed and we were unable to recover it. 
00:29:16.069 [2024-07-25 15:25:08.180094] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.069 [2024-07-25 15:25:08.180170] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.069 [2024-07-25 15:25:08.180183] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.069 [2024-07-25 15:25:08.180188] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.069 [2024-07-25 15:25:08.180193] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:16.069 [2024-07-25 15:25:08.180207] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.069 qpair failed and we were unable to recover it. 
00:29:16.069 [2024-07-25 15:25:08.190072] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.069 [2024-07-25 15:25:08.190148] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.069 [2024-07-25 15:25:08.190160] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.069 [2024-07-25 15:25:08.190166] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.069 [2024-07-25 15:25:08.190170] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:16.069 [2024-07-25 15:25:08.190181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.069 qpair failed and we were unable to recover it. 
00:29:16.069 [2024-07-25 15:25:08.199996] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.069 [2024-07-25 15:25:08.200073] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.069 [2024-07-25 15:25:08.200086] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.069 [2024-07-25 15:25:08.200091] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.069 [2024-07-25 15:25:08.200095] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:16.069 [2024-07-25 15:25:08.200106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.069 qpair failed and we were unable to recover it. 
00:29:16.069 [2024-07-25 15:25:08.210209] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.069 [2024-07-25 15:25:08.210454] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.069 [2024-07-25 15:25:08.210467] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.069 [2024-07-25 15:25:08.210472] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.069 [2024-07-25 15:25:08.210476] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:16.069 [2024-07-25 15:25:08.210488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.069 qpair failed and we were unable to recover it. 
00:29:16.069 [2024-07-25 15:25:08.220159] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.069 [2024-07-25 15:25:08.220236] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.069 [2024-07-25 15:25:08.220249] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.069 [2024-07-25 15:25:08.220254] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.069 [2024-07-25 15:25:08.220258] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:16.069 [2024-07-25 15:25:08.220270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.069 qpair failed and we were unable to recover it. 
00:29:16.069 [2024-07-25 15:25:08.230244] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.069 [2024-07-25 15:25:08.230368] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.069 [2024-07-25 15:25:08.230381] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.069 [2024-07-25 15:25:08.230387] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.069 [2024-07-25 15:25:08.230391] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:16.069 [2024-07-25 15:25:08.230402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.069 qpair failed and we were unable to recover it. 
00:29:16.069 [2024-07-25 15:25:08.240212] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.069 [2024-07-25 15:25:08.240317] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.069 [2024-07-25 15:25:08.240330] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.069 [2024-07-25 15:25:08.240335] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.069 [2024-07-25 15:25:08.240340] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:16.069 [2024-07-25 15:25:08.240352] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.069 qpair failed and we were unable to recover it. 
00:29:16.069 [2024-07-25 15:25:08.250267] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.069 [2024-07-25 15:25:08.250346] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.069 [2024-07-25 15:25:08.250359] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.069 [2024-07-25 15:25:08.250367] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.069 [2024-07-25 15:25:08.250372] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:16.069 [2024-07-25 15:25:08.250383] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.069 qpair failed and we were unable to recover it. 
00:29:16.332 [2024-07-25 15:25:08.260286] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.332 [2024-07-25 15:25:08.260364] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.332 [2024-07-25 15:25:08.260377] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.332 [2024-07-25 15:25:08.260382] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.332 [2024-07-25 15:25:08.260386] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:16.332 [2024-07-25 15:25:08.260398] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.332 qpair failed and we were unable to recover it. 
00:29:16.332 [2024-07-25 15:25:08.270292] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.332 [2024-07-25 15:25:08.270366] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.332 [2024-07-25 15:25:08.270379] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.332 [2024-07-25 15:25:08.270384] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.332 [2024-07-25 15:25:08.270388] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:16.332 [2024-07-25 15:25:08.270399] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.332 qpair failed and we were unable to recover it. 
00:29:16.332 [2024-07-25 15:25:08.280325] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.332 [2024-07-25 15:25:08.280430] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.332 [2024-07-25 15:25:08.280443] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.332 [2024-07-25 15:25:08.280448] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.332 [2024-07-25 15:25:08.280452] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:16.332 [2024-07-25 15:25:08.280463] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.332 qpair failed and we were unable to recover it. 
00:29:16.332 [2024-07-25 15:25:08.290400] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.332 [2024-07-25 15:25:08.290479] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.332 [2024-07-25 15:25:08.290492] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.332 [2024-07-25 15:25:08.290497] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.332 [2024-07-25 15:25:08.290501] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:16.332 [2024-07-25 15:25:08.290512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.332 qpair failed and we were unable to recover it. 
00:29:16.332 [2024-07-25 15:25:08.300357] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.332 [2024-07-25 15:25:08.300464] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.332 [2024-07-25 15:25:08.300478] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.332 [2024-07-25 15:25:08.300484] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.332 [2024-07-25 15:25:08.300488] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:16.332 [2024-07-25 15:25:08.300500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.332 qpair failed and we were unable to recover it. 
00:29:16.332 [2024-07-25 15:25:08.310396] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.332 [2024-07-25 15:25:08.310473] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.332 [2024-07-25 15:25:08.310486] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.332 [2024-07-25 15:25:08.310492] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.332 [2024-07-25 15:25:08.310496] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:16.332 [2024-07-25 15:25:08.310508] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.332 qpair failed and we were unable to recover it. 
00:29:16.332 [2024-07-25 15:25:08.320475] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.332 [2024-07-25 15:25:08.320553] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.332 [2024-07-25 15:25:08.320565] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.333 [2024-07-25 15:25:08.320571] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.333 [2024-07-25 15:25:08.320575] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:16.333 [2024-07-25 15:25:08.320586] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.333 qpair failed and we were unable to recover it. 
00:29:16.333 [2024-07-25 15:25:08.330513] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.333 [2024-07-25 15:25:08.330590] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.333 [2024-07-25 15:25:08.330603] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.333 [2024-07-25 15:25:08.330608] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.333 [2024-07-25 15:25:08.330612] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:16.333 [2024-07-25 15:25:08.330623] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.333 qpair failed and we were unable to recover it. 
00:29:16.333 [2024-07-25 15:25:08.340485] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.333 [2024-07-25 15:25:08.340562] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.333 [2024-07-25 15:25:08.340578] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.333 [2024-07-25 15:25:08.340584] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.333 [2024-07-25 15:25:08.340588] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:16.333 [2024-07-25 15:25:08.340601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.333 qpair failed and we were unable to recover it. 
00:29:16.333 [2024-07-25 15:25:08.350503] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.333 [2024-07-25 15:25:08.350578] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.333 [2024-07-25 15:25:08.350591] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.333 [2024-07-25 15:25:08.350596] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.333 [2024-07-25 15:25:08.350601] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:16.333 [2024-07-25 15:25:08.350612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.333 qpair failed and we were unable to recover it. 
00:29:16.333 [2024-07-25 15:25:08.360415] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.333 [2024-07-25 15:25:08.360491] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.333 [2024-07-25 15:25:08.360503] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.333 [2024-07-25 15:25:08.360508] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.333 [2024-07-25 15:25:08.360512] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:16.333 [2024-07-25 15:25:08.360524] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.333 qpair failed and we were unable to recover it. 
00:29:16.333 [2024-07-25 15:25:08.370621] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.333 [2024-07-25 15:25:08.370699] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.333 [2024-07-25 15:25:08.370712] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.333 [2024-07-25 15:25:08.370717] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.333 [2024-07-25 15:25:08.370721] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:16.333 [2024-07-25 15:25:08.370733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.333 qpair failed and we were unable to recover it. 
00:29:16.333 [2024-07-25 15:25:08.380585] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.333 [2024-07-25 15:25:08.380661] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.333 [2024-07-25 15:25:08.380674] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.333 [2024-07-25 15:25:08.380679] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.333 [2024-07-25 15:25:08.380683] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:16.333 [2024-07-25 15:25:08.380698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.333 qpair failed and we were unable to recover it. 
00:29:16.333 [2024-07-25 15:25:08.390636] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.333 [2024-07-25 15:25:08.390709] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.333 [2024-07-25 15:25:08.390722] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.333 [2024-07-25 15:25:08.390727] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.333 [2024-07-25 15:25:08.390731] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:16.333 [2024-07-25 15:25:08.390742] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.333 qpair failed and we were unable to recover it. 
00:29:16.333 [2024-07-25 15:25:08.400656] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.333 [2024-07-25 15:25:08.400741] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.333 [2024-07-25 15:25:08.400754] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.333 [2024-07-25 15:25:08.400759] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.333 [2024-07-25 15:25:08.400763] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:16.333 [2024-07-25 15:25:08.400774] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.333 qpair failed and we were unable to recover it. 
00:29:16.333 [2024-07-25 15:25:08.410557] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.333 [2024-07-25 15:25:08.410632] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.333 [2024-07-25 15:25:08.410644] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.333 [2024-07-25 15:25:08.410649] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.333 [2024-07-25 15:25:08.410653] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:16.333 [2024-07-25 15:25:08.410665] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.333 qpair failed and we were unable to recover it. 
00:29:16.333 [2024-07-25 15:25:08.420711] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.333 [2024-07-25 15:25:08.420787] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.333 [2024-07-25 15:25:08.420800] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.333 [2024-07-25 15:25:08.420805] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.333 [2024-07-25 15:25:08.420810] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:16.333 [2024-07-25 15:25:08.420821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.333 qpair failed and we were unable to recover it. 
00:29:16.333 [2024-07-25 15:25:08.430713] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.333 [2024-07-25 15:25:08.430792] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.333 [2024-07-25 15:25:08.430815] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.333 [2024-07-25 15:25:08.430821] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.333 [2024-07-25 15:25:08.430826] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:16.333 [2024-07-25 15:25:08.430841] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.333 qpair failed and we were unable to recover it. 
00:29:16.333 [2024-07-25 15:25:08.440780] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.333 [2024-07-25 15:25:08.440858] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.333 [2024-07-25 15:25:08.440874] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.333 [2024-07-25 15:25:08.440879] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.333 [2024-07-25 15:25:08.440883] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:16.333 [2024-07-25 15:25:08.440896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.333 qpair failed and we were unable to recover it. 
00:29:16.333 [2024-07-25 15:25:08.450807] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.333 [2024-07-25 15:25:08.450924] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.334 [2024-07-25 15:25:08.450938] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.334 [2024-07-25 15:25:08.450943] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.334 [2024-07-25 15:25:08.450947] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:16.334 [2024-07-25 15:25:08.450960] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.334 qpair failed and we were unable to recover it. 
00:29:16.334 [2024-07-25 15:25:08.460689] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.334 [2024-07-25 15:25:08.460761] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.334 [2024-07-25 15:25:08.460775] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.334 [2024-07-25 15:25:08.460780] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.334 [2024-07-25 15:25:08.460784] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:16.334 [2024-07-25 15:25:08.460796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.334 qpair failed and we were unable to recover it. 
00:29:16.334 [2024-07-25 15:25:08.470817] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.334 [2024-07-25 15:25:08.470891] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.334 [2024-07-25 15:25:08.470903] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.334 [2024-07-25 15:25:08.470908] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.334 [2024-07-25 15:25:08.470916] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:16.334 [2024-07-25 15:25:08.470927] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.334 qpair failed and we were unable to recover it. 
00:29:16.334 [2024-07-25 15:25:08.480940] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.334 [2024-07-25 15:25:08.481013] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.334 [2024-07-25 15:25:08.481026] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.334 [2024-07-25 15:25:08.481031] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.334 [2024-07-25 15:25:08.481035] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:16.334 [2024-07-25 15:25:08.481047] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.334 qpair failed and we were unable to recover it. 
00:29:16.334 [2024-07-25 15:25:08.490856] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.334 [2024-07-25 15:25:08.490930] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.334 [2024-07-25 15:25:08.490941] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.334 [2024-07-25 15:25:08.490945] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.334 [2024-07-25 15:25:08.490949] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:16.334 [2024-07-25 15:25:08.490959] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.334 qpair failed and we were unable to recover it. 
00:29:16.334 [2024-07-25 15:25:08.500971] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.334 [2024-07-25 15:25:08.501092] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.334 [2024-07-25 15:25:08.501106] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.334 [2024-07-25 15:25:08.501111] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.334 [2024-07-25 15:25:08.501115] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:16.334 [2024-07-25 15:25:08.501127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.334 qpair failed and we were unable to recover it. 
00:29:16.334 [2024-07-25 15:25:08.510968] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.334 [2024-07-25 15:25:08.511052] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.334 [2024-07-25 15:25:08.511065] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.334 [2024-07-25 15:25:08.511070] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.334 [2024-07-25 15:25:08.511074] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:16.334 [2024-07-25 15:25:08.511085] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.334 qpair failed and we were unable to recover it. 
00:29:16.334 [2024-07-25 15:25:08.520996] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.334 [2024-07-25 15:25:08.521082] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.334 [2024-07-25 15:25:08.521095] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.334 [2024-07-25 15:25:08.521100] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.334 [2024-07-25 15:25:08.521104] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:16.334 [2024-07-25 15:25:08.521115] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.334 qpair failed and we were unable to recover it. 
00:29:16.597 [2024-07-25 15:25:08.530957] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.598 [2024-07-25 15:25:08.531065] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.598 [2024-07-25 15:25:08.531078] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.598 [2024-07-25 15:25:08.531083] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.598 [2024-07-25 15:25:08.531087] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:16.598 [2024-07-25 15:25:08.531100] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.598 qpair failed and we were unable to recover it. 
00:29:16.598 [2024-07-25 15:25:08.540971] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.598 [2024-07-25 15:25:08.541049] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.598 [2024-07-25 15:25:08.541061] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.598 [2024-07-25 15:25:08.541066] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.598 [2024-07-25 15:25:08.541071] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:16.598 [2024-07-25 15:25:08.541082] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.598 qpair failed and we were unable to recover it. 
00:29:16.598 [2024-07-25 15:25:08.551035] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.598 [2024-07-25 15:25:08.551109] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.598 [2024-07-25 15:25:08.551122] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.598 [2024-07-25 15:25:08.551127] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.598 [2024-07-25 15:25:08.551131] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:16.598 [2024-07-25 15:25:08.551142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.598 qpair failed and we were unable to recover it. 
00:29:16.598 [2024-07-25 15:25:08.561144] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.598 [2024-07-25 15:25:08.561257] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.598 [2024-07-25 15:25:08.561271] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.598 [2024-07-25 15:25:08.561277] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.598 [2024-07-25 15:25:08.561288] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:16.598 [2024-07-25 15:25:08.561300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.598 qpair failed and we were unable to recover it. 
00:29:16.598 [2024-07-25 15:25:08.571123] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.598 [2024-07-25 15:25:08.571195] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.598 [2024-07-25 15:25:08.571211] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.598 [2024-07-25 15:25:08.571216] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.598 [2024-07-25 15:25:08.571221] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:16.598 [2024-07-25 15:25:08.571232] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.598 qpair failed and we were unable to recover it. 
00:29:16.598 [2024-07-25 15:25:08.581158] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.598 [2024-07-25 15:25:08.581263] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.598 [2024-07-25 15:25:08.581276] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.598 [2024-07-25 15:25:08.581281] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.598 [2024-07-25 15:25:08.581285] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:16.598 [2024-07-25 15:25:08.581296] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.598 qpair failed and we were unable to recover it. 
00:29:16.598 [2024-07-25 15:25:08.591151] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.598 [2024-07-25 15:25:08.591231] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.598 [2024-07-25 15:25:08.591244] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.598 [2024-07-25 15:25:08.591249] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.598 [2024-07-25 15:25:08.591253] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:16.598 [2024-07-25 15:25:08.591265] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.598 qpair failed and we were unable to recover it. 
00:29:16.598 [2024-07-25 15:25:08.601210] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.598 [2024-07-25 15:25:08.601288] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.598 [2024-07-25 15:25:08.601301] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.598 [2024-07-25 15:25:08.601306] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.598 [2024-07-25 15:25:08.601310] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:16.598 [2024-07-25 15:25:08.601322] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.598 qpair failed and we were unable to recover it. 
00:29:16.598 [2024-07-25 15:25:08.611224] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.598 [2024-07-25 15:25:08.611293] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.598 [2024-07-25 15:25:08.611305] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.598 [2024-07-25 15:25:08.611311] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.598 [2024-07-25 15:25:08.611315] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:16.598 [2024-07-25 15:25:08.611326] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.598 qpair failed and we were unable to recover it. 
00:29:16.598 [2024-07-25 15:25:08.621248] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.598 [2024-07-25 15:25:08.621318] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.598 [2024-07-25 15:25:08.621331] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.598 [2024-07-25 15:25:08.621336] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.598 [2024-07-25 15:25:08.621340] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:16.598 [2024-07-25 15:25:08.621351] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.598 qpair failed and we were unable to recover it. 
00:29:16.598 [2024-07-25 15:25:08.631288] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.598 [2024-07-25 15:25:08.631394] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.598 [2024-07-25 15:25:08.631407] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.598 [2024-07-25 15:25:08.631412] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.598 [2024-07-25 15:25:08.631416] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:16.598 [2024-07-25 15:25:08.631427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.598 qpair failed and we were unable to recover it. 
00:29:16.598 [2024-07-25 15:25:08.641295] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.598 [2024-07-25 15:25:08.641380] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.598 [2024-07-25 15:25:08.641393] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.598 [2024-07-25 15:25:08.641398] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.598 [2024-07-25 15:25:08.641402] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:16.598 [2024-07-25 15:25:08.641413] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.598 qpair failed and we were unable to recover it. 
00:29:16.598 [2024-07-25 15:25:08.651353] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.598 [2024-07-25 15:25:08.651429] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.598 [2024-07-25 15:25:08.651441] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.599 [2024-07-25 15:25:08.651449] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.599 [2024-07-25 15:25:08.651453] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:16.599 [2024-07-25 15:25:08.651465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.599 qpair failed and we were unable to recover it. 
00:29:16.599 [2024-07-25 15:25:08.661396] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.599 [2024-07-25 15:25:08.661472] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.599 [2024-07-25 15:25:08.661485] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.599 [2024-07-25 15:25:08.661490] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.599 [2024-07-25 15:25:08.661494] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:16.599 [2024-07-25 15:25:08.661506] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.599 qpair failed and we were unable to recover it. 
00:29:16.599 [2024-07-25 15:25:08.671387] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.599 [2024-07-25 15:25:08.671464] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.599 [2024-07-25 15:25:08.671477] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.599 [2024-07-25 15:25:08.671482] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.599 [2024-07-25 15:25:08.671486] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:16.599 [2024-07-25 15:25:08.671497] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.599 qpair failed and we were unable to recover it. 
00:29:16.599 [2024-07-25 15:25:08.681423] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.599 [2024-07-25 15:25:08.681502] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.599 [2024-07-25 15:25:08.681515] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.599 [2024-07-25 15:25:08.681520] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.599 [2024-07-25 15:25:08.681524] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:16.599 [2024-07-25 15:25:08.681535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.599 qpair failed and we were unable to recover it. 
00:29:16.599 [2024-07-25 15:25:08.691430] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.599 [2024-07-25 15:25:08.691499] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.599 [2024-07-25 15:25:08.691512] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.599 [2024-07-25 15:25:08.691517] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.599 [2024-07-25 15:25:08.691521] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:16.599 [2024-07-25 15:25:08.691533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.599 qpair failed and we were unable to recover it. 
00:29:16.599 [2024-07-25 15:25:08.701488] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.599 [2024-07-25 15:25:08.701562] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.599 [2024-07-25 15:25:08.701576] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.599 [2024-07-25 15:25:08.701581] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.599 [2024-07-25 15:25:08.701585] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:16.599 [2024-07-25 15:25:08.701597] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.599 qpair failed and we were unable to recover it. 
00:29:16.599 [2024-07-25 15:25:08.711651] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.599 [2024-07-25 15:25:08.711728] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.599 [2024-07-25 15:25:08.711741] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.599 [2024-07-25 15:25:08.711746] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.599 [2024-07-25 15:25:08.711750] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:16.599 [2024-07-25 15:25:08.711762] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.599 qpair failed and we were unable to recover it. 
00:29:16.599 [2024-07-25 15:25:08.721524] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.599 [2024-07-25 15:25:08.721605] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.599 [2024-07-25 15:25:08.721618] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.599 [2024-07-25 15:25:08.721623] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.599 [2024-07-25 15:25:08.721627] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:16.599 [2024-07-25 15:25:08.721639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.599 qpair failed and we were unable to recover it. 
00:29:16.599 [2024-07-25 15:25:08.731412] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.599 [2024-07-25 15:25:08.731493] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.599 [2024-07-25 15:25:08.731505] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.599 [2024-07-25 15:25:08.731510] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.599 [2024-07-25 15:25:08.731514] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:16.599 [2024-07-25 15:25:08.731525] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.599 qpair failed and we were unable to recover it. 
00:29:16.599 [2024-07-25 15:25:08.741534] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.599 [2024-07-25 15:25:08.741609] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.599 [2024-07-25 15:25:08.741624] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.599 [2024-07-25 15:25:08.741630] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.599 [2024-07-25 15:25:08.741634] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:16.599 [2024-07-25 15:25:08.741645] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.599 qpair failed and we were unable to recover it. 
00:29:16.599 [2024-07-25 15:25:08.751591] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.599 [2024-07-25 15:25:08.751667] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.599 [2024-07-25 15:25:08.751680] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.599 [2024-07-25 15:25:08.751685] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.599 [2024-07-25 15:25:08.751689] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:16.599 [2024-07-25 15:25:08.751700] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.599 qpair failed and we were unable to recover it. 
00:29:16.599 [2024-07-25 15:25:08.761588] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.599 [2024-07-25 15:25:08.761669] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.599 [2024-07-25 15:25:08.761680] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.599 [2024-07-25 15:25:08.761686] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.599 [2024-07-25 15:25:08.761690] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:16.599 [2024-07-25 15:25:08.761701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.599 qpair failed and we were unable to recover it. 
00:29:16.599 [2024-07-25 15:25:08.771624] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.599 [2024-07-25 15:25:08.771702] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.599 [2024-07-25 15:25:08.771714] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.600 [2024-07-25 15:25:08.771719] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.600 [2024-07-25 15:25:08.771723] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:16.600 [2024-07-25 15:25:08.771734] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.600 qpair failed and we were unable to recover it. 
00:29:16.600 [2024-07-25 15:25:08.781531] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.600 [2024-07-25 15:25:08.781606] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.600 [2024-07-25 15:25:08.781618] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.600 [2024-07-25 15:25:08.781623] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.600 [2024-07-25 15:25:08.781627] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:16.600 [2024-07-25 15:25:08.781641] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.600 qpair failed and we were unable to recover it. 
00:29:16.863 [2024-07-25 15:25:08.791645] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.863 [2024-07-25 15:25:08.791725] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.863 [2024-07-25 15:25:08.791738] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.863 [2024-07-25 15:25:08.791743] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.863 [2024-07-25 15:25:08.791747] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:16.863 [2024-07-25 15:25:08.791758] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.863 qpair failed and we were unable to recover it. 
00:29:16.863 [2024-07-25 15:25:08.801742] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.863 [2024-07-25 15:25:08.801818] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.863 [2024-07-25 15:25:08.801831] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.863 [2024-07-25 15:25:08.801836] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.863 [2024-07-25 15:25:08.801840] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:16.863 [2024-07-25 15:25:08.801851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.863 qpair failed and we were unable to recover it. 
00:29:16.863 [2024-07-25 15:25:08.811763] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.863 [2024-07-25 15:25:08.811885] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.863 [2024-07-25 15:25:08.811897] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.863 [2024-07-25 15:25:08.811902] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.863 [2024-07-25 15:25:08.811906] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:16.863 [2024-07-25 15:25:08.811917] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.863 qpair failed and we were unable to recover it. 
00:29:16.863 [2024-07-25 15:25:08.821790] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.863 [2024-07-25 15:25:08.821864] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.863 [2024-07-25 15:25:08.821877] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.863 [2024-07-25 15:25:08.821882] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.863 [2024-07-25 15:25:08.821886] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:16.863 [2024-07-25 15:25:08.821897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.863 qpair failed and we were unable to recover it. 
00:29:16.863 [2024-07-25 15:25:08.831737] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.863 [2024-07-25 15:25:08.831840] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.863 [2024-07-25 15:25:08.831856] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.863 [2024-07-25 15:25:08.831861] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.863 [2024-07-25 15:25:08.831865] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:16.863 [2024-07-25 15:25:08.831876] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.863 qpair failed and we were unable to recover it. 
00:29:16.863 [2024-07-25 15:25:08.841881] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.863 [2024-07-25 15:25:08.841968] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.863 [2024-07-25 15:25:08.841987] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.863 [2024-07-25 15:25:08.841993] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.863 [2024-07-25 15:25:08.841998] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:16.863 [2024-07-25 15:25:08.842013] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.863 qpair failed and we were unable to recover it. 
00:29:16.863 [2024-07-25 15:25:08.851823] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.863 [2024-07-25 15:25:08.851943] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.863 [2024-07-25 15:25:08.851957] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.863 [2024-07-25 15:25:08.851962] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.863 [2024-07-25 15:25:08.851967] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:16.864 [2024-07-25 15:25:08.851979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.864 qpair failed and we were unable to recover it. 
00:29:16.864 [2024-07-25 15:25:08.861884] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.864 [2024-07-25 15:25:08.861995] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.864 [2024-07-25 15:25:08.862014] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.864 [2024-07-25 15:25:08.862021] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.864 [2024-07-25 15:25:08.862026] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:16.864 [2024-07-25 15:25:08.862041] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.864 qpair failed and we were unable to recover it. 
00:29:16.864 [2024-07-25 15:25:08.871920] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.864 [2024-07-25 15:25:08.872001] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.864 [2024-07-25 15:25:08.872020] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.864 [2024-07-25 15:25:08.872027] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.864 [2024-07-25 15:25:08.872031] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:16.864 [2024-07-25 15:25:08.872050] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.864 qpair failed and we were unable to recover it. 
00:29:16.864 [2024-07-25 15:25:08.881993] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.864 [2024-07-25 15:25:08.882082] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.864 [2024-07-25 15:25:08.882101] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.864 [2024-07-25 15:25:08.882107] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.864 [2024-07-25 15:25:08.882112] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:16.864 [2024-07-25 15:25:08.882128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.864 qpair failed and we were unable to recover it. 
00:29:16.864 [2024-07-25 15:25:08.891947] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.864 [2024-07-25 15:25:08.892025] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.864 [2024-07-25 15:25:08.892039] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.864 [2024-07-25 15:25:08.892044] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.864 [2024-07-25 15:25:08.892048] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:16.864 [2024-07-25 15:25:08.892060] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.864 qpair failed and we were unable to recover it. 
00:29:16.864 [2024-07-25 15:25:08.902006] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.864 [2024-07-25 15:25:08.902089] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.864 [2024-07-25 15:25:08.902102] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.864 [2024-07-25 15:25:08.902107] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.864 [2024-07-25 15:25:08.902111] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:16.864 [2024-07-25 15:25:08.902122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.864 qpair failed and we were unable to recover it. 
00:29:16.864 [2024-07-25 15:25:08.911989] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.864 [2024-07-25 15:25:08.912061] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.864 [2024-07-25 15:25:08.912073] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.864 [2024-07-25 15:25:08.912079] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.864 [2024-07-25 15:25:08.912083] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:16.864 [2024-07-25 15:25:08.912094] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.864 qpair failed and we were unable to recover it. 
00:29:16.864 [2024-07-25 15:25:08.922090] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.864 [2024-07-25 15:25:08.922170] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.864 [2024-07-25 15:25:08.922183] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.864 [2024-07-25 15:25:08.922189] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.864 [2024-07-25 15:25:08.922193] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:16.864 [2024-07-25 15:25:08.922208] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.864 qpair failed and we were unable to recover it. 
00:29:16.864 [2024-07-25 15:25:08.932087] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.864 [2024-07-25 15:25:08.932163] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.864 [2024-07-25 15:25:08.932176] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.864 [2024-07-25 15:25:08.932181] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.864 [2024-07-25 15:25:08.932185] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:16.864 [2024-07-25 15:25:08.932196] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.864 qpair failed and we were unable to recover it. 
00:29:16.864 [2024-07-25 15:25:08.942114] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.864 [2024-07-25 15:25:08.942221] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.864 [2024-07-25 15:25:08.942234] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.864 [2024-07-25 15:25:08.942239] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.864 [2024-07-25 15:25:08.942244] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:16.864 [2024-07-25 15:25:08.942255] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.864 qpair failed and we were unable to recover it. 
00:29:16.864 [2024-07-25 15:25:08.952116] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.864 [2024-07-25 15:25:08.952189] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.864 [2024-07-25 15:25:08.952204] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.864 [2024-07-25 15:25:08.952210] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.864 [2024-07-25 15:25:08.952214] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:16.864 [2024-07-25 15:25:08.952225] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.864 qpair failed and we were unable to recover it. 
00:29:16.864 [2024-07-25 15:25:08.962182] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.864 [2024-07-25 15:25:08.962316] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.864 [2024-07-25 15:25:08.962329] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.864 [2024-07-25 15:25:08.962334] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.864 [2024-07-25 15:25:08.962342] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:16.864 [2024-07-25 15:25:08.962353] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.864 qpair failed and we were unable to recover it. 
00:29:16.864 [2024-07-25 15:25:08.972205] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.864 [2024-07-25 15:25:08.972288] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.864 [2024-07-25 15:25:08.972301] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.864 [2024-07-25 15:25:08.972306] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.864 [2024-07-25 15:25:08.972310] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:16.864 [2024-07-25 15:25:08.972321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.864 qpair failed and we were unable to recover it. 
00:29:16.864 [2024-07-25 15:25:08.982189] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.864 [2024-07-25 15:25:08.982294] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.865 [2024-07-25 15:25:08.982307] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.865 [2024-07-25 15:25:08.982312] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.865 [2024-07-25 15:25:08.982316] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:16.865 [2024-07-25 15:25:08.982328] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.865 qpair failed and we were unable to recover it. 
00:29:16.865 [2024-07-25 15:25:08.992275] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.865 [2024-07-25 15:25:08.992385] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.865 [2024-07-25 15:25:08.992398] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.865 [2024-07-25 15:25:08.992403] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.865 [2024-07-25 15:25:08.992407] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:16.865 [2024-07-25 15:25:08.992419] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.865 qpair failed and we were unable to recover it. 
00:29:16.865 [2024-07-25 15:25:09.002286] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.865 [2024-07-25 15:25:09.002370] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.865 [2024-07-25 15:25:09.002384] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.865 [2024-07-25 15:25:09.002389] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.865 [2024-07-25 15:25:09.002393] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:16.865 [2024-07-25 15:25:09.002405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.865 qpair failed and we were unable to recover it. 
00:29:16.865 [2024-07-25 15:25:09.012304] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.865 [2024-07-25 15:25:09.012379] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.865 [2024-07-25 15:25:09.012392] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.865 [2024-07-25 15:25:09.012397] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.865 [2024-07-25 15:25:09.012401] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:16.865 [2024-07-25 15:25:09.012413] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.865 qpair failed and we were unable to recover it. 
00:29:16.865 [2024-07-25 15:25:09.022353] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.865 [2024-07-25 15:25:09.022449] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.865 [2024-07-25 15:25:09.022462] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.865 [2024-07-25 15:25:09.022467] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.865 [2024-07-25 15:25:09.022471] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:16.865 [2024-07-25 15:25:09.022482] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.865 qpair failed and we were unable to recover it. 
00:29:16.865 [2024-07-25 15:25:09.032339] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.865 [2024-07-25 15:25:09.032413] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.865 [2024-07-25 15:25:09.032426] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.865 [2024-07-25 15:25:09.032431] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.865 [2024-07-25 15:25:09.032435] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:16.865 [2024-07-25 15:25:09.032447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.865 qpair failed and we were unable to recover it. 
00:29:16.865 [2024-07-25 15:25:09.042405] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.865 [2024-07-25 15:25:09.042487] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.865 [2024-07-25 15:25:09.042500] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.865 [2024-07-25 15:25:09.042505] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.865 [2024-07-25 15:25:09.042509] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:16.865 [2024-07-25 15:25:09.042521] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.865 qpair failed and we were unable to recover it. 
00:29:16.865 [2024-07-25 15:25:09.052394] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.865 [2024-07-25 15:25:09.052468] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.865 [2024-07-25 15:25:09.052480] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.865 [2024-07-25 15:25:09.052488] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.865 [2024-07-25 15:25:09.052493] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:17.128 [2024-07-25 15:25:09.052506] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.128 qpair failed and we were unable to recover it. 
00:29:17.128 [2024-07-25 15:25:09.062437] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.128 [2024-07-25 15:25:09.062525] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.128 [2024-07-25 15:25:09.062537] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.128 [2024-07-25 15:25:09.062543] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.128 [2024-07-25 15:25:09.062547] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:17.128 [2024-07-25 15:25:09.062559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.128 qpair failed and we were unable to recover it. 
00:29:17.128 [2024-07-25 15:25:09.072451] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.128 [2024-07-25 15:25:09.072523] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.128 [2024-07-25 15:25:09.072536] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.128 [2024-07-25 15:25:09.072541] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.128 [2024-07-25 15:25:09.072545] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:17.128 [2024-07-25 15:25:09.072556] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.128 qpair failed and we were unable to recover it. 
00:29:17.128 [2024-07-25 15:25:09.082520] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.128 [2024-07-25 15:25:09.082645] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.128 [2024-07-25 15:25:09.082658] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.128 [2024-07-25 15:25:09.082663] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.128 [2024-07-25 15:25:09.082667] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:17.128 [2024-07-25 15:25:09.082678] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.128 qpair failed and we were unable to recover it. 
00:29:17.128 [2024-07-25 15:25:09.092539] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.128 [2024-07-25 15:25:09.092610] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.128 [2024-07-25 15:25:09.092623] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.128 [2024-07-25 15:25:09.092628] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.128 [2024-07-25 15:25:09.092632] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc90000b90 00:29:17.128 [2024-07-25 15:25:09.092643] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.128 qpair failed and we were unable to recover it. 
00:29:17.128 Read completed with error (sct=0, sc=8) 00:29:17.128 starting I/O failed 00:29:17.128 Read completed with error (sct=0, sc=8) 00:29:17.128 starting I/O failed 00:29:17.128 Read completed with error (sct=0, sc=8) 00:29:17.128 starting I/O failed 00:29:17.128 Read completed with error (sct=0, sc=8) 00:29:17.128 starting I/O failed 00:29:17.128 Read completed with error (sct=0, sc=8) 00:29:17.128 starting I/O failed 00:29:17.128 Read completed with error (sct=0, sc=8) 00:29:17.128 starting I/O failed 00:29:17.128 Read completed with error (sct=0, sc=8) 00:29:17.128 starting I/O failed 00:29:17.128 Read completed with error (sct=0, sc=8) 00:29:17.128 starting I/O failed 00:29:17.128 Read completed with error (sct=0, sc=8) 00:29:17.128 starting I/O failed 00:29:17.128 Read completed with error (sct=0, sc=8) 00:29:17.128 starting I/O failed 00:29:17.128 Read completed with error (sct=0, sc=8) 00:29:17.128 starting I/O failed 00:29:17.128 Read completed with error (sct=0, sc=8) 00:29:17.128 starting I/O failed 00:29:17.128 Read completed with error (sct=0, sc=8) 00:29:17.128 starting I/O failed 00:29:17.128 Read completed with error (sct=0, sc=8) 00:29:17.128 starting I/O failed 00:29:17.128 Read completed with error (sct=0, sc=8) 00:29:17.128 starting I/O failed 00:29:17.128 Write completed with error (sct=0, sc=8) 00:29:17.128 starting I/O failed 00:29:17.128 Read completed with error (sct=0, sc=8) 00:29:17.128 starting I/O failed 00:29:17.128 Write completed with error (sct=0, sc=8) 00:29:17.128 starting I/O failed 00:29:17.128 Write completed with error (sct=0, sc=8) 00:29:17.128 starting I/O failed 00:29:17.128 Write completed with error (sct=0, sc=8) 00:29:17.128 starting I/O failed 00:29:17.128 Write completed with error (sct=0, sc=8) 00:29:17.128 starting I/O failed 00:29:17.128 Write completed with error (sct=0, sc=8) 00:29:17.128 starting I/O failed 00:29:17.128 Write completed with error (sct=0, sc=8) 00:29:17.128 starting I/O failed 00:29:17.128 
Write completed with error (sct=0, sc=8) 00:29:17.128 starting I/O failed 00:29:17.128 Read completed with error (sct=0, sc=8) 00:29:17.128 starting I/O failed 00:29:17.128 Read completed with error (sct=0, sc=8) 00:29:17.128 starting I/O failed 00:29:17.128 Write completed with error (sct=0, sc=8) 00:29:17.128 starting I/O failed 00:29:17.128 Read completed with error (sct=0, sc=8) 00:29:17.128 starting I/O failed 00:29:17.128 Write completed with error (sct=0, sc=8) 00:29:17.128 starting I/O failed 00:29:17.128 Write completed with error (sct=0, sc=8) 00:29:17.128 starting I/O failed 00:29:17.128 Write completed with error (sct=0, sc=8) 00:29:17.128 starting I/O failed 00:29:17.128 Read completed with error (sct=0, sc=8) 00:29:17.128 starting I/O failed 00:29:17.128 [2024-07-25 15:25:09.093848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:17.128 [2024-07-25 15:25:09.102715] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.128 [2024-07-25 15:25:09.102987] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.129 [2024-07-25 15:25:09.103055] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.129 [2024-07-25 15:25:09.103081] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.129 [2024-07-25 15:25:09.103101] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc88000b90 00:29:17.129 [2024-07-25 15:25:09.103157] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:17.129 qpair failed and we were unable to recover it. 
00:29:17.129 [2024-07-25 15:25:09.112645] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.129 [2024-07-25 15:25:09.112833] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.129 [2024-07-25 15:25:09.112881] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.129 [2024-07-25 15:25:09.112901] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.129 [2024-07-25 15:25:09.112916] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc88000b90 00:29:17.129 [2024-07-25 15:25:09.112956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:17.129 qpair failed and we were unable to recover it. 
00:29:17.129 [2024-07-25 15:25:09.122750] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.129 [2024-07-25 15:25:09.123002] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.129 [2024-07-25 15:25:09.123072] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.129 [2024-07-25 15:25:09.123098] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.129 [2024-07-25 15:25:09.123117] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc98000b90 00:29:17.129 [2024-07-25 15:25:09.123174] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:17.129 qpair failed and we were unable to recover it. 
00:29:17.129 Read completed with error (sct=0, sc=8)
00:29:17.129 starting I/O failed
00:29:17.129 Read completed with error (sct=0, sc=8)
00:29:17.129 starting I/O failed
00:29:17.129 Read completed with error (sct=0, sc=8)
00:29:17.129 starting I/O failed
00:29:17.129 Read completed with error (sct=0, sc=8)
00:29:17.129 starting I/O failed
00:29:17.129 Read completed with error (sct=0, sc=8)
00:29:17.129 starting I/O failed
00:29:17.129 Read completed with error (sct=0, sc=8)
00:29:17.129 starting I/O failed
00:29:17.129 Read completed with error (sct=0, sc=8)
00:29:17.129 starting I/O failed
00:29:17.129 Read completed with error (sct=0, sc=8)
00:29:17.129 starting I/O failed
00:29:17.129 Write completed with error (sct=0, sc=8)
00:29:17.129 starting I/O failed
00:29:17.129 Write completed with error (sct=0, sc=8)
00:29:17.129 starting I/O failed
00:29:17.129 Write completed with error (sct=0, sc=8)
00:29:17.129 starting I/O failed
00:29:17.129 Read completed with error (sct=0, sc=8)
00:29:17.129 starting I/O failed
00:29:17.129 Read completed with error (sct=0, sc=8)
00:29:17.129 starting I/O failed
00:29:17.129 Write completed with error (sct=0, sc=8)
00:29:17.129 starting I/O failed
00:29:17.129 Write completed with error (sct=0, sc=8)
00:29:17.129 starting I/O failed
00:29:17.129 Write completed with error (sct=0, sc=8)
00:29:17.129 starting I/O failed
00:29:17.129 Read completed with error (sct=0, sc=8)
00:29:17.129 starting I/O failed
00:29:17.129 Write completed with error (sct=0, sc=8)
00:29:17.129 starting I/O failed
00:29:17.129 Read completed with error (sct=0, sc=8)
00:29:17.129 starting I/O failed
00:29:17.129 Read completed with error (sct=0, sc=8)
00:29:17.129 starting I/O failed
00:29:17.129 Write completed with error (sct=0, sc=8)
00:29:17.129 starting I/O failed
00:29:17.129 Read completed with error (sct=0, sc=8)
00:29:17.129 starting I/O failed
00:29:17.129 Write completed with error (sct=0, sc=8)
00:29:17.129 starting I/O failed
00:29:17.129 Read completed with error (sct=0, sc=8)
00:29:17.129 starting I/O failed
00:29:17.129 Write completed with error (sct=0, sc=8)
00:29:17.129 starting I/O failed
00:29:17.129 Write completed with error (sct=0, sc=8)
00:29:17.129 starting I/O failed
00:29:17.129 Read completed with error (sct=0, sc=8)
00:29:17.129 starting I/O failed
00:29:17.129 Write completed with error (sct=0, sc=8)
00:29:17.129 starting I/O failed
00:29:17.129 Write completed with error (sct=0, sc=8)
00:29:17.129 starting I/O failed
00:29:17.129 Read completed with error (sct=0, sc=8)
00:29:17.129 starting I/O failed
00:29:17.129 Write completed with error (sct=0, sc=8)
00:29:17.129 starting I/O failed
00:29:17.129 Read completed with error (sct=0, sc=8)
00:29:17.129 starting I/O failed
00:29:17.129 [2024-07-25 15:25:09.123500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:17.129 [2024-07-25 15:25:09.132659] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:17.129 [2024-07-25 15:25:09.132807] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:17.129 [2024-07-25 15:25:09.132829] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:17.129 [2024-07-25 15:25:09.132837] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:17.129 [2024-07-25 15:25:09.132844] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4220
00:29:17.129 [2024-07-25 15:25:09.132862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:17.129 qpair failed and we were unable to recover it.
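The repeated `sct`/`sc` pairs in the log are NVMe status fields: status code type and status code. As an illustration only (this helper is not part of the test suite), the handful of codes seen in this run can be decoded per the NVMe and NVMe-oF specifications; `sct 1, sc 130` (0x82) on a Fabrics CONNECT is Connect Invalid Parameters, which matches the "Unknown controller ID" rejection, and `sct=0, sc=8` is the generic "command aborted due to SQ deletion" seen when queued I/O is dropped with its qpair.

```shell
# Hypothetical decoder for the (sct, sc) pairs that appear in this log.
# The mapping covers only those codes; values taken from the NVMe base
# and NVMe-oF specifications.
decode_status() {
    sct=$1 sc=$2
    case "$sct" in
        0) # generic command status
           case "$sc" in
               8) echo "generic: command aborted due to SQ deletion" ;;
               *) echo "generic status, sc $sc" ;;
           esac ;;
        1) # command-specific status (here: Fabrics CONNECT)
           case "$sc" in
               130) echo "command specific: Connect Invalid Parameters" ;;
               *)   echo "command specific, sc $sc" ;;
           esac ;;
        *) echo "sct $sct, sc $sc" ;;
    esac
}

decode_status 1 130
decode_status 0 8
```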
00:29:17.129 [2024-07-25 15:25:09.142688] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:17.129 [2024-07-25 15:25:09.142816] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:17.129 [2024-07-25 15:25:09.142847] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:17.129 [2024-07-25 15:25:09.142856] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:17.129 [2024-07-25 15:25:09.142863] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc4220
00:29:17.129 [2024-07-25 15:25:09.142884] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:17.129 qpair failed and we were unable to recover it.
00:29:17.129 [2024-07-25 15:25:09.143342] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd1f20 is same with the state(5) to be set
00:29:17.129 [2024-07-25 15:25:09.152714] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:17.129 [2024-07-25 15:25:09.152960] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:17.129 [2024-07-25 15:25:09.153028] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:17.129 [2024-07-25 15:25:09.153054] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:17.129 [2024-07-25 15:25:09.153075] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdc98000b90
00:29:17.129 [2024-07-25 15:25:09.153131] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:17.129 qpair failed and we were unable to recover it.
00:29:17.129 [2024-07-25 15:25:09.153657] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd1f20 (9): Bad file descriptor
00:29:17.129 Initializing NVMe Controllers
00:29:17.129 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:29:17.129 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:29:17.129 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0
00:29:17.129 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1
00:29:17.129 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2
00:29:17.129 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3
00:29:17.129 Initialization complete.
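The connect attempts above repeat roughly every 10 ms until the host side gives up ("qpair failed and we were unable to recover it"). The retries are driven from inside the SPDK apps, not the shell, but the bounded-retry pattern they follow can be sketched with a hypothetical helper (`retry` is not a function from this test suite):

```shell
# Hypothetical bounded-retry helper: run a command until it succeeds
# or the attempt budget is exhausted, pausing ~10 ms between tries
# (matching the spacing of the connect attempts in the log above).
retry() {
    max=$1; shift
    i=0
    while ! "$@"; do
        i=$((i + 1))
        if [ "$i" -ge "$max" ]; then
            return 1    # give up, as the test eventually does
        fi
        sleep 0.01
    done
    return 0
}

retry 5 true && echo "connected"
retry 3 false || echo "gave up after 3 attempts"
```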
Launching workers. 00:29:17.129 Starting thread on core 1 00:29:17.129 Starting thread on core 2 00:29:17.129 Starting thread on core 3 00:29:17.129 Starting thread on core 0 00:29:17.129 15:25:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:29:17.129 00:29:17.129 real 0m11.371s 00:29:17.129 user 0m20.664s 00:29:17.129 sys 0m4.031s 00:29:17.129 15:25:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:17.129 15:25:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:17.129 ************************************ 00:29:17.129 END TEST nvmf_target_disconnect_tc2 00:29:17.129 ************************************ 00:29:17.129 15:25:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:29:17.129 15:25:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:29:17.129 15:25:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:29:17.129 15:25:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:17.129 15:25:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:29:17.129 15:25:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:17.129 15:25:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:29:17.129 15:25:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:17.129 15:25:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:17.129 rmmod nvme_tcp 00:29:17.129 rmmod nvme_fabrics 00:29:17.129 rmmod nvme_keyring 00:29:17.130 15:25:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- 
# modprobe -v -r nvme-fabrics 00:29:17.130 15:25:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:29:17.130 15:25:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:29:17.130 15:25:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 436620 ']' 00:29:17.130 15:25:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 436620 00:29:17.130 15:25:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@950 -- # '[' -z 436620 ']' 00:29:17.130 15:25:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # kill -0 436620 00:29:17.130 15:25:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # uname 00:29:17.130 15:25:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:17.130 15:25:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 436620 00:29:17.391 15:25:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_4 00:29:17.391 15:25:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_4 = sudo ']' 00:29:17.391 15:25:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 436620' 00:29:17.391 killing process with pid 436620 00:29:17.391 15:25:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@969 -- # kill 436620 00:29:17.391 15:25:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@974 -- # wait 436620 00:29:17.391 15:25:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:17.391 15:25:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:17.391 15:25:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect 
-- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:17.391 15:25:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:17.391 15:25:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:17.391 15:25:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:17.391 15:25:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:17.391 15:25:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:19.941 15:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:19.941 00:29:19.941 real 0m21.186s 00:29:19.941 user 0m48.387s 00:29:19.941 sys 0m9.642s 00:29:19.941 15:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:19.941 15:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:19.941 ************************************ 00:29:19.941 END TEST nvmf_target_disconnect 00:29:19.941 ************************************ 00:29:19.941 15:25:11 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:29:19.941 00:29:19.941 real 6m16.226s 00:29:19.941 user 11m6.440s 00:29:19.941 sys 2m4.646s 00:29:19.941 15:25:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:19.941 15:25:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.941 ************************************ 00:29:19.941 END TEST nvmf_host 00:29:19.941 ************************************ 00:29:19.941 00:29:19.941 real 22m44.085s 00:29:19.941 user 47m30.733s 00:29:19.941 sys 7m12.104s 00:29:19.941 15:25:11 nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:19.941 15:25:11 nvmf_tcp -- 
common/autotest_common.sh@10 -- # set +x 00:29:19.941 ************************************ 00:29:19.941 END TEST nvmf_tcp 00:29:19.941 ************************************ 00:29:19.941 15:25:11 -- spdk/autotest.sh@292 -- # [[ 0 -eq 0 ]] 00:29:19.941 15:25:11 -- spdk/autotest.sh@293 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:29:19.941 15:25:11 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:19.941 15:25:11 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:19.941 15:25:11 -- common/autotest_common.sh@10 -- # set +x 00:29:19.941 ************************************ 00:29:19.941 START TEST spdkcli_nvmf_tcp 00:29:19.941 ************************************ 00:29:19.941 15:25:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:29:19.941 * Looking for test storage... 00:29:19.941 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:29:19.941 15:25:11 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:29:19.941 15:25:11 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:29:19.941 15:25:11 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:29:19.941 15:25:11 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:19.942 15:25:11 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:29:19.942 15:25:11 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:19.942 15:25:11 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:19.942 15:25:11 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:29:19.942 15:25:11 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:19.942 15:25:11 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:19.942 15:25:11 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:19.942 15:25:11 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:19.942 15:25:11 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:19.942 15:25:11 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:19.942 15:25:11 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:19.942 15:25:11 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:19.942 15:25:11 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:19.942 15:25:11 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:19.942 15:25:11 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:19.942 15:25:11 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:19.942 15:25:11 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:19.942 15:25:11 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:19.942 15:25:11 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:19.942 15:25:11 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:19.942 15:25:11 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:19.942 15:25:11 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:19.942 15:25:11 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:19.942 15:25:11 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:19.942 15:25:11 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:29:19.942 15:25:11 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:19.942 15:25:11 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:29:19.942 15:25:11 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export 
NVMF_APP_SHM_ID 00:29:19.942 15:25:11 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:19.942 15:25:11 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:19.942 15:25:11 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:19.942 15:25:11 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:19.942 15:25:11 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:19.942 15:25:11 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:19.942 15:25:11 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:19.942 15:25:11 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:29:19.942 15:25:11 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:29:19.942 15:25:11 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:29:19.942 15:25:11 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:29:19.942 15:25:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:19.942 15:25:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:19.942 15:25:11 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:29:19.942 15:25:11 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=438478 00:29:19.942 15:25:11 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 438478 00:29:19.942 15:25:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@831 -- # '[' -z 438478 ']' 00:29:19.942 15:25:11 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:29:19.942 15:25:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:19.942 15:25:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:19.942 15:25:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock...' 00:29:19.942 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:19.942 15:25:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:19.942 15:25:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:19.942 [2024-07-25 15:25:11.893044] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:29:19.942 [2024-07-25 15:25:11.893117] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid438478 ] 00:29:19.942 EAL: No free 2048 kB hugepages reported on node 1 00:29:19.942 [2024-07-25 15:25:11.958828] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:19.942 [2024-07-25 15:25:12.035279] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:19.942 [2024-07-25 15:25:12.035429] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:20.514 15:25:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:20.514 15:25:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # return 0 00:29:20.514 15:25:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:29:20.514 15:25:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:20.514 15:25:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:20.514 15:25:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:29:20.514 15:25:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:29:20.514 15:25:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:29:20.514 15:25:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:20.514 15:25:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 
-- # set +x 00:29:20.776 15:25:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:29:20.776 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:29:20.776 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:29:20.776 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:29:20.776 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:29:20.776 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:29:20.776 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:29:20.776 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:29:20.776 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:29:20.776 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:29:20.776 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:29:20.776 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:20.776 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:29:20.776 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:29:20.776 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:20.776 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:29:20.776 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:29:20.776 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:29:20.776 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:29:20.776 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:20.776 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:29:20.776 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:29:20.776 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:29:20.776 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:29:20.776 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:20.776 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:29:20.776 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:29:20.776 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:29:20.776 ' 00:29:23.327 [2024-07-25 15:25:15.032796] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:24.270 [2024-07-25 15:25:16.196594] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:29:26.192 [2024-07-25 15:25:18.338924] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:29:28.109 [2024-07-25 15:25:20.176559] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 127.0.0.1 port 4262 *** 00:29:29.496 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:29:29.496 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:29:29.496 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:29:29.496 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:29:29.496 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:29:29.496 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:29:29.496 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:29:29.496 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:29:29.496 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:29:29.496 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:29:29.496 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:29:29.496 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:29.496 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:29:29.496 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:29:29.496 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:29.496 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 
'Malloc1', True] 00:29:29.496 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:29:29.496 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:29:29.496 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:29:29.496 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:29.496 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:29:29.496 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:29:29.496 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:29:29.496 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:29:29.496 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:29.496 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:29:29.496 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:29:29.496 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:29:29.758 15:25:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:29:29.758 15:25:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:29.758 15:25:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:29.758 15:25:21 
spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:29:29.758 15:25:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:29.758 15:25:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:29.758 15:25:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:29:29.758 15:25:21 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:29:30.019 15:25:22 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:29:30.019 15:25:22 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:29:30.019 15:25:22 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:29:30.019 15:25:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:30.019 15:25:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:30.019 15:25:22 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:29:30.019 15:25:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:30.019 15:25:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:30.281 15:25:22 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:29:30.281 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:29:30.281 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:29:30.281 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' 
'\''nqn.2014-08.org.spdk:cnode1'\'' 00:29:30.281 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:29:30.281 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:29:30.281 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:29:30.281 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:29:30.281 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:29:30.281 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:29:30.281 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:29:30.281 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:29:30.281 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:29:30.281 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:29:30.281 ' 00:29:35.573 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:29:35.573 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:29:35.573 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:29:35.573 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:29:35.573 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:29:35.573 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:29:35.573 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:29:35.573 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:29:35.573 Executing command: ['/bdevs/malloc delete Malloc6', 
'Malloc6', False] 00:29:35.573 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:29:35.573 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:29:35.573 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:29:35.573 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:29:35.573 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:29:35.573 15:25:27 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:29:35.573 15:25:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:35.573 15:25:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:35.573 15:25:27 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 438478 00:29:35.573 15:25:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 438478 ']' 00:29:35.573 15:25:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 438478 00:29:35.573 15:25:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # uname 00:29:35.573 15:25:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:35.573 15:25:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 438478 00:29:35.573 15:25:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:35.573 15:25:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:35.574 15:25:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 438478' 00:29:35.574 killing process with pid 438478 00:29:35.574 15:25:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@969 -- # kill 438478 00:29:35.574 15:25:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@974 -- # wait 438478 00:29:35.835 15:25:27 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:29:35.835 15:25:27 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:29:35.835 15:25:27 
spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 438478 ']' 00:29:35.835 15:25:27 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 438478 00:29:35.835 15:25:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 438478 ']' 00:29:35.835 15:25:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 438478 00:29:35.835 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (438478) - No such process 00:29:35.835 15:25:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@977 -- # echo 'Process with pid 438478 is not found' 00:29:35.835 Process with pid 438478 is not found 00:29:35.835 15:25:27 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:29:35.835 15:25:27 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:29:35.835 15:25:27 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:29:35.835 00:29:35.835 real 0m16.153s 00:29:35.835 user 0m34.059s 00:29:35.835 sys 0m0.777s 00:29:35.835 15:25:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:35.835 15:25:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:35.835 ************************************ 00:29:35.835 END TEST spdkcli_nvmf_tcp 00:29:35.835 ************************************ 00:29:35.835 15:25:27 -- spdk/autotest.sh@294 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:29:35.835 15:25:27 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:35.835 15:25:27 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:35.835 15:25:27 -- common/autotest_common.sh@10 -- # set +x 00:29:35.835 ************************************ 00:29:35.835 START TEST nvmf_identify_passthru 
00:29:35.835 ************************************ 00:29:35.835 15:25:27 nvmf_identify_passthru -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:29:35.835 * Looking for test storage... 00:29:35.835 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:36.097 15:25:28 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:36.097 15:25:28 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:29:36.097 15:25:28 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:36.097 15:25:28 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:36.097 15:25:28 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:36.097 15:25:28 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:36.097 15:25:28 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:36.097 15:25:28 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:36.097 15:25:28 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:36.097 15:25:28 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:36.097 15:25:28 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:36.097 15:25:28 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:36.097 15:25:28 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:36.097 15:25:28 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:36.097 15:25:28 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:36.097 15:25:28 
nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:36.097 15:25:28 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:36.097 15:25:28 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:36.097 15:25:28 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:36.097 15:25:28 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:36.097 15:25:28 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:36.097 15:25:28 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:36.097 15:25:28 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:36.097 15:25:28 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:36.098 15:25:28 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:36.098 15:25:28 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:29:36.098 15:25:28 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:36.098 15:25:28 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:29:36.098 15:25:28 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:36.098 15:25:28 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:36.098 15:25:28 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:36.098 15:25:28 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:36.098 15:25:28 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:36.098 15:25:28 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:36.098 15:25:28 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:36.098 15:25:28 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:36.098 15:25:28 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:36.098 15:25:28 
nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:36.098 15:25:28 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:36.098 15:25:28 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:36.098 15:25:28 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:36.098 15:25:28 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:36.098 15:25:28 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:36.098 15:25:28 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 
00:29:36.098 15:25:28 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:36.098 15:25:28 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:29:36.098 15:25:28 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:36.098 15:25:28 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:36.098 15:25:28 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:36.098 15:25:28 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:36.098 15:25:28 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:36.098 15:25:28 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:36.098 15:25:28 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:36.098 15:25:28 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:36.098 15:25:28 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:36.098 15:25:28 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:36.098 15:25:28 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:29:36.098 15:25:28 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:42.684 15:25:34 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:42.684 15:25:34 nvmf_identify_passthru -- nvmf/common.sh@291 -- # 
pci_devs=() 00:29:42.684 15:25:34 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:42.684 15:25:34 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:42.684 15:25:34 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:42.684 15:25:34 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:42.684 15:25:34 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:42.684 15:25:34 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:29:42.684 15:25:34 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:42.684 15:25:34 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:29:42.684 15:25:34 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:29:42.684 15:25:34 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:29:42.684 15:25:34 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:29:42.684 15:25:34 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:29:42.684 15:25:34 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:29:42.684 15:25:34 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:42.684 15:25:34 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:42.684 15:25:34 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:42.684 15:25:34 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:42.684 15:25:34 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:42.684 15:25:34 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:42.684 15:25:34 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:42.684 15:25:34 nvmf_identify_passthru -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:42.684 15:25:34 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:42.684 15:25:34 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:42.684 15:25:34 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:42.684 15:25:34 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:42.684 15:25:34 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:42.684 15:25:34 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:42.684 15:25:34 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:42.684 15:25:34 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:42.684 15:25:34 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:42.684 15:25:34 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:42.684 15:25:34 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:42.684 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:42.684 15:25:34 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:42.684 15:25:34 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:42.684 15:25:34 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:42.684 15:25:34 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:42.684 15:25:34 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:42.684 15:25:34 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:42.684 15:25:34 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:42.684 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:42.684 15:25:34 
nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:42.684 15:25:34 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:42.684 15:25:34 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:42.684 15:25:34 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:42.684 15:25:34 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:42.684 15:25:34 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:42.684 15:25:34 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:42.684 15:25:34 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:42.684 15:25:34 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:42.684 15:25:34 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:42.684 15:25:34 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:42.684 15:25:34 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:42.684 15:25:34 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:42.684 15:25:34 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:42.684 15:25:34 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:42.684 15:25:34 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:42.684 Found net devices under 0000:4b:00.0: cvl_0_0 00:29:42.684 15:25:34 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:42.684 15:25:34 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:42.684 15:25:34 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:42.684 15:25:34 nvmf_identify_passthru -- 
nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:42.684 15:25:34 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:42.684 15:25:34 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:42.684 15:25:34 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:42.684 15:25:34 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:42.685 15:25:34 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:42.685 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:42.685 15:25:34 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:42.685 15:25:34 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:42.685 15:25:34 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:29:42.685 15:25:34 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:42.685 15:25:34 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:42.685 15:25:34 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:42.685 15:25:34 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:42.685 15:25:34 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:42.685 15:25:34 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:42.685 15:25:34 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:42.685 15:25:34 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:42.685 15:25:34 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:42.685 15:25:34 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:42.685 15:25:34 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:42.685 15:25:34 
nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:42.685 15:25:34 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:42.685 15:25:34 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:42.685 15:25:34 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:42.685 15:25:34 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:42.946 15:25:34 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:42.946 15:25:34 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:42.946 15:25:34 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:42.946 15:25:34 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:42.946 15:25:35 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:42.946 15:25:35 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:42.946 15:25:35 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:42.946 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:42.946 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.707 ms 00:29:42.946 00:29:42.946 --- 10.0.0.2 ping statistics --- 00:29:42.946 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:42.946 rtt min/avg/max/mdev = 0.707/0.707/0.707/0.000 ms 00:29:42.946 15:25:35 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:42.946 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:42.946 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.417 ms 00:29:42.946 00:29:42.946 --- 10.0.0.1 ping statistics --- 00:29:42.946 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:42.946 rtt min/avg/max/mdev = 0.417/0.417/0.417/0.000 ms 00:29:42.946 15:25:35 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:42.946 15:25:35 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:29:42.946 15:25:35 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:42.946 15:25:35 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:42.946 15:25:35 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:42.946 15:25:35 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:42.946 15:25:35 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:42.946 15:25:35 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:42.946 15:25:35 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:42.946 15:25:35 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:29:42.946 15:25:35 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:42.946 15:25:35 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:43.207 15:25:35 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:29:43.207 15:25:35 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:29:43.207 15:25:35 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:29:43.207 15:25:35 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:29:43.207 15:25:35 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:29:43.207 15:25:35 nvmf_identify_passthru -- 
common/autotest_common.sh@1513 -- # bdfs=() 00:29:43.207 15:25:35 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:29:43.207 15:25:35 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:29:43.207 15:25:35 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:29:43.207 15:25:35 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:29:43.207 15:25:35 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:29:43.207 15:25:35 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:65:00.0 00:29:43.207 15:25:35 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:65:00.0 00:29:43.207 15:25:35 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:65:00.0 00:29:43.207 15:25:35 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:65:00.0 ']' 00:29:43.207 15:25:35 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:29:43.207 15:25:35 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:29:43.207 15:25:35 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:29:43.207 EAL: No free 2048 kB hugepages reported on node 1 00:29:43.780 15:25:35 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=S64GNE0R605487 00:29:43.780 15:25:35 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:29:43.780 15:25:35 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 
00:29:43.780 15:25:35 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:29:43.780 EAL: No free 2048 kB hugepages reported on node 1 00:29:44.042 15:25:36 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=SAMSUNG 00:29:44.043 15:25:36 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:29:44.043 15:25:36 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:44.043 15:25:36 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:44.043 15:25:36 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:29:44.043 15:25:36 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:44.043 15:25:36 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:44.043 15:25:36 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=445372 00:29:44.043 15:25:36 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:44.043 15:25:36 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:29:44.043 15:25:36 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 445372 00:29:44.043 15:25:36 nvmf_identify_passthru -- common/autotest_common.sh@831 -- # '[' -z 445372 ']' 00:29:44.043 15:25:36 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:44.043 15:25:36 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:44.043 15:25:36 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:29:44.043 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:44.043 15:25:36 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:44.043 15:25:36 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:44.305 [2024-07-25 15:25:36.273768] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:29:44.305 [2024-07-25 15:25:36.273825] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:44.305 EAL: No free 2048 kB hugepages reported on node 1 00:29:44.305 [2024-07-25 15:25:36.340089] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:44.305 [2024-07-25 15:25:36.410347] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:44.305 [2024-07-25 15:25:36.410382] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:44.305 [2024-07-25 15:25:36.410390] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:44.305 [2024-07-25 15:25:36.410397] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:44.305 [2024-07-25 15:25:36.410403] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:44.305 [2024-07-25 15:25:36.410544] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:44.305 [2024-07-25 15:25:36.410671] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:44.305 [2024-07-25 15:25:36.411074] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:29:44.305 [2024-07-25 15:25:36.411074] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:44.879 15:25:37 nvmf_identify_passthru -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:44.879 15:25:37 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # return 0 00:29:44.879 15:25:37 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:29:44.879 15:25:37 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:44.879 15:25:37 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:44.879 INFO: Log level set to 20 00:29:44.879 INFO: Requests: 00:29:44.879 { 00:29:44.879 "jsonrpc": "2.0", 00:29:44.879 "method": "nvmf_set_config", 00:29:44.879 "id": 1, 00:29:44.879 "params": { 00:29:44.879 "admin_cmd_passthru": { 00:29:44.879 "identify_ctrlr": true 00:29:44.879 } 00:29:44.879 } 00:29:44.879 } 00:29:44.879 00:29:44.879 INFO: response: 00:29:44.879 { 00:29:44.879 "jsonrpc": "2.0", 00:29:44.879 "id": 1, 00:29:44.879 "result": true 00:29:44.879 } 00:29:44.879 00:29:44.879 15:25:37 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:44.879 15:25:37 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:29:44.879 15:25:37 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:44.879 15:25:37 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:44.879 INFO: Setting log level to 20 00:29:44.879 INFO: Setting log level to 20 00:29:44.879 INFO: Log level set to 20 00:29:44.879 INFO: Log level set to 20 00:29:44.879 
INFO: Requests: 00:29:44.879 { 00:29:44.879 "jsonrpc": "2.0", 00:29:44.879 "method": "framework_start_init", 00:29:44.879 "id": 1 00:29:44.879 } 00:29:44.879 00:29:44.879 INFO: Requests: 00:29:44.879 { 00:29:44.879 "jsonrpc": "2.0", 00:29:44.879 "method": "framework_start_init", 00:29:44.879 "id": 1 00:29:44.879 } 00:29:44.879 00:29:45.141 [2024-07-25 15:25:37.124628] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:29:45.141 INFO: response: 00:29:45.141 { 00:29:45.141 "jsonrpc": "2.0", 00:29:45.141 "id": 1, 00:29:45.141 "result": true 00:29:45.141 } 00:29:45.141 00:29:45.141 INFO: response: 00:29:45.141 { 00:29:45.141 "jsonrpc": "2.0", 00:29:45.141 "id": 1, 00:29:45.141 "result": true 00:29:45.141 } 00:29:45.141 00:29:45.141 15:25:37 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:45.141 15:25:37 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:45.141 15:25:37 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:45.141 15:25:37 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:45.141 INFO: Setting log level to 40 00:29:45.141 INFO: Setting log level to 40 00:29:45.141 INFO: Setting log level to 40 00:29:45.141 [2024-07-25 15:25:37.137952] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:45.141 15:25:37 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:45.141 15:25:37 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:29:45.141 15:25:37 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:45.141 15:25:37 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:45.141 15:25:37 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 00:29:45.141 15:25:37 
nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:45.141 15:25:37 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:45.403 Nvme0n1 00:29:45.403 15:25:37 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:45.403 15:25:37 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:29:45.403 15:25:37 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:45.403 15:25:37 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:45.403 15:25:37 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:45.403 15:25:37 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:29:45.403 15:25:37 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:45.403 15:25:37 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:45.403 15:25:37 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:45.403 15:25:37 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:45.403 15:25:37 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:45.403 15:25:37 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:45.404 [2024-07-25 15:25:37.525534] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:45.404 15:25:37 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:45.404 15:25:37 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:29:45.404 15:25:37 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:45.404 15:25:37 
nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:45.404 [ 00:29:45.404 { 00:29:45.404 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:45.404 "subtype": "Discovery", 00:29:45.404 "listen_addresses": [], 00:29:45.404 "allow_any_host": true, 00:29:45.404 "hosts": [] 00:29:45.404 }, 00:29:45.404 { 00:29:45.404 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:45.404 "subtype": "NVMe", 00:29:45.404 "listen_addresses": [ 00:29:45.404 { 00:29:45.404 "trtype": "TCP", 00:29:45.404 "adrfam": "IPv4", 00:29:45.404 "traddr": "10.0.0.2", 00:29:45.404 "trsvcid": "4420" 00:29:45.404 } 00:29:45.404 ], 00:29:45.404 "allow_any_host": true, 00:29:45.404 "hosts": [], 00:29:45.404 "serial_number": "SPDK00000000000001", 00:29:45.404 "model_number": "SPDK bdev Controller", 00:29:45.404 "max_namespaces": 1, 00:29:45.404 "min_cntlid": 1, 00:29:45.404 "max_cntlid": 65519, 00:29:45.404 "namespaces": [ 00:29:45.404 { 00:29:45.404 "nsid": 1, 00:29:45.404 "bdev_name": "Nvme0n1", 00:29:45.404 "name": "Nvme0n1", 00:29:45.404 "nguid": "36344730526054870025384500000044", 00:29:45.404 "uuid": "36344730-5260-5487-0025-384500000044" 00:29:45.404 } 00:29:45.404 ] 00:29:45.404 } 00:29:45.404 ] 00:29:45.404 15:25:37 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:45.404 15:25:37 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:29:45.404 15:25:37 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:29:45.404 15:25:37 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:29:45.404 EAL: No free 2048 kB hugepages reported on node 1 00:29:45.665 15:25:37 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=S64GNE0R605487 00:29:45.665 15:25:37 nvmf_identify_passthru -- 
target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:29:45.665 15:25:37 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:29:45.666 15:25:37 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:29:45.666 EAL: No free 2048 kB hugepages reported on node 1 00:29:45.666 15:25:37 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=SAMSUNG 00:29:45.666 15:25:37 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' S64GNE0R605487 '!=' S64GNE0R605487 ']' 00:29:45.666 15:25:37 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' SAMSUNG '!=' SAMSUNG ']' 00:29:45.666 15:25:37 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:45.666 15:25:37 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:45.666 15:25:37 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:45.666 15:25:37 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:45.666 15:25:37 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:29:45.666 15:25:37 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:29:45.666 15:25:37 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:45.666 15:25:37 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:29:45.666 15:25:37 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:45.666 15:25:37 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:29:45.666 15:25:37 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:45.666 15:25:37 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:45.666 rmmod 
nvme_tcp 00:29:45.666 rmmod nvme_fabrics 00:29:45.666 rmmod nvme_keyring 00:29:45.927 15:25:37 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:45.927 15:25:37 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:29:45.927 15:25:37 nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:29:45.927 15:25:37 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 445372 ']' 00:29:45.927 15:25:37 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 445372 00:29:45.927 15:25:37 nvmf_identify_passthru -- common/autotest_common.sh@950 -- # '[' -z 445372 ']' 00:29:45.927 15:25:37 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # kill -0 445372 00:29:45.927 15:25:37 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # uname 00:29:45.927 15:25:37 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:45.927 15:25:37 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 445372 00:29:45.927 15:25:37 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:45.927 15:25:37 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:45.927 15:25:37 nvmf_identify_passthru -- common/autotest_common.sh@968 -- # echo 'killing process with pid 445372' 00:29:45.927 killing process with pid 445372 00:29:45.927 15:25:37 nvmf_identify_passthru -- common/autotest_common.sh@969 -- # kill 445372 00:29:45.927 15:25:37 nvmf_identify_passthru -- common/autotest_common.sh@974 -- # wait 445372 00:29:46.189 15:25:38 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:46.189 15:25:38 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:46.189 15:25:38 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:46.189 15:25:38 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:46.189 
15:25:38 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:46.189 15:25:38 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:46.189 15:25:38 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:46.189 15:25:38 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:48.147 15:25:40 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:48.147 00:29:48.147 real 0m12.343s 00:29:48.147 user 0m9.503s 00:29:48.147 sys 0m5.867s 00:29:48.147 15:25:40 nvmf_identify_passthru -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:48.147 15:25:40 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:48.147 ************************************ 00:29:48.147 END TEST nvmf_identify_passthru 00:29:48.147 ************************************ 00:29:48.147 15:25:40 -- spdk/autotest.sh@296 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:29:48.147 15:25:40 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:29:48.147 15:25:40 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:48.147 15:25:40 -- common/autotest_common.sh@10 -- # set +x 00:29:48.409 ************************************ 00:29:48.409 START TEST nvmf_dif 00:29:48.409 ************************************ 00:29:48.409 15:25:40 nvmf_dif -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:29:48.409 * Looking for test storage... 
00:29:48.409 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:48.409 15:25:40 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:48.409 15:25:40 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:29:48.409 15:25:40 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:48.409 15:25:40 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:48.409 15:25:40 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:48.409 15:25:40 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:48.409 15:25:40 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:48.409 15:25:40 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:48.409 15:25:40 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:48.409 15:25:40 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:48.409 15:25:40 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:48.409 15:25:40 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:48.409 15:25:40 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:48.409 15:25:40 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:48.409 15:25:40 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:48.409 15:25:40 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:48.409 15:25:40 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:48.409 15:25:40 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:48.409 15:25:40 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:48.409 15:25:40 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:48.409 15:25:40 nvmf_dif -- scripts/common.sh@516 -- # 
[[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:48.409 15:25:40 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:48.409 15:25:40 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:48.409 15:25:40 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:48.409 15:25:40 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:48.409 15:25:40 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:29:48.410 15:25:40 nvmf_dif -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:48.410 15:25:40 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:29:48.410 15:25:40 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:48.410 15:25:40 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:48.410 15:25:40 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:48.410 15:25:40 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:48.410 15:25:40 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:48.410 15:25:40 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:48.410 15:25:40 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:48.410 15:25:40 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:48.410 15:25:40 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:29:48.410 15:25:40 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:29:48.410 15:25:40 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:29:48.410 15:25:40 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:29:48.410 15:25:40 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:29:48.410 15:25:40 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:48.410 15:25:40 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:48.410 15:25:40 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:48.410 15:25:40 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:48.410 15:25:40 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:48.410 15:25:40 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:48.410 15:25:40 nvmf_dif -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:48.410 15:25:40 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:48.410 15:25:40 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:48.410 15:25:40 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:48.410 15:25:40 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:29:48.410 15:25:40 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:55.042 15:25:47 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:55.042 15:25:47 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:29:55.042 15:25:47 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:55.042 15:25:47 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:55.042 15:25:47 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:55.042 15:25:47 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:55.042 15:25:47 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:55.042 15:25:47 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:29:55.042 15:25:47 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:55.042 15:25:47 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:29:55.042 15:25:47 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:29:55.042 15:25:47 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:29:55.042 15:25:47 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:29:55.042 15:25:47 nvmf_dif -- nvmf/common.sh@298 -- # mlx=() 00:29:55.042 15:25:47 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:29:55.042 15:25:47 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:55.042 15:25:47 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:55.042 15:25:47 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:55.042 15:25:47 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 
00:29:55.042 15:25:47 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:55.042 15:25:47 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:55.042 15:25:47 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:55.042 15:25:47 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:55.042 15:25:47 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:55.042 15:25:47 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:55.042 15:25:47 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:55.042 15:25:47 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:55.042 15:25:47 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:55.042 15:25:47 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:55.042 15:25:47 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:55.042 15:25:47 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:55.042 15:25:47 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:55.042 15:25:47 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:55.042 15:25:47 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:55.042 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:55.042 15:25:47 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:55.042 15:25:47 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:55.042 15:25:47 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:55.042 15:25:47 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:55.042 15:25:47 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:55.042 15:25:47 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:55.042 15:25:47 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 
(0x8086 - 0x159b)' 00:29:55.042 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:55.042 15:25:47 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:55.042 15:25:47 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:55.042 15:25:47 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:55.042 15:25:47 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:55.042 15:25:47 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:55.042 15:25:47 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:55.042 15:25:47 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:55.042 15:25:47 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:55.042 15:25:47 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:55.042 15:25:47 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:55.042 15:25:47 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:55.042 15:25:47 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:55.042 15:25:47 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:55.042 15:25:47 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:55.042 15:25:47 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:55.042 15:25:47 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:55.042 Found net devices under 0000:4b:00.0: cvl_0_0 00:29:55.042 15:25:47 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:55.042 15:25:47 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:55.042 15:25:47 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:55.042 15:25:47 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:55.042 15:25:47 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:55.042 15:25:47 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up 
]] 00:29:55.042 15:25:47 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:55.042 15:25:47 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:55.042 15:25:47 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:55.042 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:55.042 15:25:47 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:55.042 15:25:47 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:55.042 15:25:47 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:29:55.042 15:25:47 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:55.043 15:25:47 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:55.043 15:25:47 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:55.043 15:25:47 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:55.043 15:25:47 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:55.043 15:25:47 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:55.043 15:25:47 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:55.043 15:25:47 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:55.043 15:25:47 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:55.043 15:25:47 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:55.043 15:25:47 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:55.043 15:25:47 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:55.043 15:25:47 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:55.043 15:25:47 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:55.043 15:25:47 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:55.043 15:25:47 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:55.304 15:25:47 nvmf_dif -- 
nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:55.305 15:25:47 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:55.305 15:25:47 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:55.305 15:25:47 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:55.305 15:25:47 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:55.305 15:25:47 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:55.305 15:25:47 nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:55.305 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:55.305 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.743 ms 00:29:55.305 00:29:55.305 --- 10.0.0.2 ping statistics --- 00:29:55.305 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:55.305 rtt min/avg/max/mdev = 0.743/0.743/0.743/0.000 ms 00:29:55.305 15:25:47 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:55.305 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:55.305 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.415 ms 00:29:55.305 00:29:55.305 --- 10.0.0.1 ping statistics --- 00:29:55.305 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:55.305 rtt min/avg/max/mdev = 0.415/0.415/0.415/0.000 ms 00:29:55.305 15:25:47 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:55.305 15:25:47 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:29:55.305 15:25:47 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:29:55.305 15:25:47 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:29:58.612 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:29:58.612 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:29:58.612 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:29:58.612 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:29:58.612 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:29:58.612 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:29:58.612 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:29:58.612 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:29:58.612 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:29:58.612 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:29:58.612 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:29:58.612 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:29:58.612 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:29:58.612 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:29:58.612 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:29:58.612 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:29:58.612 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:29:59.185 15:25:51 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:59.185 15:25:51 
nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:59.185 15:25:51 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:59.185 15:25:51 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:59.185 15:25:51 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:59.185 15:25:51 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:59.185 15:25:51 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:29:59.185 15:25:51 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:29:59.185 15:25:51 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:59.185 15:25:51 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:59.185 15:25:51 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:59.185 15:25:51 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=451266 00:29:59.185 15:25:51 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 451266 00:29:59.185 15:25:51 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:29:59.185 15:25:51 nvmf_dif -- common/autotest_common.sh@831 -- # '[' -z 451266 ']' 00:29:59.185 15:25:51 nvmf_dif -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:59.185 15:25:51 nvmf_dif -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:59.185 15:25:51 nvmf_dif -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:59.185 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:59.185 15:25:51 nvmf_dif -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:59.185 15:25:51 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:59.185 [2024-07-25 15:25:51.250511] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:29:59.185 [2024-07-25 15:25:51.250567] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:59.185 EAL: No free 2048 kB hugepages reported on node 1 00:29:59.185 [2024-07-25 15:25:51.317886] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:59.447 [2024-07-25 15:25:51.382083] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:59.447 [2024-07-25 15:25:51.382119] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:59.447 [2024-07-25 15:25:51.382127] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:59.447 [2024-07-25 15:25:51.382133] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:59.447 [2024-07-25 15:25:51.382139] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:59.447 [2024-07-25 15:25:51.382162] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:59.447 15:25:51 nvmf_dif -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:59.447 15:25:51 nvmf_dif -- common/autotest_common.sh@864 -- # return 0 00:29:59.447 15:25:51 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:59.447 15:25:51 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:59.447 15:25:51 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:59.447 15:25:51 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:59.447 15:25:51 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:29:59.447 15:25:51 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:29:59.447 15:25:51 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:59.447 15:25:51 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:59.447 [2024-07-25 15:25:51.519193] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:59.447 15:25:51 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:59.447 15:25:51 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:29:59.447 15:25:51 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:29:59.447 15:25:51 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:59.447 15:25:51 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:59.447 ************************************ 00:29:59.447 START TEST fio_dif_1_default 00:29:59.447 ************************************ 00:29:59.447 15:25:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1125 -- # fio_dif_1 00:29:59.447 15:25:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:29:59.447 15:25:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:29:59.447 15:25:51 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@30 -- # for sub in "$@" 00:29:59.447 15:25:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:29:59.447 15:25:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:29:59.447 15:25:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:29:59.447 15:25:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:59.447 15:25:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:59.447 bdev_null0 00:29:59.447 15:25:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:59.447 15:25:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:29:59.447 15:25:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:59.447 15:25:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:59.447 15:25:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:59.447 15:25:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:29:59.447 15:25:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:59.447 15:25:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:59.447 15:25:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:59.447 15:25:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:59.447 15:25:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:59.447 15:25:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:59.447 [2024-07-25 15:25:51.607551] 
tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:59.447 15:25:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:59.447 15:25:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:29:59.447 15:25:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:29:59.447 15:25:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:29:59.447 15:25:51 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:29:59.447 15:25:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:59.447 15:25:51 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:29:59.447 15:25:51 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:59.447 15:25:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:59.447 15:25:51 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:59.447 { 00:29:59.447 "params": { 00:29:59.447 "name": "Nvme$subsystem", 00:29:59.447 "trtype": "$TEST_TRANSPORT", 00:29:59.447 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:59.447 "adrfam": "ipv4", 00:29:59.447 "trsvcid": "$NVMF_PORT", 00:29:59.447 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:59.447 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:59.447 "hdgst": ${hdgst:-false}, 00:29:59.447 "ddgst": ${ddgst:-false} 00:29:59.447 }, 00:29:59.447 "method": "bdev_nvme_attach_controller" 00:29:59.447 } 00:29:59.447 EOF 00:29:59.447 )") 00:29:59.447 15:25:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:29:59.447 15:25:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 
00:29:59.448 15:25:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:59.448 15:25:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:29:59.448 15:25:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:29:59.448 15:25:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:29:59.448 15:25:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:59.448 15:25:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:29:59.448 15:25:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:29:59.448 15:25:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:59.448 15:25:51 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:29:59.448 15:25:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:59.448 15:25:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:29:59.448 15:25:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:29:59.448 15:25:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:29:59.448 15:25:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:59.448 15:25:51 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 
00:29:59.448 15:25:51 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:29:59.448 15:25:51 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:59.448 "params": { 00:29:59.448 "name": "Nvme0", 00:29:59.448 "trtype": "tcp", 00:29:59.448 "traddr": "10.0.0.2", 00:29:59.448 "adrfam": "ipv4", 00:29:59.448 "trsvcid": "4420", 00:29:59.448 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:59.448 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:59.448 "hdgst": false, 00:29:59.448 "ddgst": false 00:29:59.448 }, 00:29:59.448 "method": "bdev_nvme_attach_controller" 00:29:59.448 }' 00:29:59.751 15:25:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:59.751 15:25:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:59.751 15:25:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:59.751 15:25:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:59.751 15:25:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:29:59.751 15:25:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:59.751 15:25:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:59.751 15:25:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:59.751 15:25:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:29:59.751 15:25:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:00.019 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:30:00.019 fio-3.35 
00:30:00.019 Starting 1 thread 00:30:00.019 EAL: No free 2048 kB hugepages reported on node 1 00:30:12.248 00:30:12.248 filename0: (groupid=0, jobs=1): err= 0: pid=451748: Thu Jul 25 15:26:02 2024 00:30:12.248 read: IOPS=95, BW=381KiB/s (390kB/s)(3824KiB/10042msec) 00:30:12.248 slat (nsec): min=5367, max=33815, avg=6162.54, stdev=1663.51 00:30:12.248 clat (usec): min=41851, max=44110, avg=41996.13, stdev=154.97 00:30:12.248 lat (usec): min=41859, max=44143, avg=42002.30, stdev=155.58 00:30:12.248 clat percentiles (usec): 00:30:12.248 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[42206], 00:30:12.248 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:30:12.248 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:30:12.248 | 99.00th=[42206], 99.50th=[43254], 99.90th=[44303], 99.95th=[44303], 00:30:12.248 | 99.99th=[44303] 00:30:12.248 bw ( KiB/s): min= 352, max= 384, per=99.79%, avg=380.80, stdev= 9.85, samples=20 00:30:12.248 iops : min= 88, max= 96, avg=95.20, stdev= 2.46, samples=20 00:30:12.248 lat (msec) : 50=100.00% 00:30:12.248 cpu : usr=95.75%, sys=4.05%, ctx=10, majf=0, minf=231 00:30:12.248 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:12.248 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:12.248 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:12.248 issued rwts: total=956,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:12.248 latency : target=0, window=0, percentile=100.00%, depth=4 00:30:12.248 00:30:12.248 Run status group 0 (all jobs): 00:30:12.248 READ: bw=381KiB/s (390kB/s), 381KiB/s-381KiB/s (390kB/s-390kB/s), io=3824KiB (3916kB), run=10042-10042msec 00:30:12.248 15:26:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:30:12.248 15:26:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:30:12.248 15:26:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub 
in "$@" 00:30:12.248 15:26:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:12.248 15:26:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:30:12.248 15:26:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:12.248 15:26:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:12.248 15:26:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:12.248 15:26:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:12.248 15:26:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:12.248 15:26:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:12.248 15:26:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:12.248 15:26:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:12.248 00:30:12.248 real 0m11.236s 00:30:12.248 user 0m25.454s 00:30:12.248 sys 0m0.685s 00:30:12.248 15:26:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:12.248 15:26:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:12.248 ************************************ 00:30:12.248 END TEST fio_dif_1_default 00:30:12.248 ************************************ 00:30:12.248 15:26:02 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:30:12.248 15:26:02 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:30:12.248 15:26:02 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:12.248 15:26:02 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:12.248 ************************************ 00:30:12.248 START TEST fio_dif_1_multi_subsystems 00:30:12.248 ************************************ 00:30:12.248 15:26:02 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1125 -- # fio_dif_1_multi_subsystems 00:30:12.248 15:26:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:30:12.248 15:26:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:30:12.248 15:26:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:30:12.248 15:26:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:30:12.248 15:26:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:30:12.248 15:26:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:30:12.248 15:26:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:30:12.248 15:26:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:12.248 15:26:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:12.248 bdev_null0 00:30:12.248 15:26:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:12.248 15:26:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:12.248 15:26:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:12.248 15:26:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:12.248 15:26:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:12.248 15:26:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:12.248 15:26:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:12.248 15:26:02 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:12.248 15:26:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:12.248 15:26:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:12.248 15:26:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:12.248 15:26:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:12.248 [2024-07-25 15:26:02.921281] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:12.248 15:26:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:12.248 15:26:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:30:12.248 15:26:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:30:12.248 15:26:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:30:12.248 15:26:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:30:12.248 15:26:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:12.248 15:26:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:12.248 bdev_null1 00:30:12.248 15:26:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:12.248 15:26:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:30:12.248 15:26:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:12.248 15:26:02 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:30:12.248 15:26:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:12.248 15:26:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:30:12.248 15:26:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:12.248 15:26:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:12.248 15:26:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:12.248 15:26:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:12.248 15:26:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:12.248 15:26:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:12.248 15:26:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:12.248 15:26:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:30:12.248 15:26:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:30:12.248 15:26:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:30:12.248 15:26:02 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:30:12.248 15:26:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:12.248 15:26:02 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:30:12.248 15:26:02 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:12.248 15:26:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # 
fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:12.248 15:26:02 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:12.248 { 00:30:12.248 "params": { 00:30:12.248 "name": "Nvme$subsystem", 00:30:12.248 "trtype": "$TEST_TRANSPORT", 00:30:12.248 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:12.249 "adrfam": "ipv4", 00:30:12.249 "trsvcid": "$NVMF_PORT", 00:30:12.249 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:12.249 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:12.249 "hdgst": ${hdgst:-false}, 00:30:12.249 "ddgst": ${ddgst:-false} 00:30:12.249 }, 00:30:12.249 "method": "bdev_nvme_attach_controller" 00:30:12.249 } 00:30:12.249 EOF 00:30:12.249 )") 00:30:12.249 15:26:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:12.249 15:26:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:30:12.249 15:26:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:12.249 15:26:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:30:12.249 15:26:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:12.249 15:26:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:30:12.249 15:26:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:12.249 15:26:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:30:12.249 15:26:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:12.249 15:26:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 
00:30:12.249 15:26:02 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:30:12.249 15:26:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:12.249 15:26:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:30:12.249 15:26:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:30:12.249 15:26:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:30:12.249 15:26:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:12.249 15:26:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:30:12.249 15:26:02 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:12.249 15:26:02 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:12.249 { 00:30:12.249 "params": { 00:30:12.249 "name": "Nvme$subsystem", 00:30:12.249 "trtype": "$TEST_TRANSPORT", 00:30:12.249 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:12.249 "adrfam": "ipv4", 00:30:12.249 "trsvcid": "$NVMF_PORT", 00:30:12.249 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:12.249 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:12.249 "hdgst": ${hdgst:-false}, 00:30:12.249 "ddgst": ${ddgst:-false} 00:30:12.249 }, 00:30:12.249 "method": "bdev_nvme_attach_controller" 00:30:12.249 } 00:30:12.249 EOF 00:30:12.249 )") 00:30:12.249 15:26:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:30:12.249 15:26:02 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:30:12.249 15:26:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:30:12.249 15:26:02 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
00:30:12.249 15:26:02 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:30:12.249 15:26:02 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:12.249 "params": { 00:30:12.249 "name": "Nvme0", 00:30:12.249 "trtype": "tcp", 00:30:12.249 "traddr": "10.0.0.2", 00:30:12.249 "adrfam": "ipv4", 00:30:12.249 "trsvcid": "4420", 00:30:12.249 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:12.249 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:12.249 "hdgst": false, 00:30:12.249 "ddgst": false 00:30:12.249 }, 00:30:12.249 "method": "bdev_nvme_attach_controller" 00:30:12.249 },{ 00:30:12.249 "params": { 00:30:12.249 "name": "Nvme1", 00:30:12.249 "trtype": "tcp", 00:30:12.249 "traddr": "10.0.0.2", 00:30:12.249 "adrfam": "ipv4", 00:30:12.249 "trsvcid": "4420", 00:30:12.249 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:12.249 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:12.249 "hdgst": false, 00:30:12.249 "ddgst": false 00:30:12.249 }, 00:30:12.249 "method": "bdev_nvme_attach_controller" 00:30:12.249 }' 00:30:12.249 15:26:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:12.249 15:26:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:12.249 15:26:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:12.249 15:26:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:12.249 15:26:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:12.249 15:26:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:12.249 15:26:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:12.249 15:26:03 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:12.249 15:26:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:12.249 15:26:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:12.249 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:30:12.249 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:30:12.249 fio-3.35 00:30:12.249 Starting 2 threads 00:30:12.249 EAL: No free 2048 kB hugepages reported on node 1 00:30:22.245 00:30:22.245 filename0: (groupid=0, jobs=1): err= 0: pid=454149: Thu Jul 25 15:26:14 2024 00:30:22.245 read: IOPS=95, BW=381KiB/s (390kB/s)(3824KiB/10040msec) 00:30:22.245 slat (nsec): min=5384, max=55098, avg=6504.26, stdev=2955.57 00:30:22.245 clat (usec): min=41827, max=43488, avg=41988.41, stdev=109.53 00:30:22.245 lat (usec): min=41844, max=43525, avg=41994.91, stdev=110.12 00:30:22.245 clat percentiles (usec): 00:30:22.245 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[42206], 00:30:22.245 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:30:22.245 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:30:22.245 | 99.00th=[42206], 99.50th=[42206], 99.90th=[43254], 99.95th=[43254], 00:30:22.245 | 99.99th=[43254] 00:30:22.245 bw ( KiB/s): min= 352, max= 384, per=49.89%, avg=380.80, stdev= 9.85, samples=20 00:30:22.245 iops : min= 88, max= 96, avg=95.20, stdev= 2.46, samples=20 00:30:22.245 lat (msec) : 50=100.00% 00:30:22.245 cpu : usr=97.09%, sys=2.71%, ctx=14, majf=0, minf=200 00:30:22.245 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:22.245 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:30:22.245 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:22.245 issued rwts: total=956,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:22.245 latency : target=0, window=0, percentile=100.00%, depth=4 00:30:22.245 filename1: (groupid=0, jobs=1): err= 0: pid=454151: Thu Jul 25 15:26:14 2024 00:30:22.245 read: IOPS=95, BW=381KiB/s (390kB/s)(3824KiB/10042msec) 00:30:22.245 slat (nsec): min=5374, max=36470, avg=6435.83, stdev=2467.35 00:30:22.245 clat (usec): min=41171, max=43481, avg=41997.18, stdev=159.91 00:30:22.245 lat (usec): min=41177, max=43517, avg=42003.61, stdev=160.74 00:30:22.245 clat percentiles (usec): 00:30:22.245 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[42206], 00:30:22.245 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:30:22.245 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:30:22.245 | 99.00th=[42730], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:30:22.245 | 99.99th=[43254] 00:30:22.245 bw ( KiB/s): min= 352, max= 384, per=49.89%, avg=380.80, stdev= 9.85, samples=20 00:30:22.245 iops : min= 88, max= 96, avg=95.20, stdev= 2.46, samples=20 00:30:22.245 lat (msec) : 50=100.00% 00:30:22.245 cpu : usr=96.78%, sys=3.02%, ctx=14, majf=0, minf=70 00:30:22.245 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:22.245 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:22.245 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:22.245 issued rwts: total=956,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:22.245 latency : target=0, window=0, percentile=100.00%, depth=4 00:30:22.245 00:30:22.245 Run status group 0 (all jobs): 00:30:22.245 READ: bw=762KiB/s (780kB/s), 381KiB/s-381KiB/s (390kB/s-390kB/s), io=7648KiB (7832kB), run=10040-10042msec 00:30:22.245 15:26:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:30:22.245 
15:26:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:30:22.245 15:26:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:30:22.245 15:26:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:22.245 15:26:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:30:22.245 15:26:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:22.245 15:26:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:22.245 15:26:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:22.245 15:26:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:22.246 15:26:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:22.246 15:26:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:22.246 15:26:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:22.246 15:26:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:22.246 15:26:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:30:22.246 15:26:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:30:22.246 15:26:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:30:22.246 15:26:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:22.246 15:26:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:22.246 15:26:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:22.246 15:26:14 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:22.246 15:26:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:30:22.246 15:26:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:22.246 15:26:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:22.246 15:26:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:22.246 00:30:22.246 real 0m11.460s 00:30:22.246 user 0m34.635s 00:30:22.246 sys 0m0.954s 00:30:22.246 15:26:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:22.246 15:26:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:22.246 ************************************ 00:30:22.246 END TEST fio_dif_1_multi_subsystems 00:30:22.246 ************************************ 00:30:22.246 15:26:14 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:30:22.246 15:26:14 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:30:22.246 15:26:14 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:22.246 15:26:14 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:22.246 ************************************ 00:30:22.246 START TEST fio_dif_rand_params 00:30:22.246 ************************************ 00:30:22.246 15:26:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1125 -- # fio_dif_rand_params 00:30:22.246 15:26:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:30:22.246 15:26:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:30:22.246 15:26:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:30:22.246 15:26:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:30:22.246 15:26:14 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@103 -- # numjobs=3 00:30:22.246 15:26:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:30:22.246 15:26:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:30:22.246 15:26:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:30:22.246 15:26:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:30:22.246 15:26:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:22.246 15:26:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:30:22.246 15:26:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:30:22.246 15:26:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:30:22.246 15:26:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:22.246 15:26:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:22.246 bdev_null0 00:30:22.246 15:26:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:22.246 15:26:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:22.246 15:26:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:22.246 15:26:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:22.544 15:26:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:22.544 15:26:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:22.544 15:26:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:22.544 15:26:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:22.544 15:26:14 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:22.544 15:26:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:22.544 15:26:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:22.544 15:26:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:22.544 [2024-07-25 15:26:14.463168] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:22.544 15:26:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:22.544 15:26:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:30:22.544 15:26:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:30:22.544 15:26:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:30:22.544 15:26:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:30:22.544 15:26:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:22.544 15:26:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:30:22.544 15:26:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:22.544 15:26:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:22.544 15:26:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:30:22.544 15:26:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:22.544 15:26:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:22.544 { 00:30:22.544 "params": { 
00:30:22.544 "name": "Nvme$subsystem", 00:30:22.544 "trtype": "$TEST_TRANSPORT", 00:30:22.544 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:22.544 "adrfam": "ipv4", 00:30:22.544 "trsvcid": "$NVMF_PORT", 00:30:22.544 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:22.544 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:22.544 "hdgst": ${hdgst:-false}, 00:30:22.544 "ddgst": ${ddgst:-false} 00:30:22.544 }, 00:30:22.544 "method": "bdev_nvme_attach_controller" 00:30:22.544 } 00:30:22.544 EOF 00:30:22.544 )") 00:30:22.544 15:26:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:22.544 15:26:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:30:22.544 15:26:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:22.544 15:26:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:30:22.544 15:26:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:22.544 15:26:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:30:22.544 15:26:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:22.544 15:26:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:22.544 15:26:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:22.544 15:26:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:22.544 15:26:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:30:22.544 15:26:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:30:22.544 15:26:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:22.544 15:26:14 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:22.544 15:26:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:30:22.544 15:26:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:30:22.544 15:26:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:22.544 "params": { 00:30:22.544 "name": "Nvme0", 00:30:22.544 "trtype": "tcp", 00:30:22.544 "traddr": "10.0.0.2", 00:30:22.544 "adrfam": "ipv4", 00:30:22.544 "trsvcid": "4420", 00:30:22.544 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:22.544 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:22.544 "hdgst": false, 00:30:22.544 "ddgst": false 00:30:22.544 }, 00:30:22.544 "method": "bdev_nvme_attach_controller" 00:30:22.544 }' 00:30:22.544 15:26:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:22.544 15:26:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:22.544 15:26:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:22.544 15:26:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:22.545 15:26:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:22.545 15:26:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:22.545 15:26:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:22.545 15:26:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:22.545 15:26:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:22.545 15:26:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev 
--spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:22.811 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:30:22.811 ... 00:30:22.811 fio-3.35 00:30:22.811 Starting 3 threads 00:30:22.811 EAL: No free 2048 kB hugepages reported on node 1 00:30:29.387 00:30:29.387 filename0: (groupid=0, jobs=1): err= 0: pid=456469: Thu Jul 25 15:26:20 2024 00:30:29.387 read: IOPS=116, BW=14.6MiB/s (15.3MB/s)(73.5MiB/5047msec) 00:30:29.387 slat (nsec): min=5386, max=43638, avg=7469.18, stdev=2305.01 00:30:29.387 clat (usec): min=6038, max=98495, avg=25656.63, stdev=21363.60 00:30:29.387 lat (usec): min=6044, max=98501, avg=25664.10, stdev=21363.77 00:30:29.387 clat percentiles (usec): 00:30:29.387 | 1.00th=[ 6652], 5.00th=[ 8029], 10.00th=[ 8586], 20.00th=[ 9896], 00:30:29.387 | 30.00th=[10814], 40.00th=[11338], 50.00th=[11863], 60.00th=[13566], 00:30:29.388 | 70.00th=[51119], 80.00th=[53216], 90.00th=[54789], 95.00th=[56361], 00:30:29.388 | 99.00th=[91751], 99.50th=[94897], 99.90th=[98042], 99.95th=[98042], 00:30:29.388 | 99.99th=[98042] 00:30:29.388 bw ( KiB/s): min= 9984, max=26368, per=30.09%, avg=15001.60, stdev=4801.77, samples=10 00:30:29.388 iops : min= 78, max= 206, avg=117.20, stdev=37.51, samples=10 00:30:29.388 lat (msec) : 10=21.09%, 20=45.41%, 50=1.36%, 100=32.14% 00:30:29.388 cpu : usr=96.39%, sys=3.27%, ctx=9, majf=0, minf=125 00:30:29.388 IO depths : 1=4.6%, 2=95.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:29.388 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:29.388 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:29.388 issued rwts: total=588,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:29.388 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:29.388 filename0: (groupid=0, jobs=1): err= 0: pid=456470: Thu Jul 25 15:26:20 2024 00:30:29.388 read: IOPS=184, BW=23.0MiB/s (24.1MB/s)(115MiB/5006msec) 00:30:29.388 slat 
(nsec): min=5394, max=32346, avg=7096.63, stdev=1640.65 00:30:29.388 clat (usec): min=6348, max=95694, avg=16275.30, stdev=16312.71 00:30:29.388 lat (usec): min=6356, max=95700, avg=16282.40, stdev=16312.93 00:30:29.388 clat percentiles (usec): 00:30:29.388 | 1.00th=[ 6783], 5.00th=[ 7439], 10.00th=[ 7767], 20.00th=[ 8225], 00:30:29.388 | 30.00th=[ 8586], 40.00th=[ 9110], 50.00th=[ 9765], 60.00th=[10421], 00:30:29.388 | 70.00th=[11338], 80.00th=[12911], 90.00th=[51643], 95.00th=[54789], 00:30:29.388 | 99.00th=[56886], 99.50th=[91751], 99.90th=[95945], 99.95th=[95945], 00:30:29.388 | 99.99th=[95945] 00:30:29.388 bw ( KiB/s): min=11520, max=33536, per=47.18%, avg=23526.40, stdev=7611.86, samples=10 00:30:29.388 iops : min= 90, max= 262, avg=183.80, stdev=59.47, samples=10 00:30:29.388 lat (msec) : 10=53.80%, 20=31.45%, 50=1.95%, 100=12.80% 00:30:29.388 cpu : usr=95.70%, sys=3.94%, ctx=12, majf=0, minf=123 00:30:29.388 IO depths : 1=2.9%, 2=97.1%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:29.388 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:29.388 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:29.388 issued rwts: total=922,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:29.388 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:29.388 filename0: (groupid=0, jobs=1): err= 0: pid=456471: Thu Jul 25 15:26:20 2024 00:30:29.388 read: IOPS=91, BW=11.4MiB/s (11.9MB/s)(57.0MiB/5008msec) 00:30:29.388 slat (nsec): min=5403, max=33498, avg=7187.07, stdev=1888.66 00:30:29.388 clat (msec): min=7, max=101, avg=32.93, stdev=23.70 00:30:29.388 lat (msec): min=7, max=101, avg=32.94, stdev=23.70 00:30:29.388 clat percentiles (msec): 00:30:29.388 | 1.00th=[ 9], 5.00th=[ 9], 10.00th=[ 10], 20.00th=[ 11], 00:30:29.388 | 30.00th=[ 13], 40.00th=[ 16], 50.00th=[ 19], 60.00th=[ 53], 00:30:29.388 | 70.00th=[ 56], 80.00th=[ 57], 90.00th=[ 59], 95.00th=[ 61], 00:30:29.388 | 99.00th=[ 101], 99.50th=[ 102], 99.90th=[ 
102], 99.95th=[ 102], 00:30:29.388 | 99.99th=[ 102] 00:30:29.388 bw ( KiB/s): min= 6144, max=15360, per=23.26%, avg=11596.80, stdev=2520.01, samples=10 00:30:29.388 iops : min= 48, max= 120, avg=90.60, stdev=19.69, samples=10 00:30:29.388 lat (msec) : 10=13.16%, 20=41.67%, 50=2.85%, 100=41.23%, 250=1.10% 00:30:29.388 cpu : usr=97.08%, sys=2.62%, ctx=10, majf=0, minf=74 00:30:29.388 IO depths : 1=6.8%, 2=93.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:29.388 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:29.388 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:29.388 issued rwts: total=456,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:29.388 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:29.388 00:30:29.388 Run status group 0 (all jobs): 00:30:29.388 READ: bw=48.7MiB/s (51.1MB/s), 11.4MiB/s-23.0MiB/s (11.9MB/s-24.1MB/s), io=246MiB (258MB), run=5006-5047msec 00:30:29.388 15:26:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:30:29.388 15:26:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:30:29.388 15:26:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:29.388 15:26:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:29.388 15:26:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:30:29.388 15:26:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:29.388 15:26:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:29.388 15:26:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:29.388 15:26:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:29.388 15:26:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:29.388 15:26:20 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:29.388 15:26:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:29.388 15:26:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:29.388 15:26:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:30:29.388 15:26:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:30:29.388 15:26:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:30:29.388 15:26:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:30:29.388 15:26:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:30:29.388 15:26:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:30:29.388 15:26:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:30:29.388 15:26:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:30:29.388 15:26:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:29.388 15:26:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:30:29.388 15:26:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:30:29.388 15:26:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:30:29.388 15:26:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:29.388 15:26:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:29.388 bdev_null0 00:30:29.388 15:26:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:29.388 15:26:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:29.388 15:26:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:30:29.388 15:26:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:29.388 15:26:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:29.388 15:26:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:29.388 15:26:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:29.388 15:26:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:29.388 15:26:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:29.388 15:26:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:29.388 15:26:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:29.388 15:26:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:29.388 [2024-07-25 15:26:20.585125] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:29.388 15:26:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:29.388 15:26:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:29.388 15:26:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:30:29.388 15:26:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:30:29.388 15:26:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:30:29.388 15:26:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:29.388 15:26:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:29.388 bdev_null1 00:30:29.388 15:26:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:30:29.388 15:26:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:30:29.388 15:26:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:29.388 15:26:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:29.388 15:26:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:29.388 15:26:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:30:29.388 15:26:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:29.388 15:26:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:29.388 15:26:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:29.388 15:26:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:29.388 15:26:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:29.388 15:26:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:29.388 15:26:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:29.388 15:26:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:29.388 15:26:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:30:29.388 15:26:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:30:29.388 15:26:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:30:29.388 15:26:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:29.388 15:26:20 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:30:29.388 bdev_null2 00:30:29.389 15:26:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:29.389 15:26:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:30:29.389 15:26:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:29.389 15:26:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:29.389 15:26:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:29.389 15:26:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:30:29.389 15:26:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:29.389 15:26:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:29.389 15:26:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:29.389 15:26:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:30:29.389 15:26:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:29.389 15:26:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:29.389 15:26:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:29.389 15:26:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:30:29.389 15:26:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:30:29.389 15:26:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:30:29.389 15:26:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:30:29.389 15:26:20 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:29.389 15:26:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:30:29.389 15:26:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:29.389 15:26:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:29.389 15:26:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:29.389 { 00:30:29.389 "params": { 00:30:29.389 "name": "Nvme$subsystem", 00:30:29.389 "trtype": "$TEST_TRANSPORT", 00:30:29.389 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:29.389 "adrfam": "ipv4", 00:30:29.389 "trsvcid": "$NVMF_PORT", 00:30:29.389 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:29.389 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:29.389 "hdgst": ${hdgst:-false}, 00:30:29.389 "ddgst": ${ddgst:-false} 00:30:29.389 }, 00:30:29.389 "method": "bdev_nvme_attach_controller" 00:30:29.389 } 00:30:29.389 EOF 00:30:29.389 )") 00:30:29.389 15:26:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:29.389 15:26:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:30:29.389 15:26:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:29.389 15:26:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:30:29.389 15:26:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:29.389 15:26:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:29.389 15:26:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:30:29.389 15:26:20 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:30:29.389 15:26:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:29.389 15:26:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:29.389 15:26:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:29.389 15:26:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:29.389 15:26:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:30:29.389 15:26:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:30:29.389 15:26:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:29.389 15:26:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:29.389 15:26:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:30:29.389 15:26:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:29.389 15:26:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:29.389 { 00:30:29.389 "params": { 00:30:29.389 "name": "Nvme$subsystem", 00:30:29.389 "trtype": "$TEST_TRANSPORT", 00:30:29.389 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:29.389 "adrfam": "ipv4", 00:30:29.389 "trsvcid": "$NVMF_PORT", 00:30:29.389 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:29.389 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:29.389 "hdgst": ${hdgst:-false}, 00:30:29.389 "ddgst": ${ddgst:-false} 00:30:29.389 }, 00:30:29.389 "method": "bdev_nvme_attach_controller" 00:30:29.389 } 00:30:29.389 EOF 00:30:29.389 )") 00:30:29.389 15:26:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:30:29.389 15:26:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:29.389 15:26:20 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:29.389 15:26:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:30:29.389 15:26:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:30:29.389 15:26:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:29.389 15:26:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:29.389 15:26:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:29.389 { 00:30:29.389 "params": { 00:30:29.389 "name": "Nvme$subsystem", 00:30:29.389 "trtype": "$TEST_TRANSPORT", 00:30:29.389 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:29.389 "adrfam": "ipv4", 00:30:29.389 "trsvcid": "$NVMF_PORT", 00:30:29.389 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:29.389 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:29.389 "hdgst": ${hdgst:-false}, 00:30:29.389 "ddgst": ${ddgst:-false} 00:30:29.389 }, 00:30:29.389 "method": "bdev_nvme_attach_controller" 00:30:29.389 } 00:30:29.389 EOF 00:30:29.389 )") 00:30:29.389 15:26:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:29.389 15:26:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:30:29.389 15:26:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:30:29.389 15:26:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:29.389 "params": { 00:30:29.389 "name": "Nvme0", 00:30:29.389 "trtype": "tcp", 00:30:29.389 "traddr": "10.0.0.2", 00:30:29.389 "adrfam": "ipv4", 00:30:29.389 "trsvcid": "4420", 00:30:29.389 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:29.389 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:29.389 "hdgst": false, 00:30:29.389 "ddgst": false 00:30:29.389 }, 00:30:29.389 "method": "bdev_nvme_attach_controller" 00:30:29.389 },{ 00:30:29.389 "params": { 00:30:29.389 "name": "Nvme1", 00:30:29.389 "trtype": "tcp", 00:30:29.389 "traddr": "10.0.0.2", 00:30:29.389 "adrfam": "ipv4", 00:30:29.389 "trsvcid": "4420", 00:30:29.389 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:29.389 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:29.389 "hdgst": false, 00:30:29.389 "ddgst": false 00:30:29.389 }, 00:30:29.389 "method": "bdev_nvme_attach_controller" 00:30:29.389 },{ 00:30:29.389 "params": { 00:30:29.389 "name": "Nvme2", 00:30:29.389 "trtype": "tcp", 00:30:29.389 "traddr": "10.0.0.2", 00:30:29.389 "adrfam": "ipv4", 00:30:29.389 "trsvcid": "4420", 00:30:29.389 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:30:29.389 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:30:29.389 "hdgst": false, 00:30:29.389 "ddgst": false 00:30:29.389 }, 00:30:29.389 "method": "bdev_nvme_attach_controller" 00:30:29.389 }' 00:30:29.389 15:26:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:29.389 15:26:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:29.389 15:26:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:29.389 15:26:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:29.389 15:26:20 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:29.389 15:26:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:29.389 15:26:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:29.389 15:26:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:29.389 15:26:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:29.389 15:26:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:29.390 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:30:29.390 ... 00:30:29.390 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:30:29.390 ... 00:30:29.390 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:30:29.390 ... 
00:30:29.390 fio-3.35 00:30:29.390 Starting 24 threads 00:30:29.390 EAL: No free 2048 kB hugepages reported on node 1 00:30:41.626 00:30:41.626 filename0: (groupid=0, jobs=1): err= 0: pid=457828: Thu Jul 25 15:26:32 2024 00:30:41.626 read: IOPS=519, BW=2077KiB/s (2127kB/s)(20.3MiB/10022msec) 00:30:41.626 slat (usec): min=5, max=113, avg=19.29, stdev=16.19 00:30:41.626 clat (usec): min=3567, max=39117, avg=30654.79, stdev=4335.75 00:30:41.626 lat (usec): min=3592, max=39156, avg=30674.09, stdev=4338.10 00:30:41.626 clat percentiles (usec): 00:30:41.626 | 1.00th=[10683], 5.00th=[20317], 10.00th=[29754], 20.00th=[31065], 00:30:41.626 | 30.00th=[31327], 40.00th=[31589], 50.00th=[31851], 60.00th=[32113], 00:30:41.626 | 70.00th=[32113], 80.00th=[32375], 90.00th=[32637], 95.00th=[33162], 00:30:41.626 | 99.00th=[33817], 99.50th=[33817], 99.90th=[38011], 99.95th=[38011], 00:30:41.626 | 99.99th=[39060] 00:30:41.626 bw ( KiB/s): min= 1916, max= 3232, per=4.36%, avg=2075.00, stdev=281.26, samples=20 00:30:41.626 iops : min= 479, max= 808, avg=518.75, stdev=70.31, samples=20 00:30:41.626 lat (msec) : 4=0.31%, 10=0.56%, 20=4.13%, 50=95.00% 00:30:41.626 cpu : usr=99.11%, sys=0.58%, ctx=57, majf=0, minf=28 00:30:41.626 IO depths : 1=5.6%, 2=11.4%, 4=23.2%, 8=52.8%, 16=7.0%, 32=0.0%, >=64=0.0% 00:30:41.626 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:41.626 complete : 0=0.0%, 4=93.6%, 8=0.6%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:41.626 issued rwts: total=5204,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:41.626 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:41.626 filename0: (groupid=0, jobs=1): err= 0: pid=457829: Thu Jul 25 15:26:32 2024 00:30:41.626 read: IOPS=500, BW=2002KiB/s (2050kB/s)(19.6MiB/10016msec) 00:30:41.626 slat (nsec): min=5559, max=73508, avg=10552.97, stdev=7831.40 00:30:41.626 clat (usec): min=14416, max=52620, avg=31883.13, stdev=1793.74 00:30:41.626 lat (usec): min=14421, max=52627, avg=31893.68, 
stdev=1793.55 00:30:41.626 clat percentiles (usec): 00:30:41.626 | 1.00th=[23987], 5.00th=[30540], 10.00th=[31065], 20.00th=[31589], 00:30:41.626 | 30.00th=[31589], 40.00th=[31851], 50.00th=[32113], 60.00th=[32113], 00:30:41.626 | 70.00th=[32113], 80.00th=[32375], 90.00th=[32900], 95.00th=[33162], 00:30:41.626 | 99.00th=[34341], 99.50th=[38536], 99.90th=[50070], 99.95th=[50070], 00:30:41.626 | 99.99th=[52691] 00:30:41.626 bw ( KiB/s): min= 1920, max= 2048, per=4.21%, avg=2002.00, stdev=60.00, samples=19 00:30:41.626 iops : min= 480, max= 512, avg=500.42, stdev=14.95, samples=19 00:30:41.626 lat (msec) : 20=0.56%, 50=99.40%, 100=0.04% 00:30:41.626 cpu : usr=99.35%, sys=0.38%, ctx=10, majf=0, minf=46 00:30:41.626 IO depths : 1=6.0%, 2=12.2%, 4=24.8%, 8=50.4%, 16=6.5%, 32=0.0%, >=64=0.0% 00:30:41.626 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:41.626 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:41.626 issued rwts: total=5012,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:41.626 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:41.626 filename0: (groupid=0, jobs=1): err= 0: pid=457830: Thu Jul 25 15:26:32 2024 00:30:41.626 read: IOPS=509, BW=2039KiB/s (2088kB/s)(19.9MiB/10017msec) 00:30:41.626 slat (usec): min=5, max=211, avg=12.93, stdev=10.83 00:30:41.626 clat (usec): min=12585, max=56296, avg=31278.30, stdev=5412.88 00:30:41.626 lat (usec): min=12614, max=56303, avg=31291.23, stdev=5414.23 00:30:41.626 clat percentiles (usec): 00:30:41.626 | 1.00th=[17433], 5.00th=[21103], 10.00th=[23200], 20.00th=[30540], 00:30:41.626 | 30.00th=[31327], 40.00th=[31851], 50.00th=[31851], 60.00th=[32113], 00:30:41.626 | 70.00th=[32375], 80.00th=[32637], 90.00th=[33817], 95.00th=[40109], 00:30:41.626 | 99.00th=[52167], 99.50th=[53216], 99.90th=[56361], 99.95th=[56361], 00:30:41.626 | 99.99th=[56361] 00:30:41.626 bw ( KiB/s): min= 1872, max= 2224, per=4.28%, avg=2037.75, stdev=104.81, samples=20 00:30:41.626 
iops : min= 468, max= 556, avg=509.40, stdev=26.18, samples=20 00:30:41.626 lat (msec) : 20=3.49%, 50=94.65%, 100=1.86% 00:30:41.626 cpu : usr=96.04%, sys=1.96%, ctx=116, majf=0, minf=55 00:30:41.626 IO depths : 1=2.8%, 2=6.5%, 4=17.9%, 8=62.8%, 16=10.0%, 32=0.0%, >=64=0.0% 00:30:41.626 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:41.626 complete : 0=0.0%, 4=92.3%, 8=2.2%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:41.626 issued rwts: total=5107,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:41.626 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:41.626 filename0: (groupid=0, jobs=1): err= 0: pid=457831: Thu Jul 25 15:26:32 2024 00:30:41.626 read: IOPS=494, BW=1977KiB/s (2025kB/s)(19.3MiB/10005msec) 00:30:41.626 slat (nsec): min=5538, max=89333, avg=16000.94, stdev=12161.46 00:30:41.626 clat (usec): min=14258, max=64632, avg=32253.32, stdev=4590.57 00:30:41.626 lat (usec): min=14281, max=64654, avg=32269.32, stdev=4591.47 00:30:41.626 clat percentiles (usec): 00:30:41.626 | 1.00th=[17695], 5.00th=[23987], 10.00th=[30278], 20.00th=[31327], 00:30:41.626 | 30.00th=[31589], 40.00th=[31851], 50.00th=[32113], 60.00th=[32113], 00:30:41.626 | 70.00th=[32375], 80.00th=[32900], 90.00th=[33817], 95.00th=[40633], 00:30:41.626 | 99.00th=[51119], 99.50th=[56361], 99.90th=[59507], 99.95th=[59507], 00:30:41.626 | 99.99th=[64750] 00:30:41.626 bw ( KiB/s): min= 1795, max= 2096, per=4.15%, avg=1974.63, stdev=82.88, samples=19 00:30:41.626 iops : min= 448, max= 524, avg=493.58, stdev=20.77, samples=19 00:30:41.626 lat (msec) : 20=1.50%, 50=97.33%, 100=1.17% 00:30:41.626 cpu : usr=98.87%, sys=0.74%, ctx=86, majf=0, minf=42 00:30:41.626 IO depths : 1=2.3%, 2=6.0%, 4=17.6%, 8=63.5%, 16=10.5%, 32=0.0%, >=64=0.0% 00:30:41.626 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:41.626 complete : 0=0.0%, 4=92.3%, 8=2.3%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:41.626 issued rwts: total=4946,0,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:30:41.626 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:41.626 filename0: (groupid=0, jobs=1): err= 0: pid=457832: Thu Jul 25 15:26:32 2024 00:30:41.626 read: IOPS=490, BW=1962KiB/s (2009kB/s)(19.2MiB/10008msec) 00:30:41.626 slat (nsec): min=5539, max=97343, avg=16919.41, stdev=12642.57 00:30:41.626 clat (usec): min=9690, max=63026, avg=32506.07, stdev=3537.21 00:30:41.626 lat (usec): min=9696, max=63033, avg=32522.99, stdev=3536.95 00:30:41.626 clat percentiles (usec): 00:30:41.626 | 1.00th=[22414], 5.00th=[30540], 10.00th=[31065], 20.00th=[31589], 00:30:41.626 | 30.00th=[31851], 40.00th=[31851], 50.00th=[32113], 60.00th=[32113], 00:30:41.626 | 70.00th=[32375], 80.00th=[32637], 90.00th=[33424], 95.00th=[39060], 00:30:41.627 | 99.00th=[48497], 99.50th=[51643], 99.90th=[61604], 99.95th=[61604], 00:30:41.627 | 99.99th=[63177] 00:30:41.627 bw ( KiB/s): min= 1740, max= 2048, per=4.10%, avg=1951.26, stdev=93.16, samples=19 00:30:41.627 iops : min= 435, max= 512, avg=487.74, stdev=23.32, samples=19 00:30:41.627 lat (msec) : 10=0.12%, 20=0.20%, 50=98.82%, 100=0.86% 00:30:41.627 cpu : usr=99.04%, sys=0.58%, ctx=56, majf=0, minf=40 00:30:41.627 IO depths : 1=1.8%, 2=3.7%, 4=12.5%, 8=70.8%, 16=11.2%, 32=0.0%, >=64=0.0% 00:30:41.627 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:41.627 complete : 0=0.0%, 4=90.8%, 8=4.0%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:41.627 issued rwts: total=4908,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:41.627 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:41.627 filename0: (groupid=0, jobs=1): err= 0: pid=457833: Thu Jul 25 15:26:32 2024 00:30:41.627 read: IOPS=504, BW=2019KiB/s (2068kB/s)(19.8MiB/10015msec) 00:30:41.627 slat (nsec): min=5423, max=98524, avg=15547.15, stdev=13252.96 00:30:41.627 clat (usec): min=8445, max=33927, avg=31565.36, stdev=2569.23 00:30:41.627 lat (usec): min=8453, max=33936, avg=31580.91, stdev=2569.72 00:30:41.627 clat percentiles 
(usec): 00:30:41.627 | 1.00th=[16909], 5.00th=[30278], 10.00th=[30802], 20.00th=[31327], 00:30:41.627 | 30.00th=[31589], 40.00th=[31851], 50.00th=[32113], 60.00th=[32113], 00:30:41.627 | 70.00th=[32113], 80.00th=[32375], 90.00th=[32900], 95.00th=[33162], 00:30:41.627 | 99.00th=[33424], 99.50th=[33817], 99.90th=[33817], 99.95th=[33817], 00:30:41.627 | 99.99th=[33817] 00:30:41.627 bw ( KiB/s): min= 1920, max= 2176, per=4.23%, avg=2015.75, stdev=70.30, samples=20 00:30:41.627 iops : min= 480, max= 544, avg=503.90, stdev=17.56, samples=20 00:30:41.627 lat (msec) : 10=0.57%, 20=1.01%, 50=98.42% 00:30:41.627 cpu : usr=99.03%, sys=0.66%, ctx=15, majf=0, minf=30 00:30:41.627 IO depths : 1=6.1%, 2=12.3%, 4=24.9%, 8=50.3%, 16=6.4%, 32=0.0%, >=64=0.0% 00:30:41.627 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:41.627 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:41.627 issued rwts: total=5056,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:41.627 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:41.627 filename0: (groupid=0, jobs=1): err= 0: pid=457834: Thu Jul 25 15:26:32 2024 00:30:41.627 read: IOPS=470, BW=1884KiB/s (1929kB/s)(18.4MiB/10022msec) 00:30:41.627 slat (nsec): min=5543, max=85606, avg=15854.67, stdev=12054.75 00:30:41.627 clat (usec): min=14360, max=60442, avg=33851.92, stdev=5905.55 00:30:41.627 lat (usec): min=14389, max=60448, avg=33867.77, stdev=5905.10 00:30:41.627 clat percentiles (usec): 00:30:41.627 | 1.00th=[20055], 5.00th=[24773], 10.00th=[29754], 20.00th=[31327], 00:30:41.627 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32113], 60.00th=[32637], 00:30:41.627 | 70.00th=[33162], 80.00th=[38011], 90.00th=[43254], 95.00th=[45876], 00:30:41.627 | 99.00th=[51643], 99.50th=[53216], 99.90th=[54789], 99.95th=[60556], 00:30:41.627 | 99.99th=[60556] 00:30:41.627 bw ( KiB/s): min= 1728, max= 2048, per=3.96%, avg=1885.55, stdev=81.20, samples=20 00:30:41.627 iops : min= 432, max= 512, 
avg=471.35, stdev=20.35, samples=20 00:30:41.627 lat (msec) : 20=0.70%, 50=97.33%, 100=1.97% 00:30:41.627 cpu : usr=98.88%, sys=0.79%, ctx=16, majf=0, minf=55 00:30:41.627 IO depths : 1=1.1%, 2=2.6%, 4=11.2%, 8=71.4%, 16=13.8%, 32=0.0%, >=64=0.0% 00:30:41.627 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:41.627 complete : 0=0.0%, 4=91.3%, 8=5.0%, 16=3.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:41.627 issued rwts: total=4720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:41.627 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:41.627 filename0: (groupid=0, jobs=1): err= 0: pid=457835: Thu Jul 25 15:26:32 2024 00:30:41.627 read: IOPS=508, BW=2035KiB/s (2084kB/s)(19.9MiB/10028msec) 00:30:41.627 slat (usec): min=3, max=107, avg= 9.22, stdev= 7.56 00:30:41.627 clat (usec): min=3685, max=54353, avg=31368.43, stdev=3545.41 00:30:41.627 lat (usec): min=3692, max=54361, avg=31377.65, stdev=3545.79 00:30:41.627 clat percentiles (usec): 00:30:41.627 | 1.00th=[12125], 5.00th=[30016], 10.00th=[30802], 20.00th=[31327], 00:30:41.627 | 30.00th=[31851], 40.00th=[31851], 50.00th=[32113], 60.00th=[32113], 00:30:41.627 | 70.00th=[32375], 80.00th=[32637], 90.00th=[32900], 95.00th=[33162], 00:30:41.627 | 99.00th=[33817], 99.50th=[33817], 99.90th=[44827], 99.95th=[44827], 00:30:41.627 | 99.99th=[54264] 00:30:41.627 bw ( KiB/s): min= 1920, max= 2432, per=4.27%, avg=2033.90, stdev=123.57, samples=20 00:30:41.627 iops : min= 480, max= 608, avg=508.40, stdev=30.85, samples=20 00:30:41.627 lat (msec) : 4=0.22%, 10=0.65%, 20=2.70%, 50=96.39%, 100=0.04% 00:30:41.627 cpu : usr=97.97%, sys=1.07%, ctx=40, majf=0, minf=54 00:30:41.627 IO depths : 1=6.1%, 2=12.2%, 4=24.6%, 8=50.6%, 16=6.5%, 32=0.0%, >=64=0.0% 00:30:41.627 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:41.627 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:41.627 issued rwts: total=5102,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:41.627 
latency : target=0, window=0, percentile=100.00%, depth=16 00:30:41.627 filename1: (groupid=0, jobs=1): err= 0: pid=457837: Thu Jul 25 15:26:32 2024 00:30:41.627 read: IOPS=490, BW=1962KiB/s (2009kB/s)(19.2MiB/10011msec) 00:30:41.627 slat (nsec): min=5550, max=99421, avg=16471.83, stdev=12770.27 00:30:41.627 clat (usec): min=16542, max=55733, avg=32509.51, stdev=4994.24 00:30:41.627 lat (usec): min=16548, max=55756, avg=32525.98, stdev=4994.05 00:30:41.627 clat percentiles (usec): 00:30:41.627 | 1.00th=[17695], 5.00th=[24249], 10.00th=[30278], 20.00th=[31327], 00:30:41.627 | 30.00th=[31589], 40.00th=[31851], 50.00th=[32113], 60.00th=[32113], 00:30:41.627 | 70.00th=[32375], 80.00th=[32900], 90.00th=[38011], 95.00th=[43779], 00:30:41.627 | 99.00th=[48497], 99.50th=[51119], 99.90th=[54264], 99.95th=[55837], 00:30:41.627 | 99.99th=[55837] 00:30:41.627 bw ( KiB/s): min= 1792, max= 2056, per=4.12%, avg=1959.53, stdev=60.91, samples=19 00:30:41.627 iops : min= 448, max= 514, avg=489.84, stdev=15.27, samples=19 00:30:41.627 lat (msec) : 20=2.59%, 50=96.46%, 100=0.96% 00:30:41.627 cpu : usr=98.82%, sys=0.80%, ctx=82, majf=0, minf=30 00:30:41.627 IO depths : 1=2.3%, 2=5.1%, 4=14.0%, 8=66.1%, 16=12.5%, 32=0.0%, >=64=0.0% 00:30:41.627 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:41.627 complete : 0=0.0%, 4=91.9%, 8=4.5%, 16=3.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:41.627 issued rwts: total=4910,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:41.627 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:41.627 filename1: (groupid=0, jobs=1): err= 0: pid=457838: Thu Jul 25 15:26:32 2024 00:30:41.627 read: IOPS=491, BW=1966KiB/s (2013kB/s)(19.2MiB/10009msec) 00:30:41.627 slat (usec): min=5, max=114, avg=17.21, stdev=15.03 00:30:41.627 clat (usec): min=16163, max=62220, avg=32408.24, stdev=3148.66 00:30:41.627 lat (usec): min=16170, max=62229, avg=32425.45, stdev=3148.09 00:30:41.627 clat percentiles (usec): 00:30:41.627 | 1.00th=[22676], 
5.00th=[30540], 10.00th=[31065], 20.00th=[31589], 00:30:41.627 | 30.00th=[31851], 40.00th=[31851], 50.00th=[32113], 60.00th=[32113], 00:30:41.627 | 70.00th=[32375], 80.00th=[32637], 90.00th=[33162], 95.00th=[35390], 00:30:41.627 | 99.00th=[45351], 99.50th=[50594], 99.90th=[56361], 99.95th=[58459], 00:30:41.627 | 99.99th=[62129] 00:30:41.627 bw ( KiB/s): min= 1792, max= 2048, per=4.12%, avg=1960.37, stdev=68.54, samples=19 00:30:41.627 iops : min= 448, max= 512, avg=490.05, stdev=17.09, samples=19 00:30:41.627 lat (msec) : 20=0.20%, 50=99.11%, 100=0.69% 00:30:41.627 cpu : usr=98.93%, sys=0.70%, ctx=153, majf=0, minf=53 00:30:41.627 IO depths : 1=2.4%, 2=4.9%, 4=12.4%, 8=68.4%, 16=12.0%, 32=0.0%, >=64=0.0% 00:30:41.627 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:41.627 complete : 0=0.0%, 4=91.2%, 8=4.8%, 16=4.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:41.627 issued rwts: total=4920,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:41.627 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:41.627 filename1: (groupid=0, jobs=1): err= 0: pid=457839: Thu Jul 25 15:26:32 2024 00:30:41.627 read: IOPS=501, BW=2005KiB/s (2053kB/s)(19.6MiB/10022msec) 00:30:41.627 slat (nsec): min=4412, max=90584, avg=17529.15, stdev=13832.13 00:30:41.627 clat (usec): min=13786, max=34176, avg=31764.34, stdev=1651.04 00:30:41.627 lat (usec): min=13799, max=34187, avg=31781.87, stdev=1651.55 00:30:41.627 clat percentiles (usec): 00:30:41.627 | 1.00th=[23200], 5.00th=[30278], 10.00th=[31065], 20.00th=[31327], 00:30:41.627 | 30.00th=[31589], 40.00th=[31851], 50.00th=[31851], 60.00th=[32113], 00:30:41.627 | 70.00th=[32113], 80.00th=[32375], 90.00th=[32900], 95.00th=[33162], 00:30:41.627 | 99.00th=[33817], 99.50th=[33817], 99.90th=[34341], 99.95th=[34341], 00:30:41.627 | 99.99th=[34341] 00:30:41.627 bw ( KiB/s): min= 1920, max= 2048, per=4.21%, avg=2003.10, stdev=62.25, samples=20 00:30:41.627 iops : min= 480, max= 512, avg=500.70, stdev=15.59, samples=20 
00:30:41.627 lat (msec) : 20=0.96%, 50=99.04% 00:30:41.627 cpu : usr=95.97%, sys=1.87%, ctx=70, majf=0, minf=59 00:30:41.627 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:30:41.627 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:41.627 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:41.627 issued rwts: total=5024,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:41.627 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:41.627 filename1: (groupid=0, jobs=1): err= 0: pid=457840: Thu Jul 25 15:26:32 2024 00:30:41.627 read: IOPS=488, BW=1954KiB/s (2001kB/s)(19.1MiB/10005msec) 00:30:41.627 slat (nsec): min=5398, max=90965, avg=14635.30, stdev=13111.06 00:30:41.627 clat (usec): min=8691, max=63426, avg=32634.21, stdev=3822.57 00:30:41.627 lat (usec): min=8697, max=63436, avg=32648.85, stdev=3821.84 00:30:41.627 clat percentiles (usec): 00:30:41.627 | 1.00th=[22676], 5.00th=[30802], 10.00th=[31065], 20.00th=[31589], 00:30:41.627 | 30.00th=[31851], 40.00th=[31851], 50.00th=[32113], 60.00th=[32113], 00:30:41.627 | 70.00th=[32375], 80.00th=[32637], 90.00th=[33424], 95.00th=[40633], 00:30:41.627 | 99.00th=[47449], 99.50th=[55837], 99.90th=[63177], 99.95th=[63177], 00:30:41.627 | 99.99th=[63177] 00:30:41.627 bw ( KiB/s): min= 1752, max= 2048, per=4.09%, avg=1947.11, stdev=91.68, samples=19 00:30:41.627 iops : min= 438, max= 512, avg=486.74, stdev=22.88, samples=19 00:30:41.627 lat (msec) : 10=0.18%, 20=0.37%, 50=98.51%, 100=0.94% 00:30:41.628 cpu : usr=99.08%, sys=0.62%, ctx=18, majf=0, minf=41 00:30:41.628 IO depths : 1=2.1%, 2=4.3%, 4=11.1%, 8=71.6%, 16=10.9%, 32=0.0%, >=64=0.0% 00:30:41.628 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:41.628 complete : 0=0.0%, 4=90.8%, 8=3.9%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:41.628 issued rwts: total=4888,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:41.628 latency : target=0, window=0, 
percentile=100.00%, depth=16 00:30:41.628 filename1: (groupid=0, jobs=1): err= 0: pid=457841: Thu Jul 25 15:26:32 2024 00:30:41.628 read: IOPS=494, BW=1980KiB/s (2027kB/s)(19.4MiB/10019msec) 00:30:41.628 slat (usec): min=5, max=213, avg=20.32, stdev=15.22 00:30:41.628 clat (usec): min=14740, max=52329, avg=32155.77, stdev=3487.44 00:30:41.628 lat (usec): min=14762, max=52353, avg=32176.09, stdev=3486.10 00:30:41.628 clat percentiles (usec): 00:30:41.628 | 1.00th=[17957], 5.00th=[30016], 10.00th=[30802], 20.00th=[31327], 00:30:41.628 | 30.00th=[31589], 40.00th=[31851], 50.00th=[31851], 60.00th=[32113], 00:30:41.628 | 70.00th=[32113], 80.00th=[32637], 90.00th=[33162], 95.00th=[37487], 00:30:41.628 | 99.00th=[47973], 99.50th=[50594], 99.90th=[51643], 99.95th=[52167], 00:30:41.628 | 99.99th=[52167] 00:30:41.628 bw ( KiB/s): min= 1792, max= 2096, per=4.15%, avg=1977.20, stdev=74.11, samples=20 00:30:41.628 iops : min= 448, max= 524, avg=494.30, stdev=18.53, samples=20 00:30:41.628 lat (msec) : 20=1.23%, 50=98.21%, 100=0.56% 00:30:41.628 cpu : usr=91.18%, sys=4.15%, ctx=245, majf=0, minf=52 00:30:41.628 IO depths : 1=4.3%, 2=8.5%, 4=18.2%, 8=59.4%, 16=9.6%, 32=0.0%, >=64=0.0% 00:30:41.628 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:41.628 complete : 0=0.0%, 4=92.7%, 8=2.8%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:41.628 issued rwts: total=4959,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:41.628 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:41.628 filename1: (groupid=0, jobs=1): err= 0: pid=457842: Thu Jul 25 15:26:32 2024 00:30:41.628 read: IOPS=485, BW=1944KiB/s (1990kB/s)(19.0MiB/10006msec) 00:30:41.628 slat (usec): min=5, max=105, avg=18.56, stdev=13.92 00:30:41.628 clat (usec): min=5687, max=56526, avg=32791.12, stdev=4827.59 00:30:41.628 lat (usec): min=5693, max=56542, avg=32809.68, stdev=4827.07 00:30:41.628 clat percentiles (usec): 00:30:41.628 | 1.00th=[19006], 5.00th=[27919], 10.00th=[30802], 20.00th=[31327], 
00:30:41.628 | 30.00th=[31589], 40.00th=[31851], 50.00th=[32113], 60.00th=[32113], 00:30:41.628 | 70.00th=[32375], 80.00th=[32900], 90.00th=[38011], 95.00th=[42730], 00:30:41.628 | 99.00th=[52691], 99.50th=[54789], 99.90th=[56361], 99.95th=[56361], 00:30:41.628 | 99.99th=[56361] 00:30:41.628 bw ( KiB/s): min= 1776, max= 2048, per=4.07%, avg=1938.68, stdev=88.39, samples=19 00:30:41.628 iops : min= 444, max= 512, avg=484.63, stdev=22.05, samples=19 00:30:41.628 lat (msec) : 10=0.21%, 20=1.09%, 50=97.26%, 100=1.44% 00:30:41.628 cpu : usr=99.00%, sys=0.67%, ctx=18, majf=0, minf=38 00:30:41.628 IO depths : 1=1.5%, 2=4.8%, 4=15.4%, 8=66.3%, 16=12.0%, 32=0.0%, >=64=0.0% 00:30:41.628 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:41.628 complete : 0=0.0%, 4=92.0%, 8=3.3%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:41.628 issued rwts: total=4862,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:41.628 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:41.628 filename1: (groupid=0, jobs=1): err= 0: pid=457843: Thu Jul 25 15:26:32 2024 00:30:41.628 read: IOPS=464, BW=1858KiB/s (1903kB/s)(18.2MiB/10046msec) 00:30:41.628 slat (nsec): min=5506, max=95046, avg=16186.91, stdev=14132.88 00:30:41.628 clat (usec): min=12523, max=69597, avg=34280.58, stdev=6366.71 00:30:41.628 lat (usec): min=12529, max=69620, avg=34296.77, stdev=6365.46 00:30:41.628 clat percentiles (usec): 00:30:41.628 | 1.00th=[19268], 5.00th=[26084], 10.00th=[30540], 20.00th=[31589], 00:30:41.628 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32113], 60.00th=[32375], 00:30:41.628 | 70.00th=[33162], 80.00th=[39060], 90.00th=[43254], 95.00th=[46400], 00:30:41.628 | 99.00th=[54789], 99.50th=[55837], 99.90th=[69731], 99.95th=[69731], 00:30:41.628 | 99.99th=[69731] 00:30:41.628 bw ( KiB/s): min= 1640, max= 2048, per=3.91%, avg=1862.60, stdev=96.23, samples=20 00:30:41.628 iops : min= 410, max= 512, avg=465.65, stdev=24.06, samples=20 00:30:41.628 lat (msec) : 20=1.37%, 50=95.74%, 
100=2.89% 00:30:41.628 cpu : usr=98.93%, sys=0.75%, ctx=16, majf=0, minf=43 00:30:41.628 IO depths : 1=1.5%, 2=3.0%, 4=11.2%, 8=71.1%, 16=13.2%, 32=0.0%, >=64=0.0% 00:30:41.628 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:41.628 complete : 0=0.0%, 4=91.0%, 8=5.5%, 16=3.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:41.628 issued rwts: total=4667,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:41.628 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:41.628 filename1: (groupid=0, jobs=1): err= 0: pid=457844: Thu Jul 25 15:26:32 2024 00:30:41.628 read: IOPS=499, BW=1998KiB/s (2046kB/s)(19.6MiB/10021msec) 00:30:41.628 slat (nsec): min=5536, max=93723, avg=18064.38, stdev=14870.51 00:30:41.628 clat (usec): min=10790, max=54736, avg=31891.27, stdev=4155.23 00:30:41.628 lat (usec): min=10816, max=54764, avg=31909.33, stdev=4156.16 00:30:41.628 clat percentiles (usec): 00:30:41.628 | 1.00th=[19268], 5.00th=[23987], 10.00th=[30278], 20.00th=[31327], 00:30:41.628 | 30.00th=[31589], 40.00th=[31851], 50.00th=[31851], 60.00th=[32113], 00:30:41.628 | 70.00th=[32113], 80.00th=[32637], 90.00th=[33162], 95.00th=[36439], 00:30:41.628 | 99.00th=[50070], 99.50th=[52167], 99.90th=[54264], 99.95th=[54264], 00:30:41.628 | 99.99th=[54789] 00:30:41.628 bw ( KiB/s): min= 1788, max= 2128, per=4.19%, avg=1996.00, stdev=82.84, samples=20 00:30:41.628 iops : min= 447, max= 532, avg=499.00, stdev=20.71, samples=20 00:30:41.628 lat (msec) : 20=1.32%, 50=97.48%, 100=1.20% 00:30:41.628 cpu : usr=99.19%, sys=0.51%, ctx=35, majf=0, minf=38 00:30:41.628 IO depths : 1=2.5%, 2=7.6%, 4=21.3%, 8=58.1%, 16=10.5%, 32=0.0%, >=64=0.0% 00:30:41.628 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:41.628 complete : 0=0.0%, 4=93.4%, 8=1.4%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:41.628 issued rwts: total=5006,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:41.628 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:41.628 filename2: 
(groupid=0, jobs=1): err= 0: pid=457846: Thu Jul 25 15:26:32 2024 00:30:41.628 read: IOPS=508, BW=2033KiB/s (2082kB/s)(19.9MiB/10025msec) 00:30:41.628 slat (nsec): min=5538, max=79603, avg=11054.36, stdev=8823.47 00:30:41.628 clat (usec): min=11331, max=58063, avg=31367.89, stdev=5134.23 00:30:41.628 lat (usec): min=11342, max=58070, avg=31378.95, stdev=5134.53 00:30:41.628 clat percentiles (usec): 00:30:41.628 | 1.00th=[15401], 5.00th=[20841], 10.00th=[24773], 20.00th=[31065], 00:30:41.628 | 30.00th=[31589], 40.00th=[31851], 50.00th=[31851], 60.00th=[32113], 00:30:41.628 | 70.00th=[32113], 80.00th=[32637], 90.00th=[33162], 95.00th=[38536], 00:30:41.628 | 99.00th=[50070], 99.50th=[51643], 99.90th=[55837], 99.95th=[55837], 00:30:41.628 | 99.99th=[57934] 00:30:41.628 bw ( KiB/s): min= 1920, max= 2288, per=4.27%, avg=2034.40, stdev=101.76, samples=20 00:30:41.628 iops : min= 480, max= 572, avg=508.60, stdev=25.44, samples=20 00:30:41.628 lat (msec) : 20=4.26%, 50=94.72%, 100=1.02% 00:30:41.628 cpu : usr=99.19%, sys=0.49%, ctx=66, majf=0, minf=37 00:30:41.628 IO depths : 1=3.5%, 2=7.9%, 4=19.9%, 8=59.3%, 16=9.4%, 32=0.0%, >=64=0.0% 00:30:41.628 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:41.628 complete : 0=0.0%, 4=92.9%, 8=1.7%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:41.628 issued rwts: total=5096,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:41.628 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:41.628 filename2: (groupid=0, jobs=1): err= 0: pid=457847: Thu Jul 25 15:26:32 2024 00:30:41.628 read: IOPS=503, BW=2014KiB/s (2062kB/s)(19.7MiB/10019msec) 00:30:41.628 slat (usec): min=5, max=100, avg=16.93, stdev=14.15 00:30:41.628 clat (usec): min=12259, max=56421, avg=31649.34, stdev=3281.05 00:30:41.628 lat (usec): min=12268, max=56429, avg=31666.27, stdev=3282.08 00:30:41.628 clat percentiles (usec): 00:30:41.628 | 1.00th=[20579], 5.00th=[25035], 10.00th=[30278], 20.00th=[31327], 00:30:41.628 | 30.00th=[31589], 
40.00th=[31851], 50.00th=[31851], 60.00th=[32113], 00:30:41.628 | 70.00th=[32113], 80.00th=[32637], 90.00th=[32900], 95.00th=[33424], 00:30:41.628 | 99.00th=[42206], 99.50th=[47449], 99.90th=[56361], 99.95th=[56361], 00:30:41.628 | 99.99th=[56361] 00:30:41.628 bw ( KiB/s): min= 1904, max= 2176, per=4.22%, avg=2010.95, stdev=70.50, samples=20 00:30:41.628 iops : min= 476, max= 544, avg=502.70, stdev=17.61, samples=20 00:30:41.628 lat (msec) : 20=0.83%, 50=98.73%, 100=0.44% 00:30:41.628 cpu : usr=99.09%, sys=0.58%, ctx=32, majf=0, minf=42 00:30:41.628 IO depths : 1=3.1%, 2=8.7%, 4=23.1%, 8=55.5%, 16=9.6%, 32=0.0%, >=64=0.0% 00:30:41.628 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:41.628 complete : 0=0.0%, 4=93.8%, 8=0.6%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:41.628 issued rwts: total=5044,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:41.628 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:41.628 filename2: (groupid=0, jobs=1): err= 0: pid=457848: Thu Jul 25 15:26:32 2024 00:30:41.628 read: IOPS=508, BW=2034KiB/s (2083kB/s)(19.9MiB/10005msec) 00:30:41.628 slat (nsec): min=5435, max=93242, avg=17360.24, stdev=13297.23 00:30:41.628 clat (usec): min=5184, max=62457, avg=31321.27, stdev=3711.65 00:30:41.628 lat (usec): min=5190, max=62463, avg=31338.63, stdev=3713.49 00:30:41.628 clat percentiles (usec): 00:30:41.628 | 1.00th=[18220], 5.00th=[22938], 10.00th=[30278], 20.00th=[31327], 00:30:41.628 | 30.00th=[31589], 40.00th=[31851], 50.00th=[31851], 60.00th=[32113], 00:30:41.628 | 70.00th=[32113], 80.00th=[32375], 90.00th=[32900], 95.00th=[33162], 00:30:41.628 | 99.00th=[39584], 99.50th=[50070], 99.90th=[62653], 99.95th=[62653], 00:30:41.628 | 99.99th=[62653] 00:30:41.628 bw ( KiB/s): min= 1792, max= 2752, per=4.26%, avg=2029.53, stdev=189.46, samples=19 00:30:41.628 iops : min= 448, max= 688, avg=507.26, stdev=47.36, samples=19 00:30:41.628 lat (msec) : 10=0.20%, 20=1.81%, 50=97.44%, 100=0.55% 00:30:41.628 cpu : 
usr=99.10%, sys=0.59%, ctx=47, majf=0, minf=36 00:30:41.628 IO depths : 1=3.1%, 2=8.7%, 4=22.8%, 8=55.8%, 16=9.6%, 32=0.0%, >=64=0.0% 00:30:41.628 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:41.628 complete : 0=0.0%, 4=93.7%, 8=0.8%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:41.629 issued rwts: total=5088,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:41.629 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:41.629 filename2: (groupid=0, jobs=1): err= 0: pid=457849: Thu Jul 25 15:26:32 2024 00:30:41.629 read: IOPS=498, BW=1995KiB/s (2042kB/s)(19.5MiB/10011msec) 00:30:41.629 slat (nsec): min=5549, max=74021, avg=10371.51, stdev=7475.15 00:30:41.629 clat (usec): min=15688, max=48290, avg=31991.64, stdev=1600.96 00:30:41.629 lat (usec): min=15704, max=48304, avg=32002.02, stdev=1601.04 00:30:41.629 clat percentiles (usec): 00:30:41.629 | 1.00th=[29754], 5.00th=[30540], 10.00th=[31065], 20.00th=[31327], 00:30:41.629 | 30.00th=[31851], 40.00th=[31851], 50.00th=[32113], 60.00th=[32113], 00:30:41.629 | 70.00th=[32375], 80.00th=[32637], 90.00th=[32900], 95.00th=[33424], 00:30:41.629 | 99.00th=[34341], 99.50th=[35390], 99.90th=[47449], 99.95th=[47449], 00:30:41.629 | 99.99th=[48497] 00:30:41.629 bw ( KiB/s): min= 1792, max= 2048, per=4.18%, avg=1987.16, stdev=78.50, samples=19 00:30:41.629 iops : min= 448, max= 512, avg=496.79, stdev=19.63, samples=19 00:30:41.629 lat (msec) : 20=0.40%, 50=99.60% 00:30:41.629 cpu : usr=98.95%, sys=0.70%, ctx=89, majf=0, minf=48 00:30:41.629 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:30:41.629 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:41.629 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:41.629 issued rwts: total=4992,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:41.629 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:41.629 filename2: (groupid=0, jobs=1): err= 0: pid=457850: Thu Jul 
25 15:26:32 2024 00:30:41.629 read: IOPS=506, BW=2026KiB/s (2075kB/s)(19.8MiB/10006msec) 00:30:41.629 slat (nsec): min=5546, max=86401, avg=14516.02, stdev=11246.51 00:30:41.629 clat (usec): min=10489, max=54684, avg=31482.28, stdev=5671.61 00:30:41.629 lat (usec): min=10495, max=54691, avg=31496.79, stdev=5671.94 00:30:41.629 clat percentiles (usec): 00:30:41.629 | 1.00th=[17433], 5.00th=[21103], 10.00th=[23987], 20.00th=[30016], 00:30:41.629 | 30.00th=[31327], 40.00th=[31589], 50.00th=[31851], 60.00th=[32113], 00:30:41.629 | 70.00th=[32375], 80.00th=[32637], 90.00th=[35914], 95.00th=[41157], 00:30:41.629 | 99.00th=[52691], 99.50th=[53216], 99.90th=[53216], 99.95th=[54789], 00:30:41.629 | 99.99th=[54789] 00:30:41.629 bw ( KiB/s): min= 1792, max= 2219, per=4.23%, avg=2014.16, stdev=118.18, samples=19 00:30:41.629 iops : min= 448, max= 554, avg=503.42, stdev=29.41, samples=19 00:30:41.629 lat (msec) : 20=3.35%, 50=94.91%, 100=1.74% 00:30:41.629 cpu : usr=95.90%, sys=2.06%, ctx=59, majf=0, minf=36 00:30:41.629 IO depths : 1=2.2%, 2=5.1%, 4=14.1%, 8=67.3%, 16=11.2%, 32=0.0%, >=64=0.0% 00:30:41.629 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:41.629 complete : 0=0.0%, 4=91.5%, 8=3.7%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:41.629 issued rwts: total=5068,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:41.629 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:41.629 filename2: (groupid=0, jobs=1): err= 0: pid=457851: Thu Jul 25 15:26:32 2024 00:30:41.629 read: IOPS=500, BW=2000KiB/s (2048kB/s)(19.6MiB/10016msec) 00:30:41.629 slat (nsec): min=5196, max=80100, avg=20668.62, stdev=13995.37 00:30:41.629 clat (usec): min=13790, max=45439, avg=31810.81, stdev=1477.48 00:30:41.629 lat (usec): min=13800, max=45463, avg=31831.48, stdev=1478.20 00:30:41.629 clat percentiles (usec): 00:30:41.629 | 1.00th=[26346], 5.00th=[30278], 10.00th=[30802], 20.00th=[31327], 00:30:41.629 | 30.00th=[31589], 40.00th=[31851], 50.00th=[31851], 
60.00th=[32113], 00:30:41.629 | 70.00th=[32113], 80.00th=[32375], 90.00th=[32900], 95.00th=[33162], 00:30:41.629 | 99.00th=[33817], 99.50th=[35914], 99.90th=[43254], 99.95th=[44827], 00:30:41.629 | 99.99th=[45351] 00:30:41.629 bw ( KiB/s): min= 1920, max= 2048, per=4.19%, avg=1996.70, stdev=63.95, samples=20 00:30:41.629 iops : min= 480, max= 512, avg=499.10, stdev=16.01, samples=20 00:30:41.629 lat (msec) : 20=0.40%, 50=99.60% 00:30:41.629 cpu : usr=99.13%, sys=0.56%, ctx=57, majf=0, minf=34 00:30:41.629 IO depths : 1=5.0%, 2=11.2%, 4=25.0%, 8=51.3%, 16=7.5%, 32=0.0%, >=64=0.0% 00:30:41.629 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:41.629 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:41.629 issued rwts: total=5008,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:41.629 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:41.629 filename2: (groupid=0, jobs=1): err= 0: pid=457852: Thu Jul 25 15:26:32 2024 00:30:41.629 read: IOPS=499, BW=2000KiB/s (2048kB/s)(19.6MiB/10018msec) 00:30:41.629 slat (nsec): min=5561, max=84256, avg=19628.09, stdev=14001.81 00:30:41.629 clat (usec): min=18859, max=34119, avg=31830.10, stdev=1111.35 00:30:41.629 lat (usec): min=18868, max=34128, avg=31849.72, stdev=1111.11 00:30:41.629 clat percentiles (usec): 00:30:41.629 | 1.00th=[29230], 5.00th=[30540], 10.00th=[31065], 20.00th=[31327], 00:30:41.629 | 30.00th=[31589], 40.00th=[31851], 50.00th=[31851], 60.00th=[32113], 00:30:41.629 | 70.00th=[32113], 80.00th=[32375], 90.00th=[32900], 95.00th=[33162], 00:30:41.629 | 99.00th=[33817], 99.50th=[33817], 99.90th=[33817], 99.95th=[33817], 00:30:41.629 | 99.99th=[34341] 00:30:41.629 bw ( KiB/s): min= 1920, max= 2048, per=4.19%, avg=1996.70, stdev=63.95, samples=20 00:30:41.629 iops : min= 480, max= 512, avg=499.10, stdev=16.01, samples=20 00:30:41.629 lat (msec) : 20=0.32%, 50=99.68% 00:30:41.629 cpu : usr=99.30%, sys=0.43%, ctx=15, majf=0, minf=31 00:30:41.629 IO depths : 
1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:30:41.629 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:41.629 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:41.629 issued rwts: total=5008,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:41.629 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:41.629 filename2: (groupid=0, jobs=1): err= 0: pid=457854: Thu Jul 25 15:26:32 2024 00:30:41.629 read: IOPS=490, BW=1963KiB/s (2010kB/s)(19.2MiB/10006msec) 00:30:41.629 slat (usec): min=5, max=105, avg=14.85, stdev=13.60 00:30:41.629 clat (usec): min=9537, max=63313, avg=32478.79, stdev=3506.12 00:30:41.629 lat (usec): min=9542, max=63320, avg=32493.64, stdev=3505.38 00:30:41.629 clat percentiles (usec): 00:30:41.629 | 1.00th=[23462], 5.00th=[30540], 10.00th=[31065], 20.00th=[31589], 00:30:41.629 | 30.00th=[31851], 40.00th=[31851], 50.00th=[32113], 60.00th=[32113], 00:30:41.629 | 70.00th=[32375], 80.00th=[32637], 90.00th=[33424], 95.00th=[36963], 00:30:41.629 | 99.00th=[45351], 99.50th=[55837], 99.90th=[63177], 99.95th=[63177], 00:30:41.629 | 99.99th=[63177] 00:30:41.629 bw ( KiB/s): min= 1708, max= 2048, per=4.11%, avg=1956.79, stdev=95.56, samples=19 00:30:41.629 iops : min= 427, max= 512, avg=489.16, stdev=23.85, samples=19 00:30:41.629 lat (msec) : 10=0.12%, 20=0.41%, 50=98.68%, 100=0.79% 00:30:41.629 cpu : usr=99.14%, sys=0.55%, ctx=13, majf=0, minf=28 00:30:41.629 IO depths : 1=2.2%, 2=4.3%, 4=12.9%, 8=70.0%, 16=10.6%, 32=0.0%, >=64=0.0% 00:30:41.629 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:41.629 complete : 0=0.0%, 4=90.7%, 8=3.8%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:41.629 issued rwts: total=4911,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:41.629 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:41.629 00:30:41.629 Run status group 0 (all jobs): 00:30:41.629 READ: bw=46.5MiB/s (48.7MB/s), 1858KiB/s-2077KiB/s 
(1903kB/s-2127kB/s), io=467MiB (489MB), run=10005-10046msec 00:30:41.629 15:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:30:41.629 15:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:30:41.629 15:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:41.629 15:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:41.629 15:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:30:41.629 15:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:41.629 15:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:41.629 15:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:41.629 15:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:41.629 15:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:41.629 15:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:41.629 15:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:41.629 15:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:41.629 15:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:41.629 15:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:30:41.629 15:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:30:41.629 15:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:41.629 15:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:41.629 15:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:41.629 
15:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:41.629 15:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:30:41.629 15:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:41.629 15:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:41.629 15:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:41.629 15:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:41.629 15:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:30:41.629 15:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:30:41.629 15:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:30:41.629 15:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:41.629 15:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:41.629 15:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:41.629 15:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:30:41.629 15:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:41.629 15:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:41.630 15:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:41.630 15:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:30:41.630 15:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:30:41.630 15:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:30:41.630 15:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:30:41.630 
15:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:30:41.630 15:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:30:41.630 15:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:30:41.630 15:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:30:41.630 15:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:41.630 15:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:30:41.630 15:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:30:41.630 15:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:30:41.630 15:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:41.630 15:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:41.630 bdev_null0 00:30:41.630 15:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:41.630 15:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:41.630 15:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:41.630 15:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:41.630 15:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:41.630 15:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:41.630 15:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:41.630 15:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:41.630 15:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:30:41.630 15:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:41.630 15:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:41.630 15:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:41.630 [2024-07-25 15:26:32.323534] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:41.630 15:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:41.630 15:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:41.630 15:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:30:41.630 15:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:30:41.630 15:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:30:41.630 15:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:41.630 15:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:41.630 bdev_null1 00:30:41.630 15:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:41.630 15:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:30:41.630 15:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:41.630 15:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:41.630 15:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:41.630 15:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 
00:30:41.630 15:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:41.630 15:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:41.630 15:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:41.630 15:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:41.630 15:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:41.630 15:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:41.630 15:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:41.630 15:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:30:41.630 15:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:30:41.630 15:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:30:41.630 15:26:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:30:41.630 15:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:41.630 15:26:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:30:41.630 15:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:41.630 15:26:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:41.630 15:26:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:41.630 { 00:30:41.630 "params": { 00:30:41.630 "name": "Nvme$subsystem", 00:30:41.630 "trtype": "$TEST_TRANSPORT", 00:30:41.630 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:30:41.630 "adrfam": "ipv4", 00:30:41.630 "trsvcid": "$NVMF_PORT", 00:30:41.630 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:41.630 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:41.630 "hdgst": ${hdgst:-false}, 00:30:41.630 "ddgst": ${ddgst:-false} 00:30:41.630 }, 00:30:41.630 "method": "bdev_nvme_attach_controller" 00:30:41.630 } 00:30:41.630 EOF 00:30:41.630 )") 00:30:41.630 15:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:41.630 15:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:30:41.630 15:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:41.630 15:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:30:41.630 15:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:41.630 15:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:30:41.630 15:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:41.630 15:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:30:41.630 15:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:41.630 15:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:41.630 15:26:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:41.630 15:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:41.630 15:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:30:41.630 15:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:30:41.630 15:26:32 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:41.630 15:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:41.630 15:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:30:41.630 15:26:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:41.630 15:26:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:41.630 { 00:30:41.630 "params": { 00:30:41.630 "name": "Nvme$subsystem", 00:30:41.630 "trtype": "$TEST_TRANSPORT", 00:30:41.630 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:41.630 "adrfam": "ipv4", 00:30:41.630 "trsvcid": "$NVMF_PORT", 00:30:41.630 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:41.630 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:41.630 "hdgst": ${hdgst:-false}, 00:30:41.630 "ddgst": ${ddgst:-false} 00:30:41.630 }, 00:30:41.630 "method": "bdev_nvme_attach_controller" 00:30:41.630 } 00:30:41.630 EOF 00:30:41.630 )") 00:30:41.630 15:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:30:41.630 15:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:41.630 15:26:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:41.630 15:26:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:30:41.630 15:26:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:30:41.630 15:26:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:41.630 "params": { 00:30:41.630 "name": "Nvme0", 00:30:41.630 "trtype": "tcp", 00:30:41.630 "traddr": "10.0.0.2", 00:30:41.630 "adrfam": "ipv4", 00:30:41.630 "trsvcid": "4420", 00:30:41.630 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:41.630 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:41.630 "hdgst": false, 00:30:41.630 "ddgst": false 00:30:41.630 }, 00:30:41.630 "method": "bdev_nvme_attach_controller" 00:30:41.630 },{ 00:30:41.630 "params": { 00:30:41.630 "name": "Nvme1", 00:30:41.630 "trtype": "tcp", 00:30:41.630 "traddr": "10.0.0.2", 00:30:41.630 "adrfam": "ipv4", 00:30:41.630 "trsvcid": "4420", 00:30:41.630 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:41.630 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:41.630 "hdgst": false, 00:30:41.630 "ddgst": false 00:30:41.630 }, 00:30:41.630 "method": "bdev_nvme_attach_controller" 00:30:41.630 }' 00:30:41.630 15:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:41.630 15:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:41.630 15:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:41.631 15:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:41.631 15:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:41.631 15:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:41.631 15:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:41.631 15:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:41.631 15:26:32 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:41.631 15:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:41.631 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:30:41.631 ... 00:30:41.631 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:30:41.631 ... 00:30:41.631 fio-3.35 00:30:41.631 Starting 4 threads 00:30:41.631 EAL: No free 2048 kB hugepages reported on node 1 00:30:46.956 00:30:46.956 filename0: (groupid=0, jobs=1): err= 0: pid=460180: Thu Jul 25 15:26:38 2024 00:30:46.956 read: IOPS=1857, BW=14.5MiB/s (15.2MB/s)(72.6MiB/5003msec) 00:30:46.956 slat (nsec): min=5356, max=27720, avg=6037.65, stdev=2022.41 00:30:46.956 clat (usec): min=2017, max=48143, avg=4291.97, stdev=1499.17 00:30:46.956 lat (usec): min=2022, max=48169, avg=4298.01, stdev=1499.34 00:30:46.956 clat percentiles (usec): 00:30:46.956 | 1.00th=[ 2671], 5.00th=[ 3097], 10.00th=[ 3261], 20.00th=[ 3589], 00:30:46.956 | 30.00th=[ 3818], 40.00th=[ 4047], 50.00th=[ 4228], 60.00th=[ 4424], 00:30:46.956 | 70.00th=[ 4621], 80.00th=[ 4883], 90.00th=[ 5276], 95.00th=[ 5604], 00:30:46.956 | 99.00th=[ 6259], 99.50th=[ 6390], 99.90th=[ 7635], 99.95th=[47973], 00:30:46.956 | 99.99th=[47973] 00:30:46.956 bw ( KiB/s): min=13675, max=15456, per=22.70%, avg=14842.11, stdev=529.80, samples=9 00:30:46.956 iops : min= 1709, max= 1932, avg=1855.22, stdev=66.33, samples=9 00:30:46.956 lat (msec) : 4=38.32%, 10=61.60%, 50=0.09% 00:30:46.956 cpu : usr=97.66%, sys=2.02%, ctx=9, majf=0, minf=59 00:30:46.956 IO depths : 1=0.2%, 2=1.9%, 4=66.8%, 8=31.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:46.956 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:30:46.956 complete : 0=0.0%, 4=94.8%, 8=5.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:46.956 issued rwts: total=9291,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:46.956 latency : target=0, window=0, percentile=100.00%, depth=8 00:30:46.956 filename0: (groupid=0, jobs=1): err= 0: pid=460181: Thu Jul 25 15:26:38 2024 00:30:46.956 read: IOPS=1645, BW=12.9MiB/s (13.5MB/s)(64.3MiB/5004msec) 00:30:46.956 slat (nsec): min=5354, max=28624, avg=6264.75, stdev=2333.71 00:30:46.956 clat (usec): min=2634, max=48575, avg=4844.33, stdev=2429.41 00:30:46.956 lat (usec): min=2640, max=48601, avg=4850.59, stdev=2429.52 00:30:46.956 clat percentiles (usec): 00:30:46.956 | 1.00th=[ 2999], 5.00th=[ 3392], 10.00th=[ 3621], 20.00th=[ 3982], 00:30:46.956 | 30.00th=[ 4228], 40.00th=[ 4490], 50.00th=[ 4686], 60.00th=[ 4883], 00:30:46.956 | 70.00th=[ 5145], 80.00th=[ 5473], 90.00th=[ 5866], 95.00th=[ 6194], 00:30:46.956 | 99.00th=[ 7111], 99.50th=[ 7635], 99.90th=[46400], 99.95th=[48497], 00:30:46.956 | 99.99th=[48497] 00:30:46.956 bw ( KiB/s): min=11280, max=13808, per=20.09%, avg=13137.78, stdev=747.49, samples=9 00:30:46.956 iops : min= 1410, max= 1726, avg=1642.22, stdev=93.44, samples=9 00:30:46.956 lat (msec) : 4=20.05%, 10=79.63%, 20=0.02%, 50=0.29% 00:30:46.956 cpu : usr=96.54%, sys=2.56%, ctx=302, majf=0, minf=42 00:30:46.956 IO depths : 1=0.2%, 2=1.3%, 4=68.0%, 8=30.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:46.956 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:46.956 complete : 0=0.0%, 4=94.2%, 8=5.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:46.956 issued rwts: total=8234,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:46.956 latency : target=0, window=0, percentile=100.00%, depth=8 00:30:46.956 filename1: (groupid=0, jobs=1): err= 0: pid=460182: Thu Jul 25 15:26:38 2024 00:30:46.956 read: IOPS=2695, BW=21.1MiB/s (22.1MB/s)(105MiB/5003msec) 00:30:46.956 slat (nsec): min=5356, max=28954, avg=7103.86, stdev=2031.38 00:30:46.956 clat (usec): 
min=681, max=6276, avg=2945.33, stdev=580.61 00:30:46.956 lat (usec): min=689, max=6282, avg=2952.44, stdev=580.63 00:30:46.956 clat percentiles (usec): 00:30:46.956 | 1.00th=[ 1532], 5.00th=[ 1991], 10.00th=[ 2245], 20.00th=[ 2507], 00:30:46.956 | 30.00th=[ 2671], 40.00th=[ 2835], 50.00th=[ 2933], 60.00th=[ 3064], 00:30:46.956 | 70.00th=[ 3195], 80.00th=[ 3392], 90.00th=[ 3654], 95.00th=[ 3916], 00:30:46.956 | 99.00th=[ 4490], 99.50th=[ 4817], 99.90th=[ 5145], 99.95th=[ 5342], 00:30:46.956 | 99.99th=[ 6259] 00:30:46.956 bw ( KiB/s): min=20944, max=22304, per=32.96%, avg=21550.22, stdev=363.84, samples=9 00:30:46.957 iops : min= 2618, max= 2788, avg=2693.78, stdev=45.48, samples=9 00:30:46.957 lat (usec) : 750=0.01%, 1000=0.10% 00:30:46.957 lat (msec) : 2=4.92%, 4=91.06%, 10=3.91% 00:30:46.957 cpu : usr=97.24%, sys=2.42%, ctx=5, majf=0, minf=48 00:30:46.957 IO depths : 1=0.5%, 2=5.2%, 4=66.3%, 8=28.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:46.957 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:46.957 complete : 0=0.0%, 4=92.6%, 8=7.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:46.957 issued rwts: total=13488,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:46.957 latency : target=0, window=0, percentile=100.00%, depth=8 00:30:46.957 filename1: (groupid=0, jobs=1): err= 0: pid=460183: Thu Jul 25 15:26:38 2024 00:30:46.957 read: IOPS=1976, BW=15.4MiB/s (16.2MB/s)(77.2MiB/5002msec) 00:30:46.957 slat (nsec): min=2782, max=27148, avg=5902.52, stdev=1525.53 00:30:46.957 clat (usec): min=1853, max=6666, avg=4030.81, stdev=724.94 00:30:46.957 lat (usec): min=1858, max=6672, avg=4036.71, stdev=724.92 00:30:46.957 clat percentiles (usec): 00:30:46.957 | 1.00th=[ 2409], 5.00th=[ 2868], 10.00th=[ 3130], 20.00th=[ 3425], 00:30:46.957 | 30.00th=[ 3654], 40.00th=[ 3818], 50.00th=[ 4015], 60.00th=[ 4178], 00:30:46.957 | 70.00th=[ 4424], 80.00th=[ 4621], 90.00th=[ 4948], 95.00th=[ 5276], 00:30:46.957 | 99.00th=[ 5800], 99.50th=[ 6063], 99.90th=[ 6325], 99.95th=[ 
6390], 00:30:46.957 | 99.99th=[ 6652] 00:30:46.957 bw ( KiB/s): min=15072, max=17101, per=24.24%, avg=15848.56, stdev=552.84, samples=9 00:30:46.957 iops : min= 1884, max= 2137, avg=1981.00, stdev=68.93, samples=9 00:30:46.957 lat (msec) : 2=0.17%, 4=48.92%, 10=50.91% 00:30:46.957 cpu : usr=97.18%, sys=2.54%, ctx=10, majf=0, minf=20 00:30:46.957 IO depths : 1=0.1%, 2=1.2%, 4=69.2%, 8=29.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:46.957 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:46.957 complete : 0=0.0%, 4=93.6%, 8=6.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:46.957 issued rwts: total=9887,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:46.957 latency : target=0, window=0, percentile=100.00%, depth=8 00:30:46.957 00:30:46.957 Run status group 0 (all jobs): 00:30:46.957 READ: bw=63.9MiB/s (67.0MB/s), 12.9MiB/s-21.1MiB/s (13.5MB/s-22.1MB/s), io=320MiB (335MB), run=5002-5004msec 00:30:46.957 15:26:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:30:46.957 15:26:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:30:46.957 15:26:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:46.957 15:26:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:46.957 15:26:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:30:46.957 15:26:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:46.957 15:26:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:46.957 15:26:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:46.957 15:26:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:46.957 15:26:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:46.957 15:26:38 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:30:46.957 15:26:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:46.957 15:26:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:46.957 15:26:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:46.957 15:26:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:30:46.957 15:26:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:30:46.957 15:26:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:46.957 15:26:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:46.957 15:26:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:46.957 15:26:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:46.957 15:26:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:30:46.957 15:26:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:46.957 15:26:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:46.957 15:26:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:46.957 00:30:46.957 real 0m24.268s 00:30:46.957 user 5m14.718s 00:30:46.957 sys 0m4.301s 00:30:46.957 15:26:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:46.957 15:26:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:46.957 ************************************ 00:30:46.957 END TEST fio_dif_rand_params 00:30:46.957 ************************************ 00:30:46.957 15:26:38 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:30:46.957 15:26:38 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 
00:30:46.957 15:26:38 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:46.957 15:26:38 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:46.957 ************************************ 00:30:46.957 START TEST fio_dif_digest 00:30:46.957 ************************************ 00:30:46.957 15:26:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1125 -- # fio_dif_digest 00:30:46.957 15:26:38 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:30:46.957 15:26:38 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:30:46.957 15:26:38 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:30:46.957 15:26:38 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:30:46.957 15:26:38 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:30:46.957 15:26:38 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:30:46.957 15:26:38 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:30:46.957 15:26:38 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:30:46.957 15:26:38 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:30:46.957 15:26:38 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:30:46.957 15:26:38 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:30:46.957 15:26:38 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:30:46.957 15:26:38 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:30:46.957 15:26:38 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:30:46.957 15:26:38 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:30:46.957 15:26:38 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:30:46.957 15:26:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:46.957 15:26:38 nvmf_dif.fio_dif_digest -- 
common/autotest_common.sh@10 -- # set +x
00:30:46.957 bdev_null0
00:30:46.957 15:26:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:30:46.957 15:26:38 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
00:30:46.957 15:26:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable
00:30:46.957 15:26:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x
00:30:46.957 15:26:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:30:46.957 15:26:38 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
00:30:46.957 15:26:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable
00:30:46.957 15:26:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x
00:30:46.957 15:26:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:30:46.957 15:26:38 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:30:46.957 15:26:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable
00:30:46.957 15:26:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x
00:30:46.957 [2024-07-25 15:26:38.808598] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:30:46.957 15:26:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:30:46.957 15:26:38 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62
00:30:46.958 15:26:38 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0
00:30:46.958 15:26:38 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0
00:30:46.958 15:26:38 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=()
00:30:46.958 15:26:38 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:30:46.958 15:26:38 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config
00:30:46.958 15:26:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:30:46.958 15:26:38 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:30:46.958 15:26:38 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:30:46.958 {
00:30:46.958 "params": {
00:30:46.958 "name": "Nvme$subsystem",
00:30:46.958 "trtype": "$TEST_TRANSPORT",
00:30:46.958 "traddr": "$NVMF_FIRST_TARGET_IP",
00:30:46.958 "adrfam": "ipv4",
00:30:46.958 "trsvcid": "$NVMF_PORT",
00:30:46.958 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:30:46.958 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:30:46.958 "hdgst": ${hdgst:-false},
00:30:46.958 "ddgst": ${ddgst:-false}
00:30:46.958 },
00:30:46.958 "method": "bdev_nvme_attach_controller"
00:30:46.958 }
00:30:46.958 EOF
00:30:46.958 )")
00:30:46.958 15:26:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio
00:30:46.958 15:26:38 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf
00:30:46.958 15:26:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:30:46.958 15:26:38 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file
00:30:46.958 15:26:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers
00:30:46.958 15:26:38 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat
00:30:46.958 15:26:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:30:46.958 15:26:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift
00:30:46.958 15:26:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib=
00:30:46.958 15:26:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}"
00:30:46.958 15:26:38 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat
00:30:46.958 15:26:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:30:46.958 15:26:38 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 ))
00:30:46.958 15:26:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan
00:30:46.958 15:26:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}'
00:30:46.958 15:26:38 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files ))
00:30:46.958 15:26:38 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq .
00:30:46.958 15:26:38 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=,
00:30:46.958 15:26:38 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:30:46.958 "params": {
00:30:46.958 "name": "Nvme0",
00:30:46.958 "trtype": "tcp",
00:30:46.958 "traddr": "10.0.0.2",
00:30:46.958 "adrfam": "ipv4",
00:30:46.958 "trsvcid": "4420",
00:30:46.958 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:30:46.958 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:30:46.958 "hdgst": true,
00:30:46.958 "ddgst": true
00:30:46.958 },
00:30:46.958 "method": "bdev_nvme_attach_controller"
00:30:46.958 }'
00:30:46.958 15:26:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib=
00:30:46.958 15:26:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]]
00:30:46.958 15:26:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}"
00:30:46.958 15:26:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:30:46.958 15:26:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan
00:30:46.958 15:26:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}'
00:30:46.958 15:26:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib=
00:30:46.958 15:26:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]]
00:30:46.958 15:26:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev'
00:30:46.958 15:26:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:30:47.226 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3
00:30:47.226 ...
00:30:47.226 fio-3.35
00:30:47.226 Starting 3 threads
00:30:47.226 EAL: No free 2048 kB hugepages reported on node 1
00:30:59.429
00:30:59.429 filename0: (groupid=0, jobs=1): err= 0: pid=461619: Thu Jul 25 15:26:49 2024
00:30:59.429   read: IOPS=130, BW=16.3MiB/s (17.1MB/s)(164MiB/10047msec)
00:30:59.429     slat (nsec): min=5960, max=41555, avg=9714.23, stdev=2442.51
00:30:59.429     clat (msec): min=9, max=141, avg=22.94, stdev=16.87
00:30:59.429      lat (msec): min=9, max=141, avg=22.95, stdev=16.87
00:30:59.429     clat percentiles (msec):
00:30:59.429      | 1.00th=[   12],  5.00th=[   13], 10.00th=[   13], 20.00th=[   14],
00:30:59.430      | 30.00th=[   15], 40.00th=[   16], 50.00th=[   17], 60.00th=[   17],
00:30:59.430      | 70.00th=[   18], 80.00th=[   20], 90.00th=[   57], 95.00th=[   59],
00:30:59.430      | 99.00th=[   62], 99.50th=[   99], 99.90th=[  102], 99.95th=[  142],
00:30:59.430      | 99.99th=[  142]
00:30:59.430    bw (  KiB/s): min=11264, max=21504, per=23.77%, avg=16768.00, stdev=2730.18, samples=20
00:30:59.430    iops        : min=   88, max=  168, avg=131.00, stdev=21.33, samples=20
00:30:59.430   lat (msec)   : 10=0.23%, 20=81.04%, 50=1.60%, 100=16.83%, 250=0.30%
00:30:59.430   cpu          : usr=95.61%, sys=4.07%, ctx=15, majf=0, minf=116
00:30:59.430   IO depths    : 1=1.3%, 2=98.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:30:59.430      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:30:59.430      complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:30:59.430      issued rwts: total=1313,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:30:59.430      latency   : target=0, window=0, percentile=100.00%, depth=3
00:30:59.430 filename0: (groupid=0, jobs=1): err= 0: pid=461620: Thu Jul 25 15:26:49 2024
00:30:59.430   read: IOPS=122, BW=15.3MiB/s (16.0MB/s)(154MiB/10047msec)
00:30:59.430     slat (nsec): min=5595, max=37331, avg=6540.81, stdev=1260.24
00:30:59.430     clat (usec): min=10445, max=99357, avg=24513.16, stdev=17916.67
00:30:59.430      lat (usec): min=10451, max=99364, avg=24519.70, stdev=17916.70
00:30:59.430     clat percentiles (usec):
00:30:59.430      | 1.00th=[11863],  5.00th=[12911], 10.00th=[13566], 20.00th=[14353],
00:30:59.430      | 30.00th=[14877], 40.00th=[15533], 50.00th=[16057], 60.00th=[16712],
00:30:59.430      | 70.00th=[17433], 80.00th=[54264], 90.00th=[56886], 95.00th=[57934],
00:30:59.430      | 99.00th=[60031], 99.50th=[96994], 99.90th=[99091], 99.95th=[99091],
00:30:59.430      | 99.99th=[99091]
00:30:59.430    bw (  KiB/s): min=10496, max=20736, per=22.24%, avg=15692.80, stdev=2500.15, samples=20
00:30:59.430    iops        : min=   82, max=  162, avg=122.60, stdev=19.53, samples=20
00:30:59.430   lat (msec)   : 20=78.60%, 50=0.24%, 100=21.16%
00:30:59.430   cpu          : usr=96.69%, sys=3.06%, ctx=17, majf=0, minf=118
00:30:59.430   IO depths    : 1=2.9%, 2=97.1%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:30:59.430      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:30:59.430      complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:30:59.430      issued rwts: total=1229,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:30:59.430      latency   : target=0, window=0, percentile=100.00%, depth=3
00:30:59.430 filename0: (groupid=0, jobs=1): err= 0: pid=461621: Thu Jul 25 15:26:49 2024
00:30:59.430   read: IOPS=299, BW=37.4MiB/s (39.3MB/s)(375MiB/10004msec)
00:30:59.430     slat (nsec): min=5636, max=69548, avg=6688.32, stdev=1854.95
00:30:59.430     clat (usec): min=5677, max=94662, avg=10008.09, stdev=3160.06
00:30:59.430      lat (usec): min=5684, max=94671, avg=10014.78, stdev=3160.30
00:30:59.430     clat percentiles (usec):
00:30:59.430      | 1.00th=[ 6521],  5.00th=[ 7111], 10.00th=[ 7504], 20.00th=[ 8356],
00:30:59.430      | 30.00th=[ 8848], 40.00th=[ 9241], 50.00th=[ 9765], 60.00th=[10290],
00:30:59.430      | 70.00th=[10945], 80.00th=[11600], 90.00th=[12256], 95.00th=[12780],
00:30:59.430      | 99.00th=[13960], 99.50th=[14877], 99.90th=[54789], 99.95th=[55837],
00:30:59.430      | 99.99th=[94897]
00:30:59.430    bw (  KiB/s): min=29952, max=42752, per=54.33%, avg=38332.63, stdev=3161.76, samples=19
00:30:59.430    iops        : min=  234, max=  334, avg=299.47, stdev=24.70, samples=19
00:30:59.430   lat (msec)   : 10=55.31%, 20=44.33%, 50=0.10%, 100=0.27%
00:30:59.430   cpu          : usr=95.70%, sys=3.76%, ctx=30, majf=0, minf=178
00:30:59.430   IO depths    : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:30:59.430      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:30:59.430      complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:30:59.430      issued rwts: total=2996,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:30:59.430      latency   : target=0, window=0, percentile=100.00%, depth=3
00:30:59.430
00:30:59.430 Run status group 0 (all jobs):
00:30:59.430    READ: bw=68.9MiB/s (72.2MB/s), 15.3MiB/s-37.4MiB/s (16.0MB/s-39.3MB/s), io=692MiB (726MB), run=10004-10047msec
00:30:59.430 15:26:49 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0
00:30:59.430 15:26:49 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub
00:30:59.430 15:26:49 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@"
00:30:59.430 15:26:49 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0
00:30:59.430 15:26:49 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0
00:30:59.430 15:26:49 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:30:59.430 15:26:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable
00:30:59.430 15:26:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x
00:30:59.430 15:26:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:30:59.430 15:26:49 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0
00:30:59.430 15:26:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable
00:30:59.430 15:26:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x
00:30:59.430 15:26:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:30:59.430
00:30:59.430 real	0m11.222s
00:30:59.430 user	0m42.988s
00:30:59.430 sys	0m1.399s
00:30:59.430 15:26:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1126 -- # xtrace_disable
00:30:59.430 15:26:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x
00:30:59.430 ************************************
00:30:59.430 END TEST fio_dif_digest
00:30:59.430 ************************************
00:30:59.430 15:26:50 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT
00:30:59.430 15:26:50 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini
00:30:59.430 15:26:50 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup
00:30:59.430 15:26:50 nvmf_dif -- nvmf/common.sh@117 -- # sync
00:30:59.430 15:26:50 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:30:59.430 15:26:50 nvmf_dif -- nvmf/common.sh@120 -- # set +e
00:30:59.430 15:26:50 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20}
00:30:59.430 15:26:50 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:30:59.430 rmmod nvme_tcp
00:30:59.430 rmmod nvme_fabrics
00:30:59.430 rmmod nvme_keyring
00:30:59.430 15:26:50 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:30:59.430 15:26:50 nvmf_dif -- nvmf/common.sh@124 -- # set -e
00:30:59.430 15:26:50 nvmf_dif -- nvmf/common.sh@125 -- # return 0
00:30:59.430 15:26:50 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 451266 ']'
00:30:59.430 15:26:50 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 451266
00:30:59.430 15:26:50 nvmf_dif -- common/autotest_common.sh@950 -- # '[' -z 451266 ']'
00:30:59.430 15:26:50 nvmf_dif -- common/autotest_common.sh@954 -- # kill -0 451266
00:30:59.430 15:26:50 nvmf_dif -- common/autotest_common.sh@955 -- # uname
00:30:59.430 15:26:50 nvmf_dif -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:30:59.430 15:26:50 nvmf_dif -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 451266
00:30:59.430 15:26:50 nvmf_dif -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:30:59.430 15:26:50 nvmf_dif -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:30:59.430 15:26:50 nvmf_dif -- common/autotest_common.sh@968 -- # echo 'killing process with pid 451266'
00:30:59.430 killing process with pid 451266
00:30:59.430 15:26:50 nvmf_dif -- common/autotest_common.sh@969 -- # kill 451266
00:30:59.430 15:26:50 nvmf_dif -- common/autotest_common.sh@974 -- # wait 451266
00:30:59.430 15:26:50 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']'
00:30:59.430 15:26:50 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:31:01.339 Waiting for block devices as requested
00:31:01.600 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma
00:31:01.600 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma
00:31:01.600 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma
00:31:01.860 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma
00:31:01.860 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma
00:31:01.860 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma
00:31:01.860 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma
00:31:02.145 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma
00:31:02.145 0000:65:00.0 (144d a80a): vfio-pci -> nvme
00:31:02.405 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma
00:31:02.405 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma
00:31:02.405 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma
00:31:02.405 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma
00:31:02.665 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma
00:31:02.665 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma
00:31:02.665 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma
00:31:02.926 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma
00:31:03.185 15:26:55 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:31:03.185 15:26:55 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:31:03.185 15:26:55 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:31:03.185 15:26:55 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns
00:31:03.185 15:26:55 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:31:03.185 15:26:55 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null'
00:31:03.185 15:26:55 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:31:05.093 15:26:57 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:31:05.093
00:31:05.093 real	1m16.854s
00:31:05.093 user	7m59.641s
00:31:05.093 sys	0m19.675s
00:31:05.093 15:26:57 nvmf_dif -- common/autotest_common.sh@1126 -- # xtrace_disable
00:31:05.093 15:26:57 nvmf_dif -- common/autotest_common.sh@10 -- # set +x
00:31:05.093 ************************************
00:31:05.093 END TEST nvmf_dif
00:31:05.093 ************************************
00:31:05.093 15:26:57 -- spdk/autotest.sh@297 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh
00:31:05.093 15:26:57 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:31:05.093 15:26:57 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:31:05.093 15:26:57 -- common/autotest_common.sh@10 -- # set +x
00:31:05.352 ************************************
00:31:05.352 START TEST nvmf_abort_qd_sizes
00:31:05.352 ************************************
00:31:05.352 15:26:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh
00:31:05.352 * Looking for test storage...
00:31:05.352 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:31:05.352 15:26:57 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:31:05.352 15:26:57 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s
00:31:05.352 15:26:57 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:31:05.352 15:26:57 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:31:05.352 15:26:57 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:31:05.352 15:26:57 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:31:05.352 15:26:57 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:31:05.352 15:26:57 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:31:05.352 15:26:57 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:31:05.352 15:26:57 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:31:05.352 15:26:57 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:31:05.352 15:26:57 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:31:05.352 15:26:57 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:31:05.352 15:26:57 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be
00:31:05.352 15:26:57 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:31:05.352 15:26:57 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:31:05.352 15:26:57 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:31:05.352 15:26:57 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:31:05.352 15:26:57 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:31:05.352 15:26:57 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:31:05.352 15:26:57 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:31:05.353 15:26:57 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:31:05.353 15:26:57 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:31:05.353 15:26:57 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:31:05.353 15:26:57 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:31:05.353 15:26:57 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH
00:31:05.353 15:26:57 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:31:05.353 15:26:57 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0
00:31:05.353 15:26:57 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:31:05.353 15:26:57 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:31:05.353 15:26:57 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:31:05.353 15:26:57 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:31:05.353 15:26:57 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:31:05.353 15:26:57 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:31:05.353 15:26:57 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:31:05.353 15:26:57 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0
00:31:05.353 15:26:57 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit
00:31:05.353 15:26:57 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:31:05.353 15:26:57 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:31:05.353 15:26:57 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs
00:31:05.353 15:26:57 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no
00:31:05.353 15:26:57 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns
00:31:05.353 15:26:57 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:31:05.353 15:26:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null'
00:31:05.353 15:26:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:31:05.353 15:26:57 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:31:05.353 15:26:57 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:31:05.353 15:26:57 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable
00:31:05.353 15:26:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x
00:31:13.485 15:27:04 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:31:13.485 15:27:04 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=()
00:31:13.485 15:27:04 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs
00:31:13.485 15:27:04 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=()
00:31:13.485 15:27:04 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:31:13.485 15:27:04 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=()
00:31:13.485 15:27:04 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers
00:31:13.485 15:27:04 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=()
00:31:13.485 15:27:04 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs
00:31:13.485 15:27:04 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=()
00:31:13.485 15:27:04 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810
00:31:13.485 15:27:04 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=()
00:31:13.485 15:27:04 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722
00:31:13.485 15:27:04 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=()
00:31:13.485 15:27:04 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx
00:31:13.485 15:27:04 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:31:13.485 15:27:04 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:31:13.485 15:27:04 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:31:13.485 15:27:04 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:31:13.485 15:27:04 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:31:13.485 15:27:04 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:31:13.485 15:27:04 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:31:13.485 15:27:04 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:31:13.485 15:27:04 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:31:13.485 15:27:04 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:31:13.485 15:27:04 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:31:13.485 15:27:04 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:31:13.485 15:27:04 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:31:13.485 15:27:04 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:31:13.485 15:27:04 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:31:13.485 15:27:04 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:31:13.485 15:27:04 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:31:13.485 15:27:04 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:31:13.485 15:27:04 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)'
00:31:13.485 Found 0000:4b:00.0 (0x8086 - 0x159b)
00:31:13.485 15:27:04 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:31:13.485 15:27:04 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:31:13.485 15:27:04 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:31:13.485 15:27:04 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:31:13.485 15:27:04 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:31:13.485 15:27:04 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:31:13.485 15:27:04 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)'
00:31:13.485 Found 0000:4b:00.1 (0x8086 - 0x159b)
00:31:13.485 15:27:04 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:31:13.485 15:27:04 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:31:13.485 15:27:04 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:31:13.485 15:27:04 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:31:13.485 15:27:04 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:31:13.485 15:27:04 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:31:13.485 15:27:04 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:31:13.485 15:27:04 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:31:13.485 15:27:04 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:31:13.485 15:27:04 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:31:13.485 15:27:04 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:31:13.485 15:27:04 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:31:13.485 15:27:04 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]]
00:31:13.485 15:27:04 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:31:13.485 15:27:04 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:31:13.485 15:27:04 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0'
00:31:13.485 Found net devices under 0000:4b:00.0: cvl_0_0
00:31:13.485 15:27:04 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:31:13.485 15:27:04 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:31:13.485 15:27:04 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:31:13.485 15:27:04 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:31:13.485 15:27:04 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:31:13.485 15:27:04 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]]
00:31:13.485 15:27:04 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:31:13.485 15:27:04 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:31:13.485 15:27:04 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1'
00:31:13.485 Found net devices under 0000:4b:00.1: cvl_0_1
00:31:13.485 15:27:04 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:31:13.485 15:27:04 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:31:13.485 15:27:04 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes
00:31:13.485 15:27:04 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:31:13.485 15:27:04 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:31:13.485 15:27:04 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:31:13.485 15:27:04 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:31:13.485 15:27:04 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:31:13.485 15:27:04 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:31:13.485 15:27:04 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:31:13.485 15:27:04 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:31:13.485 15:27:04 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:31:13.485 15:27:04 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:31:13.485 15:27:04 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:31:13.485 15:27:04 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:31:13.486 15:27:04 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:31:13.486 15:27:04 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:31:13.486 15:27:04 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:31:13.486 15:27:04 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:31:13.486 15:27:04 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:31:13.486 15:27:04 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:31:13.486 15:27:04 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:31:13.486 15:27:04 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:31:13.486 15:27:04 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:31:13.486 15:27:04 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:31:13.486 15:27:04 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:31:13.486 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:31:13.486 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.290 ms
00:31:13.486
00:31:13.486 --- 10.0.0.2 ping statistics ---
00:31:13.486 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:31:13.486 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms
00:31:13.486 15:27:04 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:31:13.486 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:31:13.486 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.362 ms
00:31:13.486
00:31:13.486 --- 10.0.0.1 ping statistics ---
00:31:13.486 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:31:13.486 rtt min/avg/max/mdev = 0.362/0.362/0.362/0.000 ms
00:31:13.486 15:27:04 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:31:13.486 15:27:04 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0
00:31:13.486 15:27:04 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']'
00:31:13.486 15:27:04 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:31:15.401 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci
00:31:15.401 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci
00:31:15.401 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci
00:31:15.401 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci
00:31:15.401 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci
00:31:15.401 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci
00:31:15.401 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci
00:31:15.401 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci
00:31:15.401 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci
00:31:15.401 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci
00:31:15.401 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci
00:31:15.401 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci
00:31:15.662 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci
00:31:15.662 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci
00:31:15.662 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci
00:31:15.662 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci
00:31:15.662 0000:65:00.0 (144d a80a): nvme -> vfio-pci
00:31:15.922 15:27:07 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:31:15.922 15:27:07 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:31:15.922 15:27:07 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:31:15.922 15:27:07 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:31:15.922 15:27:07 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:31:15.922 15:27:07 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:31:15.922 15:27:08 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf
00:31:15.922 15:27:08 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:31:15.922 15:27:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable
00:31:15.922 15:27:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x
00:31:15.922 15:27:08 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf
00:31:15.922 15:27:08 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=470932
00:31:15.922 15:27:08 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 470932
00:31:15.922 15:27:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # '[' -z 470932 ']'
00:31:15.922 15:27:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:31:15.922 15:27:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # local max_retries=100
00:31:15.922 15:27:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:31:15.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:31:15.922 15:27:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # xtrace_disable
00:31:15.922 15:27:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x
00:31:15.922 [2024-07-25 15:27:08.082690] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
00:31:15.922 [2024-07-25 15:27:08.082731] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:31:15.922 EAL: No free 2048 kB hugepages reported on node 1
00:31:16.184 [2024-07-25 15:27:08.141654] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:31:16.184 [2024-07-25 15:27:08.208746] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:31:16.184 [2024-07-25 15:27:08.208783] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:31:16.184 [2024-07-25 15:27:08.208791] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:31:16.184 [2024-07-25 15:27:08.208798] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:31:16.184 [2024-07-25 15:27:08.208804] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:31:16.184 [2024-07-25 15:27:08.208938] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:31:16.184 [2024-07-25 15:27:08.209053] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:31:16.184 [2024-07-25 15:27:08.209223] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:31:16.184 [2024-07-25 15:27:08.209224] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:31:16.757 15:27:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:31:16.757 15:27:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # return 0
00:31:16.757 15:27:08 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:31:16.757 15:27:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable
00:31:16.757 15:27:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x
00:31:16.757 15:27:08 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:31:16.757 15:27:08 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT
00:31:16.757 15:27:08 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes
00:31:16.757 15:27:08 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace
00:31:16.757 15:27:08 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs
00:31:16.757 15:27:08 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes
00:31:16.757 15:27:08 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:65:00.0 ]]
00:31:16.757 15:27:08 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]})
00:31:16.757 15:27:08 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}"
00:31:16.757 15:27:08 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:65:00.0 ]]
00:31:16.757 15:27:08 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:31:16.757 15:27:08 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:31:16.757 15:27:08 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:31:16.757 15:27:08 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:31:16.757 15:27:08 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:65:00.0 00:31:16.757 15:27:08 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:31:16.757 15:27:08 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:65:00.0 00:31:16.757 15:27:08 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:31:16.757 15:27:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:31:16.757 15:27:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:16.757 15:27:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:17.019 ************************************ 00:31:17.019 START TEST spdk_target_abort 00:31:17.019 ************************************ 00:31:17.019 15:27:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1125 -- # spdk_target 00:31:17.019 15:27:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:31:17.019 15:27:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target 00:31:17.019 15:27:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:17.019 15:27:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:17.280 spdk_targetn1 00:31:17.280 15:27:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:17.280 15:27:09 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:17.280 15:27:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:17.280 15:27:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:17.280 [2024-07-25 15:27:09.285306] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:17.280 15:27:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:17.280 15:27:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:31:17.280 15:27:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:17.280 15:27:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:17.280 15:27:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:17.280 15:27:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:31:17.280 15:27:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:17.280 15:27:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:17.280 15:27:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:17.280 15:27:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:31:17.280 15:27:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:17.280 15:27:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:17.280 [2024-07-25 15:27:09.325574] tcp.c:1006:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:17.280 15:27:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:17.280 15:27:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:31:17.280 15:27:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:31:17.280 15:27:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:31:17.280 15:27:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:31:17.280 15:27:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:31:17.280 15:27:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:31:17.280 15:27:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:31:17.280 15:27:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:31:17.280 15:27:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:31:17.280 15:27:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:17.280 15:27:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:31:17.280 15:27:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:17.280 15:27:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:31:17.280 15:27:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:17.280 15:27:09 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:31:17.280 15:27:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:17.280 15:27:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:17.280 15:27:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:17.280 15:27:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:17.280 15:27:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:17.280 15:27:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:17.280 EAL: No free 2048 kB hugepages reported on node 1 00:31:17.541 [2024-07-25 15:27:09.625626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:312 len:8 PRP1 0x2000078c2000 PRP2 0x0 00:31:17.541 [2024-07-25 15:27:09.625654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:002a p:1 m:0 dnr:0 00:31:17.542 [2024-07-25 15:27:09.627324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:384 len:8 PRP1 0x2000078c4000 PRP2 0x0 00:31:17.542 [2024-07-25 15:27:09.627338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:0032 p:1 m:0 dnr:0 00:31:17.542 [2024-07-25 15:27:09.633878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:512 len:8 
PRP1 0x2000078c0000 PRP2 0x0 00:31:17.542 [2024-07-25 15:27:09.633892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:0042 p:1 m:0 dnr:0 00:31:17.542 [2024-07-25 15:27:09.650626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:888 len:8 PRP1 0x2000078c2000 PRP2 0x0 00:31:17.542 [2024-07-25 15:27:09.650641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:0071 p:1 m:0 dnr:0 00:31:17.542 [2024-07-25 15:27:09.722668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:2648 len:8 PRP1 0x2000078c6000 PRP2 0x0 00:31:17.542 [2024-07-25 15:27:09.722686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:31:20.845 Initializing NVMe Controllers 00:31:20.845 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:31:20.845 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:20.845 Initialization complete. Launching workers. 
00:31:20.845 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 9380, failed: 5 00:31:20.845 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 3302, failed to submit 6083 00:31:20.845 success 797, unsuccess 2505, failed 0 00:31:20.845 15:27:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:20.846 15:27:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:20.846 EAL: No free 2048 kB hugepages reported on node 1 00:31:20.846 [2024-07-25 15:27:12.807596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:173 nsid:1 lba:544 len:8 PRP1 0x200007c58000 PRP2 0x0 00:31:20.846 [2024-07-25 15:27:12.807631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:173 cdw0:0 sqhd:004d p:1 m:0 dnr:0 00:31:20.846 [2024-07-25 15:27:12.815269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:185 nsid:1 lba:704 len:8 PRP1 0x200007c52000 PRP2 0x0 00:31:20.846 [2024-07-25 15:27:12.815291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:185 cdw0:0 sqhd:005b p:1 m:0 dnr:0 00:31:24.145 Initializing NVMe Controllers 00:31:24.145 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:31:24.145 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:24.145 Initialization complete. Launching workers. 
00:31:24.145 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8665, failed: 2 00:31:24.145 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1216, failed to submit 7451 00:31:24.145 success 341, unsuccess 875, failed 0 00:31:24.145 15:27:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:24.145 15:27:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:24.145 EAL: No free 2048 kB hugepages reported on node 1 00:31:27.509 Initializing NVMe Controllers 00:31:27.509 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:31:27.509 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:27.509 Initialization complete. Launching workers. 
00:31:27.509 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 40078, failed: 0 00:31:27.509 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2678, failed to submit 37400 00:31:27.509 success 653, unsuccess 2025, failed 0 00:31:27.509 15:27:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:31:27.509 15:27:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:27.509 15:27:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:27.509 15:27:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:27.509 15:27:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:31:27.509 15:27:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:27.509 15:27:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:28.895 15:27:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:28.895 15:27:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 470932 00:31:28.895 15:27:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # '[' -z 470932 ']' 00:31:28.895 15:27:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # kill -0 470932 00:31:28.895 15:27:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # uname 00:31:28.895 15:27:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:28.895 15:27:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 470932 00:31:28.895 15:27:20 nvmf_abort_qd_sizes.spdk_target_abort -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:28.895 15:27:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:28.895 15:27:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 470932' 00:31:28.895 killing process with pid 470932 00:31:28.895 15:27:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@969 -- # kill 470932 00:31:28.895 15:27:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@974 -- # wait 470932 00:31:29.156 00:31:29.156 real 0m12.155s 00:31:29.156 user 0m49.254s 00:31:29.156 sys 0m2.027s 00:31:29.156 15:27:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:29.156 15:27:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:29.156 ************************************ 00:31:29.156 END TEST spdk_target_abort 00:31:29.156 ************************************ 00:31:29.156 15:27:21 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:31:29.156 15:27:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:31:29.156 15:27:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:29.156 15:27:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:29.156 ************************************ 00:31:29.156 START TEST kernel_target_abort 00:31:29.156 ************************************ 00:31:29.156 15:27:21 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1125 -- # kernel_target 00:31:29.156 15:27:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:31:29.156 15:27:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:31:29.156 15:27:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # 
ip_candidates=() 00:31:29.156 15:27:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:29.156 15:27:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:29.156 15:27:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:29.156 15:27:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:29.156 15:27:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:29.156 15:27:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:29.156 15:27:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:29.156 15:27:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:29.156 15:27:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:31:29.156 15:27:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:31:29.156 15:27:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:31:29.156 15:27:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:29.157 15:27:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:29.157 15:27:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:31:29.157 15:27:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:31:29.157 15:27:21 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:31:29.157 15:27:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:31:29.157 15:27:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:31:29.157 15:27:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:31:32.458 Waiting for block devices as requested 00:31:32.458 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:31:32.718 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:31:32.718 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:31:32.718 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:31:32.979 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:31:32.979 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:31:32.979 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:31:32.979 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:31:33.240 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:31:33.240 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:31:33.500 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:31:33.500 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:31:33.500 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:31:33.760 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:31:33.760 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:31:33.760 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:31:33.760 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:31:34.022 15:27:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:31:34.022 15:27:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:31:34.022 15:27:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:31:34.022 15:27:26 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local 
device=nvme0n1 00:31:34.022 15:27:26 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:31:34.022 15:27:26 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:31:34.022 15:27:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:31:34.022 15:27:26 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:31:34.022 15:27:26 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:31:34.283 No valid GPT data, bailing 00:31:34.283 15:27:26 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:31:34.284 15:27:26 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:31:34.284 15:27:26 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:31:34.284 15:27:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:31:34.284 15:27:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:31:34.284 15:27:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:34.284 15:27:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:34.284 15:27:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:31:34.284 15:27:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:31:34.284 15:27:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:31:34.284 15:27:26 nvmf_abort_qd_sizes.kernel_target_abort 
-- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:31:34.284 15:27:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:31:34.284 15:27:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:31:34.284 15:27:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:31:34.284 15:27:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:31:34.284 15:27:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:31:34.284 15:27:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:31:34.284 15:27:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:31:34.284 00:31:34.284 Discovery Log Number of Records 2, Generation counter 2 00:31:34.284 =====Discovery Log Entry 0====== 00:31:34.284 trtype: tcp 00:31:34.284 adrfam: ipv4 00:31:34.284 subtype: current discovery subsystem 00:31:34.284 treq: not specified, sq flow control disable supported 00:31:34.284 portid: 1 00:31:34.284 trsvcid: 4420 00:31:34.284 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:31:34.284 traddr: 10.0.0.1 00:31:34.284 eflags: none 00:31:34.284 sectype: none 00:31:34.284 =====Discovery Log Entry 1====== 00:31:34.284 trtype: tcp 00:31:34.284 adrfam: ipv4 00:31:34.284 subtype: nvme subsystem 00:31:34.284 treq: not specified, sq flow control disable supported 00:31:34.284 portid: 1 00:31:34.284 trsvcid: 4420 00:31:34.284 subnqn: nqn.2016-06.io.spdk:testnqn 00:31:34.284 traddr: 10.0.0.1 00:31:34.284 eflags: none 00:31:34.284 sectype: none 00:31:34.284 15:27:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 
nqn.2016-06.io.spdk:testnqn 00:31:34.284 15:27:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:31:34.284 15:27:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:31:34.284 15:27:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:31:34.284 15:27:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:31:34.284 15:27:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:31:34.284 15:27:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:31:34.284 15:27:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:31:34.284 15:27:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:31:34.284 15:27:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:34.284 15:27:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:31:34.284 15:27:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:34.284 15:27:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:31:34.284 15:27:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:34.284 15:27:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:31:34.284 15:27:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:34.284 15:27:26 nvmf_abort_qd_sizes.kernel_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:31:34.284 15:27:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:34.284 15:27:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:34.284 15:27:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:34.284 15:27:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:34.284 EAL: No free 2048 kB hugepages reported on node 1 00:31:37.587 Initializing NVMe Controllers 00:31:37.587 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:31:37.587 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:37.587 Initialization complete. Launching workers. 
00:31:37.587 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 37885, failed: 0 00:31:37.587 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 37885, failed to submit 0 00:31:37.587 success 0, unsuccess 37885, failed 0 00:31:37.587 15:27:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:37.588 15:27:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:37.588 EAL: No free 2048 kB hugepages reported on node 1 00:31:40.890 Initializing NVMe Controllers 00:31:40.890 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:31:40.890 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:40.890 Initialization complete. Launching workers. 
00:31:40.890 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 76870, failed: 0 00:31:40.890 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 19362, failed to submit 57508 00:31:40.890 success 0, unsuccess 19362, failed 0 00:31:40.890 15:27:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:40.890 15:27:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:40.890 EAL: No free 2048 kB hugepages reported on node 1 00:31:44.191 Initializing NVMe Controllers 00:31:44.192 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:31:44.192 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:44.192 Initialization complete. Launching workers. 
00:31:44.192 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 74437, failed: 0 00:31:44.192 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 18574, failed to submit 55863 00:31:44.192 success 0, unsuccess 18574, failed 0 00:31:44.192 15:27:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:31:44.192 15:27:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:31:44.192 15:27:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:31:44.192 15:27:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:44.192 15:27:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:44.192 15:27:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:31:44.192 15:27:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:44.192 15:27:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:31:44.192 15:27:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:31:44.192 15:27:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:31:46.740 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:31:46.740 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:31:46.740 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:31:46.740 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:31:46.740 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:31:46.740 
0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:31:46.740 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:31:46.740 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:31:46.740 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:31:46.740 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:31:46.740 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:31:46.740 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:31:46.740 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:31:46.740 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:31:46.740 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:31:46.740 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:31:48.701 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:31:48.963 00:31:48.963 real 0m19.773s 00:31:48.963 user 0m7.112s 00:31:48.963 sys 0m6.364s 00:31:48.963 15:27:40 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:48.963 15:27:40 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:48.963 ************************************ 00:31:48.963 END TEST kernel_target_abort 00:31:48.963 ************************************ 00:31:48.963 15:27:41 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:31:48.963 15:27:41 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:31:48.963 15:27:41 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:48.963 15:27:41 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:31:48.963 15:27:41 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:48.963 15:27:41 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:31:48.963 15:27:41 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:48.963 15:27:41 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:48.963 rmmod nvme_tcp 00:31:48.963 rmmod nvme_fabrics 00:31:48.963 rmmod nvme_keyring 00:31:48.963 15:27:41 nvmf_abort_qd_sizes -- nvmf/common.sh@123 
-- # modprobe -v -r nvme-fabrics 00:31:48.963 15:27:41 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:31:48.963 15:27:41 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:31:48.963 15:27:41 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 470932 ']' 00:31:48.963 15:27:41 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 470932 00:31:48.963 15:27:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # '[' -z 470932 ']' 00:31:48.963 15:27:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # kill -0 470932 00:31:48.963 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (470932) - No such process 00:31:48.963 15:27:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@977 -- # echo 'Process with pid 470932 is not found' 00:31:48.963 Process with pid 470932 is not found 00:31:48.963 15:27:41 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:31:48.963 15:27:41 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:31:51.513 Waiting for block devices as requested 00:31:51.513 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:31:51.773 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:31:51.773 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:31:51.773 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:31:52.034 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:31:52.034 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:31:52.034 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:31:52.294 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:31:52.294 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:31:52.294 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:31:52.555 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:31:52.556 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:31:52.556 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:31:52.816 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:31:52.816 0000:00:01.3 
(8086 0b00): vfio-pci -> ioatdma 00:31:52.816 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:31:52.816 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:31:53.077 15:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:53.077 15:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:53.077 15:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:53.077 15:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:53.077 15:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:53.077 15:27:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:53.077 15:27:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:55.627 15:27:47 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:55.627 00:31:55.627 real 0m50.007s 00:31:55.627 user 1m0.808s 00:31:55.627 sys 0m18.536s 00:31:55.627 15:27:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:55.627 15:27:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:55.627 ************************************ 00:31:55.627 END TEST nvmf_abort_qd_sizes 00:31:55.627 ************************************ 00:31:55.627 15:27:47 -- spdk/autotest.sh@299 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:31:55.627 15:27:47 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:31:55.627 15:27:47 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:55.627 15:27:47 -- common/autotest_common.sh@10 -- # set +x 00:31:55.627 ************************************ 00:31:55.627 START TEST keyring_file 00:31:55.627 ************************************ 00:31:55.627 15:27:47 keyring_file -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:31:55.627 * Looking for test storage... 00:31:55.627 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:31:55.627 15:27:47 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:31:55.627 15:27:47 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:55.627 15:27:47 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:31:55.627 15:27:47 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:55.627 15:27:47 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:55.627 15:27:47 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:55.627 15:27:47 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:55.627 15:27:47 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:55.627 15:27:47 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:55.627 15:27:47 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:55.627 15:27:47 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:55.627 15:27:47 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:55.627 15:27:47 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:55.627 15:27:47 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:55.627 15:27:47 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:55.627 15:27:47 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:55.627 15:27:47 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:55.627 15:27:47 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:55.627 15:27:47 keyring_file -- nvmf/common.sh@22 
-- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:55.627 15:27:47 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:55.627 15:27:47 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:55.627 15:27:47 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:55.627 15:27:47 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:55.627 15:27:47 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:55.627 15:27:47 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:55.627 15:27:47 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:55.627 15:27:47 keyring_file -- paths/export.sh@5 -- # export PATH 00:31:55.627 15:27:47 keyring_file -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:55.627 15:27:47 keyring_file -- nvmf/common.sh@47 -- # : 0 00:31:55.627 15:27:47 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:55.627 15:27:47 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:55.627 15:27:47 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:55.627 15:27:47 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:55.627 15:27:47 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:55.627 15:27:47 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:55.627 15:27:47 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:55.627 15:27:47 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:55.627 15:27:47 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:31:55.627 15:27:47 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:31:55.627 15:27:47 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:31:55.627 15:27:47 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:31:55.627 15:27:47 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:31:55.627 15:27:47 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:31:55.627 15:27:47 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:31:55.627 15:27:47 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:31:55.627 15:27:47 keyring_file -- keyring/common.sh@17 -- # name=key0 00:31:55.627 15:27:47 
keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:31:55.627 15:27:47 keyring_file -- keyring/common.sh@17 -- # digest=0 00:31:55.627 15:27:47 keyring_file -- keyring/common.sh@18 -- # mktemp 00:31:55.627 15:27:47 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.vuPBRctLxn 00:31:55.627 15:27:47 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:31:55.627 15:27:47 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:31:55.627 15:27:47 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:31:55.627 15:27:47 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:31:55.627 15:27:47 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:31:55.627 15:27:47 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:31:55.627 15:27:47 keyring_file -- nvmf/common.sh@705 -- # python - 00:31:55.627 15:27:47 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.vuPBRctLxn 00:31:55.627 15:27:47 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.vuPBRctLxn 00:31:55.627 15:27:47 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.vuPBRctLxn 00:31:55.627 15:27:47 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:31:55.627 15:27:47 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:31:55.627 15:27:47 keyring_file -- keyring/common.sh@17 -- # name=key1 00:31:55.627 15:27:47 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:31:55.627 15:27:47 keyring_file -- keyring/common.sh@17 -- # digest=0 00:31:55.627 15:27:47 keyring_file -- keyring/common.sh@18 -- # mktemp 00:31:55.627 15:27:47 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.QAOeyJENmk 00:31:55.627 15:27:47 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:31:55.627 15:27:47 
keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:31:55.627 15:27:47 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:31:55.627 15:27:47 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:31:55.627 15:27:47 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:31:55.627 15:27:47 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:31:55.627 15:27:47 keyring_file -- nvmf/common.sh@705 -- # python - 00:31:55.627 15:27:47 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.QAOeyJENmk 00:31:55.627 15:27:47 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.QAOeyJENmk 00:31:55.627 15:27:47 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.QAOeyJENmk 00:31:55.627 15:27:47 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:31:55.627 15:27:47 keyring_file -- keyring/file.sh@30 -- # tgtpid=481559 00:31:55.627 15:27:47 keyring_file -- keyring/file.sh@32 -- # waitforlisten 481559 00:31:55.627 15:27:47 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 481559 ']' 00:31:55.627 15:27:47 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:55.627 15:27:47 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:55.627 15:27:47 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:55.627 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:55.627 15:27:47 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:55.627 15:27:47 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:31:55.627 [2024-07-25 15:27:47.646243] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:31:55.628 [2024-07-25 15:27:47.646297] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid481559 ] 00:31:55.628 EAL: No free 2048 kB hugepages reported on node 1 00:31:55.628 [2024-07-25 15:27:47.705500] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:55.628 [2024-07-25 15:27:47.770091] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:56.574 15:27:48 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:56.574 15:27:48 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:31:56.574 15:27:48 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:31:56.574 15:27:48 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:56.574 15:27:48 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:31:56.574 [2024-07-25 15:27:48.437190] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:56.574 null0 00:31:56.574 [2024-07-25 15:27:48.469242] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:31:56.574 [2024-07-25 15:27:48.469503] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:31:56.574 [2024-07-25 15:27:48.477246] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:31:56.574 15:27:48 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:56.574 15:27:48 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:31:56.574 15:27:48 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:31:56.574 15:27:48 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 
4420 nqn.2016-06.io.spdk:cnode0 00:31:56.574 15:27:48 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:31:56.574 15:27:48 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:56.574 15:27:48 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:31:56.574 15:27:48 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:56.574 15:27:48 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:31:56.574 15:27:48 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:56.574 15:27:48 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:31:56.574 [2024-07-25 15:27:48.493289] nvmf_rpc.c: 788:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:31:56.574 request: 00:31:56.574 { 00:31:56.574 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:31:56.574 "secure_channel": false, 00:31:56.574 "listen_address": { 00:31:56.574 "trtype": "tcp", 00:31:56.574 "traddr": "127.0.0.1", 00:31:56.574 "trsvcid": "4420" 00:31:56.574 }, 00:31:56.574 "method": "nvmf_subsystem_add_listener", 00:31:56.574 "req_id": 1 00:31:56.574 } 00:31:56.574 Got JSON-RPC error response 00:31:56.574 response: 00:31:56.574 { 00:31:56.574 "code": -32602, 00:31:56.574 "message": "Invalid parameters" 00:31:56.574 } 00:31:56.574 15:27:48 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:31:56.574 15:27:48 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:31:56.574 15:27:48 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:31:56.574 15:27:48 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:31:56.574 15:27:48 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:31:56.574 15:27:48 keyring_file -- keyring/file.sh@46 -- # bperfpid=481862 00:31:56.574 15:27:48 keyring_file -- keyring/file.sh@48 -- # waitforlisten 481862 /var/tmp/bperf.sock 
00:31:56.574 15:27:48 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:31:56.574 15:27:48 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 481862 ']' 00:31:56.574 15:27:48 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:56.574 15:27:48 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:56.574 15:27:48 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:56.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:56.574 15:27:48 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:56.574 15:27:48 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:31:56.574 [2024-07-25 15:27:48.549764] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:31:56.574 [2024-07-25 15:27:48.549815] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid481862 ] 00:31:56.574 EAL: No free 2048 kB hugepages reported on node 1 00:31:56.574 [2024-07-25 15:27:48.625011] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:56.574 [2024-07-25 15:27:48.689291] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:57.146 15:27:49 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:57.146 15:27:49 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:31:57.146 15:27:49 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.vuPBRctLxn 00:31:57.146 15:27:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.vuPBRctLxn 00:31:57.407 15:27:49 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.QAOeyJENmk 00:31:57.407 15:27:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.QAOeyJENmk 00:31:57.669 15:27:49 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:31:57.669 15:27:49 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:31:57.669 15:27:49 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:57.669 15:27:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:57.669 15:27:49 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:57.669 15:27:49 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.vuPBRctLxn == 
\/\t\m\p\/\t\m\p\.\v\u\P\B\R\c\t\L\x\n ]] 00:31:57.669 15:27:49 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:31:57.669 15:27:49 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:31:57.669 15:27:49 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:57.669 15:27:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:57.669 15:27:49 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:31:57.930 15:27:49 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.QAOeyJENmk == \/\t\m\p\/\t\m\p\.\Q\A\O\e\y\J\E\N\m\k ]] 00:31:57.930 15:27:49 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:31:57.930 15:27:49 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:31:57.930 15:27:49 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:57.930 15:27:49 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:57.930 15:27:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:57.930 15:27:49 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:58.191 15:27:50 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:31:58.191 15:27:50 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:31:58.191 15:27:50 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:31:58.191 15:27:50 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:58.191 15:27:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:58.191 15:27:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:31:58.191 15:27:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:58.191 15:27:50 keyring_file -- keyring/file.sh@54 -- # 
(( 1 == 1 )) 00:31:58.191 15:27:50 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:58.191 15:27:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:58.452 [2024-07-25 15:27:50.441929] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:31:58.452 nvme0n1 00:31:58.452 15:27:50 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:31:58.452 15:27:50 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:31:58.452 15:27:50 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:58.452 15:27:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:58.452 15:27:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:58.452 15:27:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:58.713 15:27:50 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:31:58.713 15:27:50 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:31:58.713 15:27:50 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:31:58.713 15:27:50 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:58.713 15:27:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:58.713 15:27:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:31:58.713 15:27:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:58.713 15:27:50 keyring_file -- 
keyring/file.sh@60 -- # (( 1 == 1 )) 00:31:58.713 15:27:50 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:58.973 Running I/O for 1 seconds... 00:31:59.915 00:31:59.915 Latency(us) 00:31:59.915 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:59.915 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:31:59.915 nvme0n1 : 1.02 3005.58 11.74 0.00 0.00 42100.18 10431.15 149422.08 00:31:59.915 =================================================================================================================== 00:31:59.915 Total : 3005.58 11.74 0.00 0.00 42100.18 10431.15 149422.08 00:31:59.915 0 00:31:59.915 15:27:51 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:31:59.915 15:27:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:32:00.176 15:27:52 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:32:00.176 15:27:52 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:00.176 15:27:52 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:00.176 15:27:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:00.176 15:27:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:00.176 15:27:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:00.176 15:27:52 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:32:00.176 15:27:52 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:32:00.176 15:27:52 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:00.176 15:27:52 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:00.176 15:27:52 
keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:00.176 15:27:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:00.176 15:27:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:00.436 15:27:52 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:32:00.436 15:27:52 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:00.436 15:27:52 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:32:00.436 15:27:52 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:00.436 15:27:52 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:32:00.436 15:27:52 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:00.436 15:27:52 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:32:00.436 15:27:52 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:00.436 15:27:52 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:00.437 15:27:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:00.697 [2024-07-25 15:27:52.633929] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 
428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:32:00.697 [2024-07-25 15:27:52.634150] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e12170 (107): Transport endpoint is not connected 00:32:00.697 [2024-07-25 15:27:52.635146] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e12170 (9): Bad file descriptor 00:32:00.697 [2024-07-25 15:27:52.636147] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:00.697 [2024-07-25 15:27:52.636156] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:32:00.697 [2024-07-25 15:27:52.636162] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:00.697 request: 00:32:00.697 { 00:32:00.697 "name": "nvme0", 00:32:00.697 "trtype": "tcp", 00:32:00.697 "traddr": "127.0.0.1", 00:32:00.697 "adrfam": "ipv4", 00:32:00.697 "trsvcid": "4420", 00:32:00.697 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:00.697 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:00.697 "prchk_reftag": false, 00:32:00.697 "prchk_guard": false, 00:32:00.697 "hdgst": false, 00:32:00.697 "ddgst": false, 00:32:00.697 "psk": "key1", 00:32:00.697 "method": "bdev_nvme_attach_controller", 00:32:00.697 "req_id": 1 00:32:00.697 } 00:32:00.697 Got JSON-RPC error response 00:32:00.697 response: 00:32:00.697 { 00:32:00.697 "code": -5, 00:32:00.697 "message": "Input/output error" 00:32:00.697 } 00:32:00.697 15:27:52 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:32:00.697 15:27:52 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:00.697 15:27:52 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:00.697 15:27:52 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:00.697 15:27:52 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:32:00.697 
15:27:52 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:00.697 15:27:52 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:00.697 15:27:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:00.697 15:27:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:00.697 15:27:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:00.697 15:27:52 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:32:00.697 15:27:52 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:32:00.697 15:27:52 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:00.697 15:27:52 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:00.697 15:27:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:00.697 15:27:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:00.697 15:27:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:00.958 15:27:52 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:32:00.958 15:27:52 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:32:00.958 15:27:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:32:00.958 15:27:53 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:32:00.958 15:27:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:32:01.219 15:27:53 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:32:01.219 15:27:53 keyring_file -- keyring/file.sh@77 -- # jq length 00:32:01.219 15:27:53 keyring_file 
-- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:01.480 15:27:53 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:32:01.480 15:27:53 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.vuPBRctLxn 00:32:01.480 15:27:53 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.vuPBRctLxn 00:32:01.480 15:27:53 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:32:01.480 15:27:53 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.vuPBRctLxn 00:32:01.480 15:27:53 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:32:01.480 15:27:53 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:01.480 15:27:53 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:32:01.480 15:27:53 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:01.480 15:27:53 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.vuPBRctLxn 00:32:01.480 15:27:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.vuPBRctLxn 00:32:01.480 [2024-07-25 15:27:53.623951] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.vuPBRctLxn': 0100660 00:32:01.480 [2024-07-25 15:27:53.623970] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:32:01.480 request: 00:32:01.480 { 00:32:01.480 "name": "key0", 00:32:01.480 "path": "/tmp/tmp.vuPBRctLxn", 00:32:01.480 "method": "keyring_file_add_key", 00:32:01.480 "req_id": 1 00:32:01.480 } 00:32:01.480 Got JSON-RPC error response 00:32:01.480 response: 00:32:01.480 { 00:32:01.480 "code": -1, 00:32:01.480 "message": "Operation not permitted" 
00:32:01.480 } 00:32:01.480 15:27:53 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:32:01.480 15:27:53 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:01.480 15:27:53 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:01.480 15:27:53 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:01.480 15:27:53 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.vuPBRctLxn 00:32:01.480 15:27:53 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.vuPBRctLxn 00:32:01.480 15:27:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.vuPBRctLxn 00:32:01.740 15:27:53 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.vuPBRctLxn 00:32:01.740 15:27:53 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:32:01.740 15:27:53 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:01.740 15:27:53 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:01.740 15:27:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:01.740 15:27:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:01.740 15:27:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:02.000 15:27:53 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:32:02.000 15:27:53 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:02.000 15:27:53 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:32:02.000 15:27:53 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:02.000 15:27:53 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:32:02.000 15:27:53 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:02.000 15:27:53 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:32:02.000 15:27:53 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:02.000 15:27:53 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:02.000 15:27:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:02.000 [2024-07-25 15:27:54.105177] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.vuPBRctLxn': No such file or directory 00:32:02.000 [2024-07-25 15:27:54.105195] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:32:02.000 [2024-07-25 15:27:54.105214] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:32:02.000 [2024-07-25 15:27:54.105219] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:32:02.000 [2024-07-25 15:27:54.105225] bdev_nvme.c:6296:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:32:02.000 request: 00:32:02.000 { 00:32:02.000 "name": "nvme0", 00:32:02.000 "trtype": "tcp", 00:32:02.001 "traddr": "127.0.0.1", 00:32:02.001 "adrfam": "ipv4", 00:32:02.001 "trsvcid": "4420", 00:32:02.001 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:02.001 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:02.001 
"prchk_reftag": false, 00:32:02.001 "prchk_guard": false, 00:32:02.001 "hdgst": false, 00:32:02.001 "ddgst": false, 00:32:02.001 "psk": "key0", 00:32:02.001 "method": "bdev_nvme_attach_controller", 00:32:02.001 "req_id": 1 00:32:02.001 } 00:32:02.001 Got JSON-RPC error response 00:32:02.001 response: 00:32:02.001 { 00:32:02.001 "code": -19, 00:32:02.001 "message": "No such device" 00:32:02.001 } 00:32:02.001 15:27:54 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:32:02.001 15:27:54 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:02.001 15:27:54 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:02.001 15:27:54 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:02.001 15:27:54 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:32:02.001 15:27:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:32:02.262 15:27:54 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:32:02.262 15:27:54 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:32:02.262 15:27:54 keyring_file -- keyring/common.sh@17 -- # name=key0 00:32:02.262 15:27:54 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:32:02.262 15:27:54 keyring_file -- keyring/common.sh@17 -- # digest=0 00:32:02.262 15:27:54 keyring_file -- keyring/common.sh@18 -- # mktemp 00:32:02.262 15:27:54 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.XS39FVSGf4 00:32:02.262 15:27:54 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:32:02.262 15:27:54 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:32:02.262 15:27:54 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:32:02.262 15:27:54 keyring_file -- 
nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:32:02.262 15:27:54 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:32:02.262 15:27:54 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:32:02.262 15:27:54 keyring_file -- nvmf/common.sh@705 -- # python - 00:32:02.262 15:27:54 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.XS39FVSGf4 00:32:02.262 15:27:54 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.XS39FVSGf4 00:32:02.262 15:27:54 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.XS39FVSGf4 00:32:02.262 15:27:54 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.XS39FVSGf4 00:32:02.262 15:27:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.XS39FVSGf4 00:32:02.522 15:27:54 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:02.522 15:27:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:02.522 nvme0n1 00:32:02.783 15:27:54 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:32:02.783 15:27:54 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:02.783 15:27:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:02.783 15:27:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:02.783 15:27:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:02.783 15:27:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 
00:32:02.783 15:27:54 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:32:02.783 15:27:54 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:32:02.783 15:27:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:32:03.045 15:27:55 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:32:03.045 15:27:55 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:32:03.045 15:27:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:03.045 15:27:55 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:03.045 15:27:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:03.045 15:27:55 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:32:03.045 15:27:55 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:32:03.045 15:27:55 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:03.045 15:27:55 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:03.045 15:27:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:03.045 15:27:55 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:03.045 15:27:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:03.305 15:27:55 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:32:03.305 15:27:55 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:32:03.305 15:27:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:32:03.567 15:27:55 keyring_file -- keyring/file.sh@104 -- # bperf_cmd 
keyring_get_keys 00:32:03.567 15:27:55 keyring_file -- keyring/file.sh@104 -- # jq length 00:32:03.567 15:27:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:03.567 15:27:55 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:32:03.567 15:27:55 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.XS39FVSGf4 00:32:03.567 15:27:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.XS39FVSGf4 00:32:03.827 15:27:55 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.QAOeyJENmk 00:32:03.828 15:27:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.QAOeyJENmk 00:32:03.828 15:27:55 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:03.828 15:27:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:04.089 nvme0n1 00:32:04.089 15:27:56 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:32:04.089 15:27:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:32:04.350 15:27:56 keyring_file -- keyring/file.sh@112 -- # config='{ 00:32:04.350 "subsystems": [ 00:32:04.350 { 00:32:04.350 "subsystem": "keyring", 00:32:04.350 "config": [ 00:32:04.350 { 00:32:04.350 "method": "keyring_file_add_key", 00:32:04.350 
"params": { 00:32:04.350 "name": "key0", 00:32:04.350 "path": "/tmp/tmp.XS39FVSGf4" 00:32:04.350 } 00:32:04.350 }, 00:32:04.350 { 00:32:04.350 "method": "keyring_file_add_key", 00:32:04.350 "params": { 00:32:04.350 "name": "key1", 00:32:04.350 "path": "/tmp/tmp.QAOeyJENmk" 00:32:04.350 } 00:32:04.350 } 00:32:04.350 ] 00:32:04.350 }, 00:32:04.350 { 00:32:04.350 "subsystem": "iobuf", 00:32:04.350 "config": [ 00:32:04.350 { 00:32:04.350 "method": "iobuf_set_options", 00:32:04.350 "params": { 00:32:04.350 "small_pool_count": 8192, 00:32:04.350 "large_pool_count": 1024, 00:32:04.350 "small_bufsize": 8192, 00:32:04.350 "large_bufsize": 135168 00:32:04.350 } 00:32:04.350 } 00:32:04.350 ] 00:32:04.350 }, 00:32:04.350 { 00:32:04.350 "subsystem": "sock", 00:32:04.350 "config": [ 00:32:04.350 { 00:32:04.350 "method": "sock_set_default_impl", 00:32:04.350 "params": { 00:32:04.350 "impl_name": "posix" 00:32:04.350 } 00:32:04.350 }, 00:32:04.350 { 00:32:04.350 "method": "sock_impl_set_options", 00:32:04.350 "params": { 00:32:04.350 "impl_name": "ssl", 00:32:04.350 "recv_buf_size": 4096, 00:32:04.350 "send_buf_size": 4096, 00:32:04.350 "enable_recv_pipe": true, 00:32:04.350 "enable_quickack": false, 00:32:04.350 "enable_placement_id": 0, 00:32:04.350 "enable_zerocopy_send_server": true, 00:32:04.350 "enable_zerocopy_send_client": false, 00:32:04.350 "zerocopy_threshold": 0, 00:32:04.350 "tls_version": 0, 00:32:04.350 "enable_ktls": false 00:32:04.350 } 00:32:04.350 }, 00:32:04.350 { 00:32:04.350 "method": "sock_impl_set_options", 00:32:04.350 "params": { 00:32:04.350 "impl_name": "posix", 00:32:04.350 "recv_buf_size": 2097152, 00:32:04.350 "send_buf_size": 2097152, 00:32:04.350 "enable_recv_pipe": true, 00:32:04.350 "enable_quickack": false, 00:32:04.350 "enable_placement_id": 0, 00:32:04.350 "enable_zerocopy_send_server": true, 00:32:04.350 "enable_zerocopy_send_client": false, 00:32:04.350 "zerocopy_threshold": 0, 00:32:04.350 "tls_version": 0, 00:32:04.350 "enable_ktls": false 
00:32:04.350 } 00:32:04.350 } 00:32:04.350 ] 00:32:04.350 }, 00:32:04.350 { 00:32:04.350 "subsystem": "vmd", 00:32:04.350 "config": [] 00:32:04.350 }, 00:32:04.350 { 00:32:04.350 "subsystem": "accel", 00:32:04.350 "config": [ 00:32:04.350 { 00:32:04.350 "method": "accel_set_options", 00:32:04.350 "params": { 00:32:04.350 "small_cache_size": 128, 00:32:04.350 "large_cache_size": 16, 00:32:04.351 "task_count": 2048, 00:32:04.351 "sequence_count": 2048, 00:32:04.351 "buf_count": 2048 00:32:04.351 } 00:32:04.351 } 00:32:04.351 ] 00:32:04.351 }, 00:32:04.351 { 00:32:04.351 "subsystem": "bdev", 00:32:04.351 "config": [ 00:32:04.351 { 00:32:04.351 "method": "bdev_set_options", 00:32:04.351 "params": { 00:32:04.351 "bdev_io_pool_size": 65535, 00:32:04.351 "bdev_io_cache_size": 256, 00:32:04.351 "bdev_auto_examine": true, 00:32:04.351 "iobuf_small_cache_size": 128, 00:32:04.351 "iobuf_large_cache_size": 16 00:32:04.351 } 00:32:04.351 }, 00:32:04.351 { 00:32:04.351 "method": "bdev_raid_set_options", 00:32:04.351 "params": { 00:32:04.351 "process_window_size_kb": 1024, 00:32:04.351 "process_max_bandwidth_mb_sec": 0 00:32:04.351 } 00:32:04.351 }, 00:32:04.351 { 00:32:04.351 "method": "bdev_iscsi_set_options", 00:32:04.351 "params": { 00:32:04.351 "timeout_sec": 30 00:32:04.351 } 00:32:04.351 }, 00:32:04.351 { 00:32:04.351 "method": "bdev_nvme_set_options", 00:32:04.351 "params": { 00:32:04.351 "action_on_timeout": "none", 00:32:04.351 "timeout_us": 0, 00:32:04.351 "timeout_admin_us": 0, 00:32:04.351 "keep_alive_timeout_ms": 10000, 00:32:04.351 "arbitration_burst": 0, 00:32:04.351 "low_priority_weight": 0, 00:32:04.351 "medium_priority_weight": 0, 00:32:04.351 "high_priority_weight": 0, 00:32:04.351 "nvme_adminq_poll_period_us": 10000, 00:32:04.351 "nvme_ioq_poll_period_us": 0, 00:32:04.351 "io_queue_requests": 512, 00:32:04.351 "delay_cmd_submit": true, 00:32:04.351 "transport_retry_count": 4, 00:32:04.351 "bdev_retry_count": 3, 00:32:04.351 "transport_ack_timeout": 0, 
00:32:04.351 "ctrlr_loss_timeout_sec": 0, 00:32:04.351 "reconnect_delay_sec": 0, 00:32:04.351 "fast_io_fail_timeout_sec": 0, 00:32:04.351 "disable_auto_failback": false, 00:32:04.351 "generate_uuids": false, 00:32:04.351 "transport_tos": 0, 00:32:04.351 "nvme_error_stat": false, 00:32:04.351 "rdma_srq_size": 0, 00:32:04.351 "io_path_stat": false, 00:32:04.351 "allow_accel_sequence": false, 00:32:04.351 "rdma_max_cq_size": 0, 00:32:04.351 "rdma_cm_event_timeout_ms": 0, 00:32:04.351 "dhchap_digests": [ 00:32:04.351 "sha256", 00:32:04.351 "sha384", 00:32:04.351 "sha512" 00:32:04.351 ], 00:32:04.351 "dhchap_dhgroups": [ 00:32:04.351 "null", 00:32:04.351 "ffdhe2048", 00:32:04.351 "ffdhe3072", 00:32:04.351 "ffdhe4096", 00:32:04.351 "ffdhe6144", 00:32:04.351 "ffdhe8192" 00:32:04.351 ] 00:32:04.351 } 00:32:04.351 }, 00:32:04.351 { 00:32:04.351 "method": "bdev_nvme_attach_controller", 00:32:04.351 "params": { 00:32:04.351 "name": "nvme0", 00:32:04.351 "trtype": "TCP", 00:32:04.351 "adrfam": "IPv4", 00:32:04.351 "traddr": "127.0.0.1", 00:32:04.351 "trsvcid": "4420", 00:32:04.351 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:04.351 "prchk_reftag": false, 00:32:04.351 "prchk_guard": false, 00:32:04.351 "ctrlr_loss_timeout_sec": 0, 00:32:04.351 "reconnect_delay_sec": 0, 00:32:04.351 "fast_io_fail_timeout_sec": 0, 00:32:04.351 "psk": "key0", 00:32:04.351 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:04.351 "hdgst": false, 00:32:04.351 "ddgst": false 00:32:04.351 } 00:32:04.351 }, 00:32:04.351 { 00:32:04.351 "method": "bdev_nvme_set_hotplug", 00:32:04.351 "params": { 00:32:04.351 "period_us": 100000, 00:32:04.351 "enable": false 00:32:04.351 } 00:32:04.351 }, 00:32:04.351 { 00:32:04.351 "method": "bdev_wait_for_examine" 00:32:04.351 } 00:32:04.351 ] 00:32:04.351 }, 00:32:04.351 { 00:32:04.351 "subsystem": "nbd", 00:32:04.351 "config": [] 00:32:04.351 } 00:32:04.351 ] 00:32:04.351 }' 00:32:04.351 15:27:56 keyring_file -- keyring/file.sh@114 -- # killprocess 481862 00:32:04.351 
15:27:56 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 481862 ']' 00:32:04.351 15:27:56 keyring_file -- common/autotest_common.sh@954 -- # kill -0 481862 00:32:04.351 15:27:56 keyring_file -- common/autotest_common.sh@955 -- # uname 00:32:04.351 15:27:56 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:04.351 15:27:56 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 481862 00:32:04.351 15:27:56 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:32:04.351 15:27:56 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:32:04.351 15:27:56 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 481862' 00:32:04.351 killing process with pid 481862 00:32:04.351 15:27:56 keyring_file -- common/autotest_common.sh@969 -- # kill 481862 00:32:04.351 Received shutdown signal, test time was about 1.000000 seconds 00:32:04.351 00:32:04.351 Latency(us) 00:32:04.351 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:04.351 =================================================================================================================== 00:32:04.351 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:04.351 15:27:56 keyring_file -- common/autotest_common.sh@974 -- # wait 481862 00:32:04.637 15:27:56 keyring_file -- keyring/file.sh@117 -- # bperfpid=483381 00:32:04.637 15:27:56 keyring_file -- keyring/file.sh@119 -- # waitforlisten 483381 /var/tmp/bperf.sock 00:32:04.637 15:27:56 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 483381 ']' 00:32:04.637 15:27:56 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:04.637 15:27:56 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:04.637 15:27:56 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 
2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:32:04.637 15:27:56 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:04.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:04.637 15:27:56 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:04.637 15:27:56 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:04.637 15:27:56 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:32:04.637 "subsystems": [ 00:32:04.637 { 00:32:04.637 "subsystem": "keyring", 00:32:04.637 "config": [ 00:32:04.637 { 00:32:04.637 "method": "keyring_file_add_key", 00:32:04.637 "params": { 00:32:04.637 "name": "key0", 00:32:04.637 "path": "/tmp/tmp.XS39FVSGf4" 00:32:04.637 } 00:32:04.637 }, 00:32:04.637 { 00:32:04.637 "method": "keyring_file_add_key", 00:32:04.637 "params": { 00:32:04.637 "name": "key1", 00:32:04.637 "path": "/tmp/tmp.QAOeyJENmk" 00:32:04.637 } 00:32:04.637 } 00:32:04.637 ] 00:32:04.637 }, 00:32:04.637 { 00:32:04.637 "subsystem": "iobuf", 00:32:04.637 "config": [ 00:32:04.637 { 00:32:04.637 "method": "iobuf_set_options", 00:32:04.637 "params": { 00:32:04.637 "small_pool_count": 8192, 00:32:04.637 "large_pool_count": 1024, 00:32:04.637 "small_bufsize": 8192, 00:32:04.637 "large_bufsize": 135168 00:32:04.637 } 00:32:04.637 } 00:32:04.637 ] 00:32:04.637 }, 00:32:04.637 { 00:32:04.637 "subsystem": "sock", 00:32:04.637 "config": [ 00:32:04.637 { 00:32:04.637 "method": "sock_set_default_impl", 00:32:04.637 "params": { 00:32:04.637 "impl_name": "posix" 00:32:04.637 } 00:32:04.637 }, 00:32:04.637 { 00:32:04.637 "method": "sock_impl_set_options", 00:32:04.637 "params": { 00:32:04.637 "impl_name": "ssl", 00:32:04.637 "recv_buf_size": 4096, 00:32:04.637 "send_buf_size": 4096, 00:32:04.637 "enable_recv_pipe": true, 00:32:04.637 "enable_quickack": false, 00:32:04.637 "enable_placement_id": 0, 00:32:04.637 
"enable_zerocopy_send_server": true, 00:32:04.637 "enable_zerocopy_send_client": false, 00:32:04.637 "zerocopy_threshold": 0, 00:32:04.637 "tls_version": 0, 00:32:04.637 "enable_ktls": false 00:32:04.637 } 00:32:04.637 }, 00:32:04.637 { 00:32:04.637 "method": "sock_impl_set_options", 00:32:04.637 "params": { 00:32:04.637 "impl_name": "posix", 00:32:04.637 "recv_buf_size": 2097152, 00:32:04.637 "send_buf_size": 2097152, 00:32:04.637 "enable_recv_pipe": true, 00:32:04.637 "enable_quickack": false, 00:32:04.637 "enable_placement_id": 0, 00:32:04.637 "enable_zerocopy_send_server": true, 00:32:04.637 "enable_zerocopy_send_client": false, 00:32:04.637 "zerocopy_threshold": 0, 00:32:04.637 "tls_version": 0, 00:32:04.637 "enable_ktls": false 00:32:04.637 } 00:32:04.637 } 00:32:04.637 ] 00:32:04.637 }, 00:32:04.637 { 00:32:04.637 "subsystem": "vmd", 00:32:04.637 "config": [] 00:32:04.637 }, 00:32:04.637 { 00:32:04.637 "subsystem": "accel", 00:32:04.637 "config": [ 00:32:04.637 { 00:32:04.637 "method": "accel_set_options", 00:32:04.637 "params": { 00:32:04.637 "small_cache_size": 128, 00:32:04.637 "large_cache_size": 16, 00:32:04.637 "task_count": 2048, 00:32:04.637 "sequence_count": 2048, 00:32:04.637 "buf_count": 2048 00:32:04.637 } 00:32:04.637 } 00:32:04.637 ] 00:32:04.637 }, 00:32:04.637 { 00:32:04.637 "subsystem": "bdev", 00:32:04.637 "config": [ 00:32:04.637 { 00:32:04.637 "method": "bdev_set_options", 00:32:04.637 "params": { 00:32:04.637 "bdev_io_pool_size": 65535, 00:32:04.637 "bdev_io_cache_size": 256, 00:32:04.637 "bdev_auto_examine": true, 00:32:04.637 "iobuf_small_cache_size": 128, 00:32:04.637 "iobuf_large_cache_size": 16 00:32:04.637 } 00:32:04.637 }, 00:32:04.637 { 00:32:04.637 "method": "bdev_raid_set_options", 00:32:04.637 "params": { 00:32:04.637 "process_window_size_kb": 1024, 00:32:04.637 "process_max_bandwidth_mb_sec": 0 00:32:04.637 } 00:32:04.637 }, 00:32:04.637 { 00:32:04.637 "method": "bdev_iscsi_set_options", 00:32:04.637 "params": { 00:32:04.637 
"timeout_sec": 30 00:32:04.637 } 00:32:04.637 }, 00:32:04.637 { 00:32:04.637 "method": "bdev_nvme_set_options", 00:32:04.637 "params": { 00:32:04.637 "action_on_timeout": "none", 00:32:04.637 "timeout_us": 0, 00:32:04.637 "timeout_admin_us": 0, 00:32:04.637 "keep_alive_timeout_ms": 10000, 00:32:04.637 "arbitration_burst": 0, 00:32:04.637 "low_priority_weight": 0, 00:32:04.637 "medium_priority_weight": 0, 00:32:04.637 "high_priority_weight": 0, 00:32:04.637 "nvme_adminq_poll_period_us": 10000, 00:32:04.637 "nvme_ioq_poll_period_us": 0, 00:32:04.637 "io_queue_requests": 512, 00:32:04.637 "delay_cmd_submit": true, 00:32:04.637 "transport_retry_count": 4, 00:32:04.637 "bdev_retry_count": 3, 00:32:04.637 "transport_ack_timeout": 0, 00:32:04.637 "ctrlr_loss_timeout_sec": 0, 00:32:04.637 "reconnect_delay_sec": 0, 00:32:04.637 "fast_io_fail_timeout_sec": 0, 00:32:04.637 "disable_auto_failback": false, 00:32:04.637 "generate_uuids": false, 00:32:04.637 "transport_tos": 0, 00:32:04.637 "nvme_error_stat": false, 00:32:04.637 "rdma_srq_size": 0, 00:32:04.637 "io_path_stat": false, 00:32:04.637 "allow_accel_sequence": false, 00:32:04.637 "rdma_max_cq_size": 0, 00:32:04.637 "rdma_cm_event_timeout_ms": 0, 00:32:04.637 "dhchap_digests": [ 00:32:04.637 "sha256", 00:32:04.638 "sha384", 00:32:04.638 "sha512" 00:32:04.638 ], 00:32:04.638 "dhchap_dhgroups": [ 00:32:04.638 "null", 00:32:04.638 "ffdhe2048", 00:32:04.638 "ffdhe3072", 00:32:04.638 "ffdhe4096", 00:32:04.638 "ffdhe6144", 00:32:04.638 "ffdhe8192" 00:32:04.638 ] 00:32:04.638 } 00:32:04.638 }, 00:32:04.638 { 00:32:04.638 "method": "bdev_nvme_attach_controller", 00:32:04.638 "params": { 00:32:04.638 "name": "nvme0", 00:32:04.638 "trtype": "TCP", 00:32:04.638 "adrfam": "IPv4", 00:32:04.638 "traddr": "127.0.0.1", 00:32:04.638 "trsvcid": "4420", 00:32:04.638 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:04.638 "prchk_reftag": false, 00:32:04.638 "prchk_guard": false, 00:32:04.638 "ctrlr_loss_timeout_sec": 0, 00:32:04.638 
"reconnect_delay_sec": 0, 00:32:04.638 "fast_io_fail_timeout_sec": 0, 00:32:04.638 "psk": "key0", 00:32:04.638 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:04.638 "hdgst": false, 00:32:04.638 "ddgst": false 00:32:04.638 } 00:32:04.638 }, 00:32:04.638 { 00:32:04.638 "method": "bdev_nvme_set_hotplug", 00:32:04.638 "params": { 00:32:04.638 "period_us": 100000, 00:32:04.638 "enable": false 00:32:04.638 } 00:32:04.638 }, 00:32:04.638 { 00:32:04.638 "method": "bdev_wait_for_examine" 00:32:04.638 } 00:32:04.638 ] 00:32:04.638 }, 00:32:04.638 { 00:32:04.638 "subsystem": "nbd", 00:32:04.638 "config": [] 00:32:04.638 } 00:32:04.638 ] 00:32:04.638 }' 00:32:04.638 [2024-07-25 15:27:56.656875] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:32:04.638 [2024-07-25 15:27:56.656932] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid483381 ] 00:32:04.638 EAL: No free 2048 kB hugepages reported on node 1 00:32:04.638 [2024-07-25 15:27:56.731207] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:04.638 [2024-07-25 15:27:56.784963] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:04.907 [2024-07-25 15:27:56.926474] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:32:05.479 15:27:57 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:05.479 15:27:57 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:32:05.479 15:27:57 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:32:05.479 15:27:57 keyring_file -- keyring/file.sh@120 -- # jq length 00:32:05.479 15:27:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:05.479 
15:27:57 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:32:05.479 15:27:57 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:32:05.479 15:27:57 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:05.479 15:27:57 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:05.479 15:27:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:05.479 15:27:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:05.479 15:27:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:05.740 15:27:57 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:32:05.741 15:27:57 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:32:05.741 15:27:57 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:05.741 15:27:57 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:05.741 15:27:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:05.741 15:27:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:05.741 15:27:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:05.741 15:27:57 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:32:05.741 15:27:57 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:32:05.741 15:27:57 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:32:05.741 15:27:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:32:06.002 15:27:58 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:32:06.002 15:27:58 keyring_file -- keyring/file.sh@1 -- # cleanup 00:32:06.002 15:27:58 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.XS39FVSGf4 
/tmp/tmp.QAOeyJENmk 00:32:06.002 15:27:58 keyring_file -- keyring/file.sh@20 -- # killprocess 483381 00:32:06.002 15:27:58 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 483381 ']' 00:32:06.002 15:27:58 keyring_file -- common/autotest_common.sh@954 -- # kill -0 483381 00:32:06.002 15:27:58 keyring_file -- common/autotest_common.sh@955 -- # uname 00:32:06.002 15:27:58 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:06.002 15:27:58 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 483381 00:32:06.002 15:27:58 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:32:06.002 15:27:58 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:32:06.002 15:27:58 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 483381' 00:32:06.002 killing process with pid 483381 00:32:06.002 15:27:58 keyring_file -- common/autotest_common.sh@969 -- # kill 483381 00:32:06.002 Received shutdown signal, test time was about 1.000000 seconds 00:32:06.002 00:32:06.002 Latency(us) 00:32:06.002 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:06.002 =================================================================================================================== 00:32:06.002 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:32:06.002 15:27:58 keyring_file -- common/autotest_common.sh@974 -- # wait 483381 00:32:06.263 15:27:58 keyring_file -- keyring/file.sh@21 -- # killprocess 481559 00:32:06.263 15:27:58 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 481559 ']' 00:32:06.263 15:27:58 keyring_file -- common/autotest_common.sh@954 -- # kill -0 481559 00:32:06.263 15:27:58 keyring_file -- common/autotest_common.sh@955 -- # uname 00:32:06.263 15:27:58 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:06.263 15:27:58 keyring_file -- common/autotest_common.sh@956 -- # ps 
--no-headers -o comm= 481559 00:32:06.263 15:27:58 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:06.263 15:27:58 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:06.263 15:27:58 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 481559' 00:32:06.263 killing process with pid 481559 00:32:06.263 15:27:58 keyring_file -- common/autotest_common.sh@969 -- # kill 481559 00:32:06.263 [2024-07-25 15:27:58.285607] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:32:06.263 15:27:58 keyring_file -- common/autotest_common.sh@974 -- # wait 481559 00:32:06.525 00:32:06.525 real 0m11.137s 00:32:06.525 user 0m26.590s 00:32:06.525 sys 0m2.376s 00:32:06.525 15:27:58 keyring_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:06.525 15:27:58 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:06.525 ************************************ 00:32:06.525 END TEST keyring_file 00:32:06.525 ************************************ 00:32:06.525 15:27:58 -- spdk/autotest.sh@300 -- # [[ y == y ]] 00:32:06.525 15:27:58 -- spdk/autotest.sh@301 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:32:06.525 15:27:58 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:32:06.525 15:27:58 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:06.525 15:27:58 -- common/autotest_common.sh@10 -- # set +x 00:32:06.525 ************************************ 00:32:06.525 START TEST keyring_linux 00:32:06.525 ************************************ 00:32:06.525 15:27:58 keyring_linux -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:32:06.525 * Looking for test storage... 
00:32:06.525 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:32:06.525 15:27:58 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:32:06.525 15:27:58 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:06.525 15:27:58 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:32:06.525 15:27:58 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:06.525 15:27:58 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:06.525 15:27:58 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:06.526 15:27:58 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:06.526 15:27:58 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:06.526 15:27:58 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:06.526 15:27:58 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:06.526 15:27:58 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:06.526 15:27:58 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:06.526 15:27:58 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:06.526 15:27:58 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:06.526 15:27:58 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:06.526 15:27:58 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:06.526 15:27:58 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:06.526 15:27:58 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:06.526 15:27:58 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:06.526 15:27:58 keyring_linux -- 
nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:06.526 15:27:58 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:06.526 15:27:58 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:06.526 15:27:58 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:06.526 15:27:58 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:06.526 15:27:58 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:06.526 15:27:58 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:06.526 15:27:58 keyring_linux -- paths/export.sh@5 -- # export PATH 00:32:06.526 15:27:58 keyring_linux -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:06.526 15:27:58 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:32:06.526 15:27:58 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:06.526 15:27:58 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:06.526 15:27:58 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:06.526 15:27:58 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:06.526 15:27:58 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:06.526 15:27:58 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:06.526 15:27:58 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:06.526 15:27:58 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:06.526 15:27:58 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:32:06.526 15:27:58 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:32:06.526 15:27:58 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:32:06.526 15:27:58 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:32:06.526 15:27:58 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:32:06.526 15:27:58 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:32:06.526 15:27:58 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:32:06.526 15:27:58 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:32:06.526 15:27:58 keyring_linux -- 
keyring/common.sh@17 -- # name=key0 00:32:06.526 15:27:58 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:32:06.526 15:27:58 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:32:06.526 15:27:58 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:32:06.526 15:27:58 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:32:06.526 15:27:58 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:32:06.526 15:27:58 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:32:06.526 15:27:58 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:32:06.526 15:27:58 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:32:06.526 15:27:58 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:32:06.526 15:27:58 keyring_linux -- nvmf/common.sh@705 -- # python - 00:32:06.788 15:27:58 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:32:06.788 15:27:58 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:32:06.788 /tmp/:spdk-test:key0 00:32:06.788 15:27:58 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:32:06.788 15:27:58 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:32:06.788 15:27:58 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:32:06.788 15:27:58 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:32:06.788 15:27:58 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:32:06.788 15:27:58 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:32:06.788 15:27:58 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:32:06.788 15:27:58 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 
00:32:06.788 15:27:58 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:32:06.788 15:27:58 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:32:06.788 15:27:58 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:32:06.788 15:27:58 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:32:06.788 15:27:58 keyring_linux -- nvmf/common.sh@705 -- # python - 00:32:06.788 15:27:58 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:32:06.788 15:27:58 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:32:06.788 /tmp/:spdk-test:key1 00:32:06.788 15:27:58 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=483867 00:32:06.788 15:27:58 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 483867 00:32:06.788 15:27:58 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:32:06.788 15:27:58 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 483867 ']' 00:32:06.788 15:27:58 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:06.788 15:27:58 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:06.788 15:27:58 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:06.788 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:06.788 15:27:58 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:06.788 15:27:58 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:32:06.788 [2024-07-25 15:27:58.859409] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:32:06.788 [2024-07-25 15:27:58.859481] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid483867 ] 00:32:06.788 EAL: No free 2048 kB hugepages reported on node 1 00:32:06.788 [2024-07-25 15:27:58.925192] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:07.050 [2024-07-25 15:27:59.000150] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:07.623 15:27:59 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:07.623 15:27:59 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:32:07.623 15:27:59 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:32:07.623 15:27:59 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:07.623 15:27:59 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:32:07.623 [2024-07-25 15:27:59.649397] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:07.623 null0 00:32:07.623 [2024-07-25 15:27:59.681453] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:32:07.623 [2024-07-25 15:27:59.681855] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:32:07.623 15:27:59 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:07.623 15:27:59 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:32:07.623 769560114 00:32:07.623 15:27:59 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:32:07.623 180591823 00:32:07.623 15:27:59 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=484143 00:32:07.623 15:27:59 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 484143 
/var/tmp/bperf.sock 00:32:07.623 15:27:59 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:32:07.623 15:27:59 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 484143 ']' 00:32:07.623 15:27:59 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:07.623 15:27:59 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:07.623 15:27:59 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:07.623 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:07.623 15:27:59 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:07.623 15:27:59 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:32:07.623 [2024-07-25 15:27:59.757246] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:32:07.623 [2024-07-25 15:27:59.757294] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid484143 ] 00:32:07.623 EAL: No free 2048 kB hugepages reported on node 1 00:32:07.884 [2024-07-25 15:27:59.829850] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:07.884 [2024-07-25 15:27:59.883479] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:08.457 15:28:00 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:08.457 15:28:00 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:32:08.457 15:28:00 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:32:08.457 15:28:00 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:32:08.719 15:28:00 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:32:08.719 15:28:00 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:08.719 15:28:00 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:32:08.719 15:28:00 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:32:08.981 [2024-07-25 15:28:01.014017] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:32:08.981 
nvme0n1 00:32:08.981 15:28:01 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:32:08.981 15:28:01 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:32:08.981 15:28:01 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:32:08.981 15:28:01 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:32:08.981 15:28:01 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:32:08.981 15:28:01 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:09.242 15:28:01 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:32:09.242 15:28:01 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:32:09.242 15:28:01 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:32:09.242 15:28:01 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:32:09.242 15:28:01 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:09.242 15:28:01 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:09.242 15:28:01 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:32:09.502 15:28:01 keyring_linux -- keyring/linux.sh@25 -- # sn=769560114 00:32:09.502 15:28:01 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:32:09.502 15:28:01 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:32:09.502 15:28:01 keyring_linux -- keyring/linux.sh@26 -- # [[ 769560114 == \7\6\9\5\6\0\1\1\4 ]] 00:32:09.502 15:28:01 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 769560114 00:32:09.502 15:28:01 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == 
\N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:32:09.502 15:28:01 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:09.502 Running I/O for 1 seconds... 00:32:10.447 00:32:10.447 Latency(us) 00:32:10.447 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:10.447 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:32:10.447 nvme0n1 : 1.02 6520.04 25.47 0.00 0.00 19444.39 9065.81 29491.20 00:32:10.447 =================================================================================================================== 00:32:10.447 Total : 6520.04 25.47 0.00 0.00 19444.39 9065.81 29491.20 00:32:10.447 0 00:32:10.447 15:28:02 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:32:10.447 15:28:02 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:32:10.708 15:28:02 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:32:10.708 15:28:02 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:32:10.708 15:28:02 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:32:10.708 15:28:02 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:32:10.708 15:28:02 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:32:10.708 15:28:02 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:10.708 15:28:02 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:32:10.708 15:28:02 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:32:10.708 15:28:02 keyring_linux -- keyring/linux.sh@23 -- # return 00:32:10.708 15:28:02 keyring_linux -- 
keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:32:10.708 15:28:02 keyring_linux -- common/autotest_common.sh@650 -- # local es=0 00:32:10.708 15:28:02 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:32:10.708 15:28:02 keyring_linux -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:32:10.708 15:28:02 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:10.708 15:28:02 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:32:10.708 15:28:02 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:10.708 15:28:02 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:32:10.970 15:28:02 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:32:10.970 [2024-07-25 15:28:03.044307] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:32:10.970 [2024-07-25 15:28:03.044609] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173a0f0 (107): Transport endpoint is not connected 00:32:10.970 [2024-07-25 15:28:03.045605] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush 
tqpair=0x173a0f0 (9): Bad file descriptor 00:32:10.970 [2024-07-25 15:28:03.046606] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:10.970 [2024-07-25 15:28:03.046614] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:32:10.970 [2024-07-25 15:28:03.046620] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:10.970 request: 00:32:10.970 { 00:32:10.970 "name": "nvme0", 00:32:10.970 "trtype": "tcp", 00:32:10.970 "traddr": "127.0.0.1", 00:32:10.970 "adrfam": "ipv4", 00:32:10.970 "trsvcid": "4420", 00:32:10.970 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:10.970 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:10.970 "prchk_reftag": false, 00:32:10.970 "prchk_guard": false, 00:32:10.970 "hdgst": false, 00:32:10.970 "ddgst": false, 00:32:10.970 "psk": ":spdk-test:key1", 00:32:10.970 "method": "bdev_nvme_attach_controller", 00:32:10.970 "req_id": 1 00:32:10.970 } 00:32:10.970 Got JSON-RPC error response 00:32:10.970 response: 00:32:10.970 { 00:32:10.970 "code": -5, 00:32:10.970 "message": "Input/output error" 00:32:10.970 } 00:32:10.970 15:28:03 keyring_linux -- common/autotest_common.sh@653 -- # es=1 00:32:10.970 15:28:03 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:10.970 15:28:03 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:10.970 15:28:03 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:10.970 15:28:03 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:32:10.970 15:28:03 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:32:10.970 15:28:03 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:32:10.970 15:28:03 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:32:10.970 15:28:03 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:32:10.970 15:28:03 keyring_linux -- keyring/linux.sh@16 -- # keyctl 
search @s user :spdk-test:key0 00:32:10.970 15:28:03 keyring_linux -- keyring/linux.sh@33 -- # sn=769560114 00:32:10.970 15:28:03 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 769560114 00:32:10.970 1 links removed 00:32:10.970 15:28:03 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:32:10.970 15:28:03 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:32:10.970 15:28:03 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:32:10.970 15:28:03 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:32:10.970 15:28:03 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:32:10.970 15:28:03 keyring_linux -- keyring/linux.sh@33 -- # sn=180591823 00:32:10.970 15:28:03 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 180591823 00:32:10.970 1 links removed 00:32:10.970 15:28:03 keyring_linux -- keyring/linux.sh@41 -- # killprocess 484143 00:32:10.970 15:28:03 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 484143 ']' 00:32:10.970 15:28:03 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 484143 00:32:10.970 15:28:03 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:32:10.970 15:28:03 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:10.970 15:28:03 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 484143 00:32:10.970 15:28:03 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:32:10.970 15:28:03 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:32:10.970 15:28:03 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 484143' 00:32:10.970 killing process with pid 484143 00:32:10.970 15:28:03 keyring_linux -- common/autotest_common.sh@969 -- # kill 484143 00:32:10.970 Received shutdown signal, test time was about 1.000000 seconds 00:32:10.970 00:32:10.970 Latency(us) 00:32:10.970 Device Information : 
runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:10.970 =================================================================================================================== 00:32:10.970 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:10.970 15:28:03 keyring_linux -- common/autotest_common.sh@974 -- # wait 484143 00:32:11.232 15:28:03 keyring_linux -- keyring/linux.sh@42 -- # killprocess 483867 00:32:11.232 15:28:03 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 483867 ']' 00:32:11.232 15:28:03 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 483867 00:32:11.232 15:28:03 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:32:11.232 15:28:03 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:11.232 15:28:03 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 483867 00:32:11.232 15:28:03 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:11.232 15:28:03 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:11.232 15:28:03 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 483867' 00:32:11.232 killing process with pid 483867 00:32:11.232 15:28:03 keyring_linux -- common/autotest_common.sh@969 -- # kill 483867 00:32:11.232 15:28:03 keyring_linux -- common/autotest_common.sh@974 -- # wait 483867 00:32:11.492 00:32:11.492 real 0m4.936s 00:32:11.492 user 0m8.612s 00:32:11.492 sys 0m1.174s 00:32:11.492 15:28:03 keyring_linux -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:11.492 15:28:03 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:32:11.492 ************************************ 00:32:11.492 END TEST keyring_linux 00:32:11.492 ************************************ 00:32:11.492 15:28:03 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:32:11.492 15:28:03 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:32:11.492 15:28:03 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 
00:32:11.492 15:28:03 -- spdk/autotest.sh@325 -- # '[' 0 -eq 1 ']'
00:32:11.492 15:28:03 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']'
00:32:11.492 15:28:03 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']'
00:32:11.492 15:28:03 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']'
00:32:11.492 15:28:03 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']'
00:32:11.492 15:28:03 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']'
00:32:11.492 15:28:03 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']'
00:32:11.492 15:28:03 -- spdk/autotest.sh@360 -- # '[' 0 -eq 1 ']'
00:32:11.492 15:28:03 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]]
00:32:11.492 15:28:03 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]]
00:32:11.492 15:28:03 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]]
00:32:11.492 15:28:03 -- spdk/autotest.sh@379 -- # [[ 0 -eq 1 ]]
00:32:11.492 15:28:03 -- spdk/autotest.sh@384 -- # trap - SIGINT SIGTERM EXIT
00:32:11.492 15:28:03 -- spdk/autotest.sh@386 -- # timing_enter post_cleanup
00:32:11.492 15:28:03 -- common/autotest_common.sh@724 -- # xtrace_disable
00:32:11.492 15:28:03 -- common/autotest_common.sh@10 -- # set +x
00:32:11.492 15:28:03 -- spdk/autotest.sh@387 -- # autotest_cleanup
00:32:11.492 15:28:03 -- common/autotest_common.sh@1392 -- # local autotest_es=0
00:32:11.492 15:28:03 -- common/autotest_common.sh@1393 -- # xtrace_disable
00:32:11.492 15:28:03 -- common/autotest_common.sh@10 -- # set +x
00:32:19.639 INFO: APP EXITING
00:32:19.639 INFO: killing all VMs
00:32:19.639 INFO: killing vhost app
00:32:19.639 WARN: no vhost pid file found
00:32:19.639 INFO: EXIT DONE
00:32:22.945 0000:80:01.6 (8086 0b00): Already using the ioatdma driver
00:32:22.945 0000:80:01.7 (8086 0b00): Already using the ioatdma driver
00:32:22.945 0000:80:01.4 (8086 0b00): Already using the ioatdma driver
00:32:22.945 0000:80:01.5 (8086 0b00): Already using the ioatdma driver
00:32:22.945 0000:80:01.2 (8086 0b00): Already using the ioatdma driver
00:32:22.945 0000:80:01.3 (8086 0b00): Already using the ioatdma driver
00:32:22.945 0000:80:01.0 (8086 0b00): Already using the ioatdma driver
00:32:22.945 0000:80:01.1 (8086 0b00): Already using the ioatdma driver
00:32:22.945 0000:65:00.0 (144d a80a): Already using the nvme driver
00:32:22.945 0000:00:01.6 (8086 0b00): Already using the ioatdma driver
00:32:22.945 0000:00:01.7 (8086 0b00): Already using the ioatdma driver
00:32:22.945 0000:00:01.4 (8086 0b00): Already using the ioatdma driver
00:32:22.945 0000:00:01.5 (8086 0b00): Already using the ioatdma driver
00:32:22.945 0000:00:01.2 (8086 0b00): Already using the ioatdma driver
00:32:22.945 0000:00:01.3 (8086 0b00): Already using the ioatdma driver
00:32:22.945 0000:00:01.0 (8086 0b00): Already using the ioatdma driver
00:32:22.945 0000:00:01.1 (8086 0b00): Already using the ioatdma driver
00:32:27.155 Cleaning
00:32:27.155 Removing: /var/run/dpdk/spdk0/config
00:32:27.155 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:32:27.155 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:32:27.155 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:32:27.155 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:32:27.155 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0
00:32:27.155 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1
00:32:27.155 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2
00:32:27.155 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3
00:32:27.155 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:32:27.155 Removing: /var/run/dpdk/spdk0/hugepage_info
00:32:27.155 Removing: /var/run/dpdk/spdk1/config
00:32:27.155 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0
00:32:27.155 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1
00:32:27.155 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2
00:32:27.155 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3
00:32:27.155 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0
00:32:27.155 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1
00:32:27.155 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2
00:32:27.155 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3
00:32:27.155 Removing: /var/run/dpdk/spdk1/fbarray_memzone
00:32:27.155 Removing: /var/run/dpdk/spdk1/hugepage_info
00:32:27.155 Removing: /var/run/dpdk/spdk1/mp_socket
00:32:27.155 Removing: /var/run/dpdk/spdk2/config
00:32:27.155 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0
00:32:27.155 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1
00:32:27.155 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2
00:32:27.155 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3
00:32:27.155 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0
00:32:27.155 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1
00:32:27.155 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2
00:32:27.155 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3
00:32:27.155 Removing: /var/run/dpdk/spdk2/fbarray_memzone
00:32:27.155 Removing: /var/run/dpdk/spdk2/hugepage_info
00:32:27.155 Removing: /var/run/dpdk/spdk3/config
00:32:27.155 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0
00:32:27.155 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1
00:32:27.155 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2
00:32:27.155 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3
00:32:27.155 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0
00:32:27.155 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1
00:32:27.155 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2
00:32:27.155 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3
00:32:27.155 Removing: /var/run/dpdk/spdk3/fbarray_memzone
00:32:27.155 Removing: /var/run/dpdk/spdk3/hugepage_info
00:32:27.155 Removing: /var/run/dpdk/spdk4/config
00:32:27.155 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0
00:32:27.155 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1
00:32:27.155 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2
00:32:27.155 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3
00:32:27.155 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0
00:32:27.155 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1
00:32:27.155 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2
00:32:27.155 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3
00:32:27.155 Removing: /var/run/dpdk/spdk4/fbarray_memzone
00:32:27.155 Removing: /var/run/dpdk/spdk4/hugepage_info
00:32:27.155 Removing: /dev/shm/bdev_svc_trace.1
00:32:27.155 Removing: /dev/shm/nvmf_trace.0
00:32:27.155 Removing: /dev/shm/spdk_tgt_trace.pid33651
00:32:27.155 Removing: /var/run/dpdk/spdk0
00:32:27.155 Removing: /var/run/dpdk/spdk1
00:32:27.155 Removing: /var/run/dpdk/spdk2
00:32:27.155 Removing: /var/run/dpdk/spdk3
00:32:27.155 Removing: /var/run/dpdk/spdk4
00:32:27.155 Removing: /var/run/dpdk/spdk_pid105387
00:32:27.155 Removing: /var/run/dpdk/spdk_pid116090
00:32:27.155 Removing: /var/run/dpdk/spdk_pid118442
00:32:27.155 Removing: /var/run/dpdk/spdk_pid119887
00:32:27.155 Removing: /var/run/dpdk/spdk_pid140237
00:32:27.155 Removing: /var/run/dpdk/spdk_pid145023
00:32:27.155 Removing: /var/run/dpdk/spdk_pid198146
00:32:27.155 Removing: /var/run/dpdk/spdk_pid204247
00:32:27.155 Removing: /var/run/dpdk/spdk_pid211586
00:32:27.155 Removing: /var/run/dpdk/spdk_pid218880
00:32:27.155 Removing: /var/run/dpdk/spdk_pid218936
00:32:27.155 Removing: /var/run/dpdk/spdk_pid219936
00:32:27.155 Removing: /var/run/dpdk/spdk_pid220938
00:32:27.155 Removing: /var/run/dpdk/spdk_pid221952
00:32:27.155 Removing: /var/run/dpdk/spdk_pid222622
00:32:27.155 Removing: /var/run/dpdk/spdk_pid222624
00:32:27.155 Removing: /var/run/dpdk/spdk_pid222965
00:32:27.155 Removing: /var/run/dpdk/spdk_pid222971
00:32:27.155 Removing: /var/run/dpdk/spdk_pid222973
00:32:27.155 Removing: /var/run/dpdk/spdk_pid223985
00:32:27.155 Removing: /var/run/dpdk/spdk_pid225092
00:32:27.155 Removing: /var/run/dpdk/spdk_pid226443
00:32:27.155 Removing: /var/run/dpdk/spdk_pid227234
00:32:27.155 Removing: /var/run/dpdk/spdk_pid227343
00:32:27.155 Removing: /var/run/dpdk/spdk_pid227593
00:32:27.155 Removing: /var/run/dpdk/spdk_pid229000
00:32:27.155 Removing: /var/run/dpdk/spdk_pid230394
00:32:27.155 Removing: /var/run/dpdk/spdk_pid240379
00:32:27.155 Removing: /var/run/dpdk/spdk_pid272396
00:32:27.155 Removing: /var/run/dpdk/spdk_pid277536
00:32:27.155 Removing: /var/run/dpdk/spdk_pid279480
00:32:27.155 Removing: /var/run/dpdk/spdk_pid281828
00:32:27.155 Removing: /var/run/dpdk/spdk_pid282012
00:32:27.155 Removing: /var/run/dpdk/spdk_pid282187
00:32:27.155 Removing: /var/run/dpdk/spdk_pid282523
00:32:27.155 Removing: /var/run/dpdk/spdk_pid283226
00:32:27.155 Removing: /var/run/dpdk/spdk_pid285265
00:32:27.156 Removing: /var/run/dpdk/spdk_pid286335
00:32:27.156 Removing: /var/run/dpdk/spdk_pid286872
00:32:27.156 Removing: /var/run/dpdk/spdk_pid289416
00:32:27.156 Removing: /var/run/dpdk/spdk_pid290139
00:32:27.156 Removing: /var/run/dpdk/spdk_pid291093
00:32:27.156 Removing: /var/run/dpdk/spdk_pid295888
00:32:27.156 Removing: /var/run/dpdk/spdk_pid307995
00:32:27.156 Removing: /var/run/dpdk/spdk_pid312734
00:32:27.156 Removing: /var/run/dpdk/spdk_pid320395
00:32:27.156 Removing: /var/run/dpdk/spdk_pid32121
00:32:27.156 Removing: /var/run/dpdk/spdk_pid322015
00:32:27.156 Removing: /var/run/dpdk/spdk_pid323738
00:32:27.156 Removing: /var/run/dpdk/spdk_pid328990
00:32:27.156 Removing: /var/run/dpdk/spdk_pid333886
00:32:27.156 Removing: /var/run/dpdk/spdk_pid33651
00:32:27.156 Removing: /var/run/dpdk/spdk_pid34189
00:32:27.156 Removing: /var/run/dpdk/spdk_pid342830
00:32:27.156 Removing: /var/run/dpdk/spdk_pid342946
00:32:27.156 Removing: /var/run/dpdk/spdk_pid347733
00:32:27.156 Removing: /var/run/dpdk/spdk_pid348008
00:32:27.156 Removing: /var/run/dpdk/spdk_pid348334
00:32:27.156 Removing: /var/run/dpdk/spdk_pid348772
00:32:27.156 Removing: /var/run/dpdk/spdk_pid348877
00:32:27.156 Removing: /var/run/dpdk/spdk_pid35360
00:32:27.156 Removing: /var/run/dpdk/spdk_pid354314
00:32:27.156 Removing: /var/run/dpdk/spdk_pid354882
00:32:27.156 Removing: /var/run/dpdk/spdk_pid35556
00:32:27.156 Removing: /var/run/dpdk/spdk_pid360188
00:32:27.156 Removing: /var/run/dpdk/spdk_pid363400
00:32:27.156 Removing: /var/run/dpdk/spdk_pid36758
00:32:27.156 Removing: /var/run/dpdk/spdk_pid36954
00:32:27.156 Removing: /var/run/dpdk/spdk_pid370338
00:32:27.156 Removing: /var/run/dpdk/spdk_pid37243
00:32:27.156 Removing: /var/run/dpdk/spdk_pid376889
00:32:27.156 Removing: /var/run/dpdk/spdk_pid38207
00:32:27.156 Removing: /var/run/dpdk/spdk_pid386719
00:32:27.156 Removing: /var/run/dpdk/spdk_pid38986
00:32:27.156 Removing: /var/run/dpdk/spdk_pid39345
00:32:27.156 Removing: /var/run/dpdk/spdk_pid395118
00:32:27.156 Removing: /var/run/dpdk/spdk_pid395120
00:32:27.156 Removing: /var/run/dpdk/spdk_pid39617
00:32:27.156 Removing: /var/run/dpdk/spdk_pid39905
00:32:27.156 Removing: /var/run/dpdk/spdk_pid40237
00:32:27.156 Removing: /var/run/dpdk/spdk_pid40594
00:32:27.156 Removing: /var/run/dpdk/spdk_pid40942
00:32:27.156 Removing: /var/run/dpdk/spdk_pid41196
00:32:27.156 Removing: /var/run/dpdk/spdk_pid417363
00:32:27.156 Removing: /var/run/dpdk/spdk_pid418133
00:32:27.156 Removing: /var/run/dpdk/spdk_pid418898
00:32:27.156 Removing: /var/run/dpdk/spdk_pid419686
00:32:27.156 Removing: /var/run/dpdk/spdk_pid420492
00:32:27.156 Removing: /var/run/dpdk/spdk_pid421174
00:32:27.156 Removing: /var/run/dpdk/spdk_pid422338
00:32:27.156 Removing: /var/run/dpdk/spdk_pid423091
00:32:27.156 Removing: /var/run/dpdk/spdk_pid42386
00:32:27.156 Removing: /var/run/dpdk/spdk_pid428174
00:32:27.417 Removing: /var/run/dpdk/spdk_pid428419
00:32:27.417 Removing: /var/run/dpdk/spdk_pid435643
00:32:27.417 Removing: /var/run/dpdk/spdk_pid435789
00:32:27.417 Removing: /var/run/dpdk/spdk_pid438478
00:32:27.417 Removing: /var/run/dpdk/spdk_pid445723
00:32:27.417 Removing: /var/run/dpdk/spdk_pid445734
00:32:27.417 Removing: /var/run/dpdk/spdk_pid451532
00:32:27.417 Removing: /var/run/dpdk/spdk_pid453799
00:32:27.417 Removing: /var/run/dpdk/spdk_pid456138
00:32:27.417 Removing: /var/run/dpdk/spdk_pid45649
00:32:27.417 Removing: /var/run/dpdk/spdk_pid457506
00:32:27.417 Removing: /var/run/dpdk/spdk_pid459861
00:32:27.417 Removing: /var/run/dpdk/spdk_pid46009
00:32:27.417 Removing: /var/run/dpdk/spdk_pid461231
00:32:27.417 Removing: /var/run/dpdk/spdk_pid46371
00:32:27.417 Removing: /var/run/dpdk/spdk_pid46505
00:32:27.417 Removing: /var/run/dpdk/spdk_pid46974
00:32:27.417 Removing: /var/run/dpdk/spdk_pid47097
00:32:27.417 Removing: /var/run/dpdk/spdk_pid471303
00:32:27.417 Removing: /var/run/dpdk/spdk_pid472398
00:32:27.417 Removing: /var/run/dpdk/spdk_pid473062
00:32:27.417 Removing: /var/run/dpdk/spdk_pid47498
00:32:27.417 Removing: /var/run/dpdk/spdk_pid475799
00:32:27.417 Removing: /var/run/dpdk/spdk_pid476356
00:32:27.417 Removing: /var/run/dpdk/spdk_pid477024
00:32:27.417 Removing: /var/run/dpdk/spdk_pid47801
00:32:27.417 Removing: /var/run/dpdk/spdk_pid48111
00:32:27.417 Removing: /var/run/dpdk/spdk_pid481559
00:32:27.417 Removing: /var/run/dpdk/spdk_pid48181
00:32:27.417 Removing: /var/run/dpdk/spdk_pid481862
00:32:27.417 Removing: /var/run/dpdk/spdk_pid483381
00:32:27.417 Removing: /var/run/dpdk/spdk_pid483867
00:32:27.417 Removing: /var/run/dpdk/spdk_pid484143
00:32:27.417 Removing: /var/run/dpdk/spdk_pid48535
00:32:27.417 Removing: /var/run/dpdk/spdk_pid48556
00:32:27.417 Removing: /var/run/dpdk/spdk_pid49021
00:32:27.417 Removing: /var/run/dpdk/spdk_pid49343
00:32:27.417 Removing: /var/run/dpdk/spdk_pid49732
00:32:27.417 Removing: /var/run/dpdk/spdk_pid54212
00:32:27.417 Removing: /var/run/dpdk/spdk_pid59262
00:32:27.417 Removing: /var/run/dpdk/spdk_pid71758
00:32:27.417 Removing: /var/run/dpdk/spdk_pid72582
00:32:27.417 Removing: /var/run/dpdk/spdk_pid77661
00:32:27.417 Removing: /var/run/dpdk/spdk_pid78015
00:32:27.417 Removing: /var/run/dpdk/spdk_pid83059
00:32:27.417 Removing: /var/run/dpdk/spdk_pid89823
00:32:27.417 Removing: /var/run/dpdk/spdk_pid92902
00:32:27.417 Clean
00:32:27.679 15:28:19 -- common/autotest_common.sh@1451 -- # return 0
00:32:27.679 15:28:19 -- spdk/autotest.sh@388 -- # timing_exit post_cleanup
00:32:27.679 15:28:19 -- common/autotest_common.sh@730 -- # xtrace_disable
00:32:27.679 15:28:19 -- common/autotest_common.sh@10 -- # set +x
00:32:27.679 15:28:19 -- spdk/autotest.sh@390 -- # timing_exit autotest
00:32:27.679 15:28:19 -- common/autotest_common.sh@730 -- # xtrace_disable
00:32:27.679 15:28:19 -- common/autotest_common.sh@10 -- # set +x
00:32:27.679 15:28:19 -- spdk/autotest.sh@391 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:32:27.679 15:28:19 -- spdk/autotest.sh@393 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]]
00:32:27.679 15:28:19 -- spdk/autotest.sh@393 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log
00:32:27.679 15:28:19 -- spdk/autotest.sh@395 -- # hash lcov
00:32:27.679 15:28:19 -- spdk/autotest.sh@395 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]]
00:32:27.679 15:28:19 -- spdk/autotest.sh@397 -- # hostname
00:32:27.679 15:28:19 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-cyp-09 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
00:32:27.979 geninfo: WARNING: invalid characters removed from testname!
00:32:54.571 15:28:43 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:32:54.571 15:28:46 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:32:57.116 15:28:48 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:32:58.501 15:28:50 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:32:59.886 15:28:51 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:33:01.803 15:28:53 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:33:03.190 15:28:55 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:33:03.190 15:28:55 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:33:03.190 15:28:55 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:33:03.190 15:28:55 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:33:03.190 15:28:55 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:33:03.190 15:28:55 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:33:03.190 15:28:55 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:33:03.190 15:28:55 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:33:03.190 15:28:55 -- paths/export.sh@5 -- $ export PATH
00:33:03.190 15:28:55 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:33:03.190 15:28:55 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:33:03.190 15:28:55 -- common/autobuild_common.sh@447 -- $ date +%s
00:33:03.190 15:28:55 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721914135.XXXXXX
00:33:03.190 15:28:55 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721914135.oiBIp9
00:33:03.190 15:28:55 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]]
00:33:03.190 15:28:55 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']'
00:33:03.190 15:28:55 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:33:03.190 15:28:55 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:33:03.190 15:28:55 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:33:03.190 15:28:55 -- common/autobuild_common.sh@463 -- $ get_config_params
00:33:03.190 15:28:55 -- common/autotest_common.sh@398 -- $ xtrace_disable
00:33:03.190 15:28:55 -- common/autotest_common.sh@10 -- $ set +x
00:33:03.190 15:28:55 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:33:03.190 15:28:55 -- common/autobuild_common.sh@465 -- $ start_monitor_resources
00:33:03.190 15:28:55 -- pm/common@17 -- $ local monitor
00:33:03.190 15:28:55 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:33:03.190 15:28:55 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:33:03.190 15:28:55 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:33:03.190 15:28:55 -- pm/common@21 -- $ date +%s
00:33:03.190 15:28:55 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:33:03.190 15:28:55 -- pm/common@25 -- $ sleep 1
00:33:03.190 15:28:55 -- pm/common@21 -- $ date +%s
00:33:03.190 15:28:55 -- pm/common@21 -- $ date +%s
00:33:03.190 15:28:55 -- pm/common@21 -- $ date +%s
00:33:03.190 15:28:55 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721914135
00:33:03.190 15:28:55 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721914135
00:33:03.190 15:28:55 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721914135
00:33:03.190 15:28:55 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721914135
00:33:03.190 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721914135_collect-vmstat.pm.log
00:33:03.190 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721914135_collect-cpu-load.pm.log
00:33:03.190 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721914135_collect-cpu-temp.pm.log
00:33:03.190 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721914135_collect-bmc-pm.bmc.pm.log
00:33:04.134 15:28:56 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT
00:33:04.134 15:28:56 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j144
00:33:04.134 15:28:56 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:33:04.134 15:28:56 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:33:04.134 15:28:56 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:33:04.134 15:28:56 -- spdk/autopackage.sh@19 -- $ timing_finish
00:33:04.134 15:28:56 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:33:04.134 15:28:56 -- common/autotest_common.sh@737 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:33:04.134 15:28:56 -- common/autotest_common.sh@739 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:33:04.134 15:28:56 -- spdk/autopackage.sh@20 -- $ exit 0
00:33:04.134 15:28:56 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:33:04.134 15:28:56 -- pm/common@29 -- $ signal_monitor_resources TERM
00:33:04.134 15:28:56 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:33:04.134 15:28:56 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:33:04.134 15:28:56 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:33:04.134 15:28:56 -- pm/common@44 -- $ pid=496539
00:33:04.134 15:28:56 -- pm/common@50 -- $ kill -TERM 496539
00:33:04.134 15:28:56 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:33:04.134 15:28:56 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:33:04.134 15:28:56 -- pm/common@44 -- $ pid=496540
00:33:04.134 15:28:56 -- pm/common@50 -- $ kill -TERM 496540
00:33:04.134 15:28:56 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:33:04.134 15:28:56 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:33:04.134 15:28:56 -- pm/common@44 -- $ pid=496542
00:33:04.134 15:28:56 -- pm/common@50 -- $ kill -TERM 496542
00:33:04.134 15:28:56 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:33:04.134 15:28:56 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:33:04.134 15:28:56 -- pm/common@44 -- $ pid=496566
00:33:04.134 15:28:56 -- pm/common@50 -- $ sudo -E kill -TERM 496566
00:33:04.134 + [[ -n 4104996 ]]
00:33:04.134 + sudo kill 4104996
00:33:04.408 [Pipeline] }
00:33:04.426 [Pipeline] // stage
00:33:04.432 [Pipeline] }
00:33:04.450 [Pipeline] // timeout
00:33:04.457 [Pipeline] }
00:33:04.474 [Pipeline] // catchError
00:33:04.479 [Pipeline] }
00:33:04.497 [Pipeline] // wrap
00:33:04.503 [Pipeline] }
00:33:04.518 [Pipeline] // catchError
00:33:04.528 [Pipeline] stage
00:33:04.530 [Pipeline] { (Epilogue)
00:33:04.545 [Pipeline] catchError
00:33:04.547 [Pipeline] {
00:33:04.560 [Pipeline] echo
00:33:04.562 Cleanup processes
00:33:04.568 [Pipeline] sh
00:33:04.858 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:33:04.858 496654 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache
00:33:04.858 497088 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:33:04.874 [Pipeline] sh
00:33:05.232 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:33:05.232 ++ grep -v 'sudo pgrep'
00:33:05.232 ++ awk '{print $1}'
00:33:05.232 + sudo kill -9 496654
00:33:05.245 [Pipeline] sh
00:33:05.532 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:33:17.785 [Pipeline] sh
00:33:18.071 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:33:18.071 Artifacts sizes are good
00:33:18.086 [Pipeline] archiveArtifacts
00:33:18.093 Archiving artifacts
00:33:18.285 [Pipeline] sh
00:33:18.571 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:33:18.587 [Pipeline] cleanWs
00:33:18.598 [WS-CLEANUP] Deleting project workspace...
00:33:18.598 [WS-CLEANUP] Deferred wipeout is used...
00:33:18.606 [WS-CLEANUP] done
00:33:18.609 [Pipeline] }
00:33:18.630 [Pipeline] // catchError
00:33:18.643 [Pipeline] sh
00:33:19.025 + logger -p user.info -t JENKINS-CI
00:33:19.036 [Pipeline] }
00:33:19.052 [Pipeline] // stage
00:33:19.058 [Pipeline] }
00:33:19.075 [Pipeline] // node
00:33:19.080 [Pipeline] End of Pipeline
00:33:19.117 Finished: SUCCESS